
Companies are already using agentic AI to make decisions, but governance is lagging behind

Companies are moving quickly to adopt agentic AI – artificial intelligence systems that work without human guidance – but are much slower to set up governance to oversee it, a new study shows. This mismatch is a major source of risk in AI adoption. I think it is also a business opportunity.

I am a professor of management information systems at Drexel University’s LeBow College of Business. Through LeBow’s Center for Applied AI & Business Analytics, we recently surveyed more than 500 data professionals. We found that 41% of organizations use agentic AI in their daily operations. These are not just pilot projects or one-off tests; they are part of regular workflows.

At the same time, oversight is lagging behind. Only 27% of organizations say their governance frameworks are mature enough to effectively monitor and manage these systems.

In this context, governance does not mean regulation or unnecessary rules. It means having policies and practices that give people clear influence over how autonomous systems operate, including who is responsible for decisions, how behavior is controlled, and when people should be involved.

This mismatch becomes a problem when autonomous systems act in real-world situations before anyone can intervene.

For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave ‘as designed’, unexpected circumstances can lead to undesirable outcomes.

This raises a big question: when something goes wrong with AI, who is responsible – and who can intervene?


Why governance matters

When AI systems act on their own, accountability no longer lies where organizations expect it to. Decisions are still being made, but ownership is harder to trace. In financial services, for example, fraud detection systems increasingly work in real time to block suspicious activity before a human ever reviews the case. Customers often only find out when their card is declined.

What should happen if your card is mistakenly declined by an AI system? In that situation, the problem is not the technology itself – it works as designed – but responsibility. Research into human-AI governance shows that problems arise when organizations do not clearly define how humans and autonomous systems should work together. This lack of clarity makes it difficult to know who is responsible and when to intervene.

Without autonomy-oriented governance, small problems can quietly snowball. Monitoring becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

When humans enter the loop too late

In many organizations, people are technically ‘in the loop’, but only after autonomous systems have already acted. People tend to get involved once a problem becomes apparent – when a price seems wrong, a transaction is flagged or a customer complains. By that point, the system has already decided, and human judgment becomes corrective rather than supervisory.

Late intervention can limit the consequences of individual decisions, but it rarely clarifies who is responsible. The results can be corrected, but accountability remains unclear.


Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but its timing. Without governance designed in advance, people act as a safety valve rather than as accountable decision makers.

How governance determines who moves forward

Agentic AI often produces fast, early results, especially when tasks are first automated. Our research shows that many companies are seeing these early benefits. But as autonomous systems grow, organizations often add manual controls and approval steps to manage risk.

Over time, what was once simple slowly becomes more complicated. Decision making slows, workarounds multiply, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trusted the autonomous systems.

This delay does not have to happen. Our research shows a clear difference: many organizations see early benefits from autonomous AI, but organizations with stronger governance are much more likely to translate these benefits into long-term results, such as greater efficiency and revenue growth. The main difference is not ambition or technical skills, but preparation.

Good governance does not limit autonomy. It makes autonomy workable by clarifying who owns decisions, how systems are monitored and when people should intervene. International guidelines from the OECD – the Organization for Economic Co-operation and Development – emphasize this point: accountability and human oversight should be integrated into AI systems from the start, not added later.

Rather than slowing innovation, governance builds the trust organizations need to expand, rather than quietly withdraw, their autonomy.


The next advantage is smarter governance

The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight and intervention from the start.

In the age of agentic AI, trust will accrue to the organizations that govern best, not just those that adopt first.

