What happens when AI stops asking permission?

With AI agents in the spotlight, business leaders need to consider their adoption carefully. How do AI agents differ from other AI applications? BCG experts discuss the new challenges and opportunities.
As companies deploy AI agents with growing autonomy, these systems will soon interact directly with customers and control critical business processes such as adjusting production schedules and engaging with suppliers. Such capabilities transform the impact AI can deliver, but they also create new risks. Organisations must move quickly to implement new governance approaches, technical capabilities, and control-by-design to maintain accountability, control, and trust in AI agents.
A recent paper by researchers at Stanford and Carnegie Mellon universities highlighted the risks. An AI agent was tasked with creating an Excel file from expense receipts but was unable to process the data. To achieve its goal, it fabricated plausible records, complete with invented restaurant names. At scale, fabricated records like these would bring penalties for false accounting, or worse.
This example highlights the central challenge of AI agents: the governance and control issues that already accompany AI are elevated to a new level, for three reasons:
- There is reduced (or no) human supervision.
- Agents are often connected to the organisation’s most important systems with the power to make irreversible, real-world changes.
- Multiple agents may interact to create even more complex systems with difficult-to-predict emergent behaviour patterns.
The challenge is heightened when organisations with successful, narrowly scoped agents take what appears to be a natural next step and give those agents greater autonomy or new capabilities. These upgrades, which may not trigger a comprehensive review, could have dramatic effects.
Organisations therefore need new thinking, in which AI governance includes AI risk management by design, new technical approaches for evaluation, monitoring, and assurance, and robust response plans.
Organisations are pushing ahead with AI adoption: in a global MIT Sloan Management Review/Boston Consulting Group study released in November, just 10% of organisations indicated they had handed decision-making powers to AI, but respondents expected that figure to rise to 35% within three years.
Meanwhile, incidents involving AI increased 21% from 2024 to 2025, according to the AI Incidents Database. As AI deployment continues, the need for risk management is growing in parallel.
The immediate cost of insufficient governance for AI agents is painful and obvious: direct financial loss, damage to customer trust, and even legal or regulatory action. But the long-term cost may be even greater. Without strong governance, companies will lack the confidence to deploy AI agents at scale, thereby missing out on the substantial benefits this remarkable technology can deliver.
The AI agent difference
Executives are beginning to understand that AI agents require a new governance approach. In the MIT Sloan Management Review/Boston Consulting Group executive survey, 69% agreed that “holding agentic AI accountable for its decisions and actions requires new management approaches.”
To build that new approach, however, it is essential to understand how AI agents differ from the co-pilot AI that many organisations have been deploying until now. The key characteristics of an AI agent are that it observes its environment and then, based on this observation, autonomously makes a plan to achieve its defined goal. It then autonomously executes that plan, using tools, APIs, or other systems to influence its environment.
Finally, the AI agent repeats this process in a learning loop until it determines that its goal has been achieved.
In contrast, much of the AI at work in organisations today operates as a co-pilot, responding to human prompts and guidance. In addition, today’s AI typically has a human in the loop who not only checks final decisions, but also shapes how the AI learns, plans, and optimises, providing guardrails along the way.
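To make the distinction concrete, the sketch below shows the observe-plan-act loop described above, with a simple human-approval guardrail on irreversible steps. It is a minimal, hypothetical illustration: the function names, the goal check, and the approval gate are our own assumptions, not the API of any particular agent framework.

```python
# Minimal, hypothetical sketch of an agent's observe-plan-act loop.
# All names and the approval guardrail are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    """The agent's evolving internal model of its environment."""
    observations: list = field(default_factory=list)
    goal: str = "create an expense spreadsheet from receipts"


def observe(state: AgentState) -> None:
    # Stand-in for real sensing (reading files, querying systems, etc.).
    state.observations.append("latest environment snapshot")


def plan(state: AgentState) -> list[str]:
    # A real agent would call an LLM or planner here; this is a stub.
    return ["read receipts", "draft spreadsheet", "write file"]


def act(step: str) -> None:
    # Guardrail: irreversible steps are gated behind a human check.
    if step == "write file":
        if input(f"Approve irreversible step '{step}'? [y/N] ").strip().lower() != "y":
            raise RuntimeError("Human reviewer rejected the action")
    print(f"executing: {step}")


def goal_achieved(state: AgentState) -> bool:
    # Stand-in for a real success check against the agent's goal.
    return len(state.observations) >= 3


state = AgentState()
while not goal_achieved(state):
    observe(state)
    for step in plan(state):
        act(step)
```

Removing the approval gate in `act` is, in effect, what happens when an organisation grants a co-pilot full autonomy: the loop keeps running, but no human sees the irreversible steps before they execute.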
Each of the following properties of an AI agent brings risk:
- Something akin to memory. An AI agent must build and update internal models of the world and retain knowledge across tasks, unlike the static inputs of traditional machine learning systems. This creates new risks if, for instance, the internal state becomes corrupted through poor design, poor sensing, or the actions of a malicious outsider. Worse, a single flawed model can cascade through dependent systems, leading to large-scale operational errors.
- Greater reasoning and decision-making skills. To meet its defined goals, an AI agent needs greater capability for planning and adapting its actions than traditional AI. One risk here is goal drift. Agents may optimise for unintended metrics, for instance, prioritising cost and ignoring safety. To increase throughput of a task, they may focus on the quick-to-solve cases over the more complex – and higher-impact – cases. An outsider can also make the agent misbehave by manipulating its goals.
- Greater action and influence. As companies roll out AI agents, they will be giving them the powers of a super employee – an access-all-areas pass to adjust work schedules, update or delete databases, or even make payments. True, humans also make mistakes, but these can typically be caught and corrected before being repeated too many times. In contrast, an agent trying to meet a goal could replicate the same mistake thousands of times before it can be stopped (see the sketch after this list).
- A decision-making loop. AI agents continuously adapt their behaviour in light of experience, whether in response to changes in their environment or to goal drift within the agent itself. This is the most critical difference for governance: a system that ticks all the boxes at deployment may have evolved significantly within a few days. True, this is a key strength of AI agents: they can optimise in ways no human has imagined. However, it is also a weakness: they can make mistakes no human has imagined.
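One way to limit the blast radius of such repeated mistakes is a circuit breaker that halts the agent and escalates to a human after the same action fails too many times. The sketch below is a minimal, hypothetical illustration; the wrapper design, action names, and failure threshold are assumptions for this example, not a reference implementation.

```python
# Hypothetical "circuit breaker" around agent actions: it stops the agent
# from replicating the same failing action indefinitely and forces escalation.
# The threshold and wrapper API are illustrative assumptions.

from collections import Counter


class CircuitBreaker:
    def __init__(self, max_repeated_failures: int = 3):
        self.max_repeated_failures = max_repeated_failures
        self.failure_counts: Counter[str] = Counter()

    def run(self, action_name: str, action, *args, **kwargs):
        # Refuse to execute once the same action has failed too often.
        if self.failure_counts[action_name] >= self.max_repeated_failures:
            raise RuntimeError(
                f"Circuit open: '{action_name}' failed "
                f"{self.failure_counts[action_name]} times; human review required"
            )
        try:
            result = action(*args, **kwargs)
        except Exception:
            self.failure_counts[action_name] += 1
            raise
        self.failure_counts[action_name] = 0  # reset on success
        return result


# Example usage: wrap a payment call so repeated failures trip the breaker.
breaker = CircuitBreaker(max_repeated_failures=3)


def make_payment(amount: float) -> str:
    raise ValueError("supplier account not found")  # simulated recurring error


for _ in range(5):
    try:
        breaker.run("make_payment", make_payment, 100.0)
    except RuntimeError as exc:
        print(exc)  # breaker tripped: stop and escalate to a human
        break
    except ValueError:
        pass  # the agent would normally retry here
```

Resetting the counter on success keeps the breaker from penalising occasional, recoverable errors while still catching the runaway repetition described above.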
You can read the full article here, where we highlight the new vulnerabilities of AI agents, an improved approach to AI governance (the four components of a risk framework for AI agents), and the six questions CEOs must ask about AI agents.
