The Latest AI Adoption Data Shows Rapid Growth. It Also Reveals a Bigger Problem.

AI is already part of how work gets done. But the latest data shows something more important than scale. People aren’t just using AI to complete tasks. They’re using it to think, interpret, and shape decisions. As reliance grows, the question isn’t whether teams are using AI. It’s whether the answers they’re getting are consistent, reliable, and based on the same underlying data. This is where most organisations are now exposed.

The scale alone is enough to get attention: hundreds of millions of people are using these tools regularly. But the more revealing story isn’t how many people are using AI, it’s how they’re using it, because the data challenges some of the assumptions driving the conversation and points to a risk most organisations are only just starting to feel.

This isn’t a rollout. It’s just behaviour now

The study shows that AI adoption isn’t happening through structured, top-down implementation. It’s happening through individual behaviour. People are using AI to solve immediate problems in their day-to-day work and lives. Drafting, summarising, interpreting, exploring. Quietly, independently, and most often without oversight - and that matters.

It matters because AI hasn’t been introduced as a system. It has become part of how work gets done.

The data tells a different story than most expect

One of the most revealing findings is how people are actually interacting with AI.

Nearly half of all interactions fall into what the study describes as ‘asking’ behaviour: people seeking advice, direction, or interpretation. Far fewer are pure task execution, and that’s a shift. AI is not just being used to do work; it’s being used to think things through.

At the same time, only a small proportion of usage is strictly work-related. A large amount is personal, exploratory, or informal.

The productivity narrative on AI misses what’s really happening

Much of the focus on AI adoption has centred on productivity: faster outputs, reduced effort, more efficient execution. The research supports that, to a point. But there’s an assumption underneath it: that individual productivity gains translate into organisational effectiveness.

That only works if the underlying system is aligned. What the data doesn’t capture is what happens when those gains sit on top of fragmented data, inconsistent definitions, and disconnected systems. You don’t get acceleration; you get divergence.

Faster work, but slower decisions. 

AI is shifting from execution to influence

This is the implication that matters most: if AI is being used to ask questions, sense-check thinking, and shape interpretation, it is no longer just producing outputs. It is influencing decisions - and that changes the risk profile entirely.

You are no longer asking whether AI can complete tasks efficiently. You are asking whether the inputs behind its responses are reliable, because that determines the quality of the decisions that follow.

Since then, the trajectory is clearer

The 2025 data showed the early patterns; what has happened since makes the direction easier to see. AI is now embedded in everyday workflows across most organisations, not as a pilot but as an expectation. The tooling layer has accelerated rapidly: copilots, agents, and AI embedded across almost every platform.

At the same time, the data foundations inside organisations have not evolved at the same speed, which means the same patterns identified in the research are now playing out at a far greater scale.

More usage → More reliance → More exposure.

From experimentation to dependency

In 2025, AI was something teams were trying, but now it’s something they depend on.

AI-generated outputs are feeding directly into reports, recommendations, and decisions. At the same time, trust in those outputs has increased faster than verification. As the technology improves, scrutiny often decreases, which creates a quieter, more systemic risk.

AI adoption is no longer the challenge; unstructured reliance is.

The real gap isn’t access. It’s accountability

Most organisations can now say AI is being used internally.

Far fewer can answer:

  • Where those outputs are coming from
  • Whether they are consistent across teams
  • Who is accountable for their accuracy

AI has outpaced governance. Not just in policy, but in day-to-day reality.

What’s changed since the 2025 data

The original research captured the early shape of AI adoption. A year on, the shifts are clearer:

  • From experimentation to expectation
    AI is now assumed to be part of how work gets done

  • From productivity gains to decision influence
    AI is shaping analysis and recommendations, not just speeding up tasks

  • From isolated usage to embedded workflows
    AI is integrated into tools, not used separately

  • From visible inefficiency to invisible risk
    The concern is no longer whether AI works, but whether its outputs can be trusted

  • From access challenges to accountability gaps
    Most organisations have access. Few have control

These are not small changes. They move AI from a productivity tool to an operational dependency.

Where this becomes real

At some point, this stops being about efficiency and starts being about decisions. Board reporting, regulatory submissions, investment calls, operational trade-offs. All the moments where the question isn’t how quickly something was produced. It’s whether it is right. That’s where unstructured AI adoption starts to break down.

The shift that actually matters

The organisations getting ahead here are not the ones pushing AI usage; they’re the ones tightening the layer underneath it: establishing a single, governed view of data, creating shared definitions, controlling how insight is accessed, and making sure that when people ask questions, they are drawing from the same underlying truth. AI then becomes an interface, not a multiplier of risk.
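
To make ‘shared definitions’ concrete, here is a minimal sketch in Python of what a governed metric layer can look like. It is an illustration only, not Configur’s implementation or any particular product’s API; the metric, SQL, and owner address are hypothetical. The point is simply that every consumer, whether a dashboard, a report, or the context handed to an AI assistant, reads from one definition instead of deriving its own.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MetricDefinition:
        """A single, governed definition of a business metric."""
        name: str
        description: str
        sql: str    # the one agreed calculation
        owner: str  # who is accountable for its accuracy

    # The governed layer: the only place this definition lives.
    METRICS = {
        "active_customers": MetricDefinition(
            name="active_customers",
            description="Customers with at least one order in the last 90 days.",
            sql=(
                "SELECT COUNT(DISTINCT customer_id) FROM orders "
                "WHERE order_date >= CURRENT_DATE - INTERVAL '90 days'"
            ),
            owner="data-governance@company.example",  # hypothetical owner
        )
    }

    def metric_context(name: str) -> str:
        """Build the grounding context an AI assistant is given, so its answer
        rests on the same definition a dashboard or report would use."""
        m = METRICS[name]
        return (
            f"Metric: {m.name}\n"
            f"Definition: {m.description}\n"
            f"Calculation: {m.sql}\n"
            f"Accountable owner: {m.owner}"
        )

    if __name__ == "__main__":
        # The BI tool and the AI interface both pull from the same source of truth.
        print(metric_context("active_customers"))

Routing AI answers through a layer like this doesn’t make the model smarter; it makes divergence visible and accountable, because every answer can be traced back to a named definition and a named owner.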

The bottom line

The data tells us AI adoption is already widespread, but what’s changed since then is the level of reliance. What hasn’t changed at the same pace is the structure underneath it. Adoption is no longer the differentiator; confidence in the decisions it informs is. Because when your teams turn to AI for answers, the question isn’t how quickly they get a response. It’s whether those answers are coming from the same version of the truth.



Key takeaways
  • AI adoption is happening through behaviour, not structured rollout
  • People use AI to ask and think, not just do
  • Most usage isn’t strictly work-related, despite the enterprise narrative
  • AI is now influencing decisions, not just outputs
  • Reliance has scaled faster than structure
  • The real risk is inconsistent, ungoverned outputs
  • The advantage is shifting from using AI to trusting it

Configur connects the dots between your systems, teams, and obligations, giving you one place to see the full picture, act faster, and stay audit-ready.