AI is already part of how work gets done. But the latest data shows something more important than scale. People aren’t just using AI to complete tasks. They’re using it to think, interpret, and shape decisions. As reliance grows, the question isn’t whether teams are using AI. It’s whether the answers they’re getting are consistent, reliable, and based on the same underlying data. This is where most organisations are now exposed.
Stop Saying Please and Thank You to Your LLMs
Sam Altman's 'polite prompts cost millions' headline made for great debate, but it hid the real story. The cost of AI isn't in your manners, it's in the data foundation beneath.

When Sam Altman, OpenAI’s CEO, revealed last year that politeness in ChatGPT prompts had cost OpenAI tens of millions of dollars in compute (industry shorthand for the processing power and electricity needed to run AI tools at scale), the internet did what the internet does. Fresh debate broke out over whether we should be more clipped with our AI tools, or whether being civil to a machine somehow keeps us civil to each other.
It is a fun debate, but it is also a complete distraction.
The real question isn't whether you're being too polite
If you run a business of any size, the question isn't whether you're being too polite to your large language model (LLM, the type of AI behind tools like ChatGPT, Claude and Gemini). The real question is whether your business is set up to get any meaningful value from AI in the first place. And the honest answer, for a lot of companies, is not yet.
When AI underdelivers, the prompts are rarely the problem
Here is what tends to happen behind the scenes when AI fails to deliver. A company launches a deliberate AI initiative, whether that is a customer service assistant, an internal AI tool trained on company documents, an AI agent (a system designed to take action on the user's behalf, like qualifying leads or drafting outbound emails), or a full Copilot rollout across the business. Expectations are high. The actual output is generic answers, hallucinated facts about their own customers, and insights that tell them nothing they could not have found with a quick search. They blame the AI, the prompts, the team. The truth is usually that none of those things are the real problem. The problem is the data and infrastructure underneath.
The hidden AI sprawl most businesses don't realise they have
There is also a second, increasingly common version of this same problem. Most businesses today are not really running one AI tool; they are running a dozen without realising it. The CRM has AI built in, the email platform has AI built in, and the customer service tool, the document editor, the analytics dashboard, the marketing platform and the meeting recorder all have their own AI assistants or features, each one trained on its own narrow slice of your data. The result is a kind of AI sprawl, where partial answers from one tool contradict partial answers from another, none of them seeing the full picture, and all of them producing output that feels almost right but never quite useful. The underlying issue is the same. Disconnected tools sitting on top of disconnected data will always produce disconnected intelligence, no matter how clever the AI inside each one happens to be.
The kitchen, not the chef
Think of it this way. An LLM without a strong data foundation is like a Michelin-starred chef being asked to cook with whatever happens to be in the back of your fridge. Talented, certainly, and capable of remarkable things, but without the right ingredients prepared properly you end up with a confused omelette rather than a memorable meal. The chef is not the issue. The kitchen is.
What a 'data foundation' actually means
A ‘data foundation’ simply refers to the systems, structures and processes that make your business data accessible, accurate and usable. For most businesses that means having a single source of truth for customer information, well organised and properly tagged documents, integrated tools that talk to each other rather than living in silos, and clear governance rules about who can access what and when. None of it is glamorous, but all of it determines whether your AI investment delivers a return or quietly drains away.
The companies winning with AI aren't the ones with the cleverest prompts
The companies winning with AI right now are not the ones writing the cleverest prompts. They are the ones who have invested in getting their data house in order, so that when an AI tool is pointed at their business it actually has something useful to work with. They have clean records, well organised internal knowledge, and clear rules about how AI can interact with both. The result is an AI that knows your business, speaks your language, and produces output you can actually use.
Different size, same underlying risk
This is not a problem reserved for one type of business. The risk simply takes different shapes depending on size. For a smaller business, a failed AI investment can be genuinely existential, with tight margins leaving little room to absorb wasted spend or a year of lost progress. For a larger organisation, the same underlying problem plays out more quietly but no less expensively, with pilot programmes that never scale, departments running parallel AI projects on top of disconnected data, mounting subscription costs for tools producing mediocre output, and a slow erosion of internal trust in AI before it has had a real chance to prove itself. The companies that move ahead, regardless of size, are the ones who recognise that the shiny tools only ever pay back when the foundations underneath them are right. The good news is that getting there does not require a hundred-person data team. It requires a clear plan, the right partner, and a willingness to invest in the unglamorous work that makes the glamorous work possible.
Save your manners for the humans
So by all means, save your please and thank you for the humans in your life. The LLM does not care either way. While the world has spent the last year debating the cost of being polite to machines, the real cost was hiding in plain sight: the price of pulling old, obsolete or inaccurate data into every AI interaction, and then acting on whatever comes back. Every polite ‘thank you’ might cost a fraction of a penny in compute. Every decision made on broken data costs something far bigger, whether that is a wasted campaign, a missed market shift, a misjudged hire or a strategy built on the wrong picture of your customers.
That is the conversation worth having about AI right now, and it is the one we have with our clients every day at Configur, helping them assess where they stand, what needs strengthening underneath, and how to build the kind of foundation that turns AI from a curiosity into a genuine commercial advantage.
If you are thinking about where AI fits into your business, but suspect the foundations are not quite ready, that is exactly the right place to start.
Ready to find out if your data and AI foundations are built for what's next? The Data & AI Readiness Framework gives you the answer. Get in touch to explore.

