🧠 Introduction: changing your mindset
Artificial intelligence is a powerful tool: in just a few seconds, it can answer questions, analyze information, and generate content.
But that power can be misleading.
It's easy to assume that asking more will lead to better results. In reality, vague or repeated requests mostly create noise, back-and-forth, and less useful answers.
That's because AI is neither unlimited nor "free": every interaction consumes compute and resources, and can degrade quality if not properly framed.
👉 More requests do not mean better results.
This is where a key concept comes in: frugality.
Being frugal with AI doesn't mean using it less. It means getting more value with fewer interactions, by making clearer and more intentional requests.
⚙️ What does "frugal" mean with AI?
Applying frugality to AI means aiming to get the best possible result with the fewest interactions. It's about briefing it better.
Concretely, this means:
making fewer requests
using fewer tokens
limiting unnecessary back-and-forth
avoiding any low-value noise in interactions
🎯 Why does it matter?
Adopting a frugal approach to AI isn't just common sense: it directly addresses very real challenges around performance, cost, and impact, all of which are already measurable today.
⚡ Performance
Optimizing how you use AI saves time.
Clear, structured prompts reduce iterations, and each additional exchange adds latency (API calls, generation time, display, etc.).
👉 A well-formulated prompt can cut the number of interactions by a factor of 2 to 5.
But the cost isn't just technical or financial; it's also very real for the user:
reading the response,
identifying what's wrong,
reformulating,
trying again.
That's time, attention, and often unnecessary fatigue.
In other words, the real cost of a poor prompt isn't just a few extra tokens: it's the cognitive cost of going through three mediocre answers instead of one useful one.
Fewer iterations = less friction = faster outcomes.
💰 Cost
AI isn't free; it's paid for, directly or indirectly.
Most models are billed per token (input + output). As a rough estimate, this ranges from a few cents to several dollars per million tokens, depending on the provider and model.
👉 In a company setting, inefficient usage (repeated queries, unnecessary messages, poorly structured prompts) can multiply token usage by 2 to 10, and therefore the budget.
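As a sanity check, the arithmetic behind that multiplier is easy to sketch. The per-million-token rates and token counts below are invented for illustration and are not any provider's actual pricing:

```python
# Back-of-envelope cost of one well-framed request vs. five noisy iterations.
# The rates are illustrative assumptions, not real provider pricing.
INPUT_RATE = 1.0    # $ per million input tokens (assumed)
OUTPUT_RATE = 3.0   # $ per million output tokens (assumed)

def estimate_cost(input_tokens, output_tokens):
    """Rough dollar cost of a single request, billed per token."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

single = estimate_cost(2_000, 800)        # one clear, structured prompt
iterated = 5 * estimate_cost(2_000, 800)  # the same task done in 5 attempts
print(f"single: ${single:.4f}  iterated: ${iterated:.4f}")
```

Per request the amounts look negligible; multiplied across a team making thousands of requests a day, that factor of 5 is where the budget multiplier comes from.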
Conclusion: using AI more, but poorly, means more cost, not more value.
On the other hand, a frugal approach reduces the number of requests and optimizes each interaction.
🌍 Impact
This is often the least visible aspect, but the numbers make it very real.
According to several analyses (including Microsoft and the University of California), a single AI query can consume on average:
2 to 3 Wh of electricity
up to 500 ml of water (for data center cooling)
Other estimates, reported by MIT Technology Review and Les Numériques, suggest 10 to 30 times more energy than a standard web search.
At scale, the impact is massive: OpenAI and other industry analyses mention billions of queries per day, representing consumption comparable to tens of thousands of households.
Finally, academic research shared by Hugging Face and the International Energy Agency suggests that just a few dozen queries can already represent several liters of water consumed indirectly.
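To get a feel for the orders of magnitude, here is the arithmetic behind those figures, using the per-query averages quoted above; the query volume is a round illustrative number, not a measurement:

```python
# Scale check for the per-query figures quoted above (2-3 Wh, up to 500 ml).
QUERIES = 1_000_000_000        # one billion queries (illustrative round number)
WH_PER_QUERY = 2.5             # midpoint of the 2-3 Wh range
ML_WATER_PER_QUERY = 500       # upper-bound cooling estimate

energy_mwh = QUERIES * WH_PER_QUERY / 1_000_000      # Wh -> MWh
water_m3 = QUERIES * ML_WATER_PER_QUERY / 1_000_000  # ml -> cubic meters
print(f"{energy_mwh:,.0f} MWh and {water_m3:,.0f} m3 of water per billion queries")
```

Cutting the average number of iterations per task scales these totals down in direct proportion.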
In a context like Outmind, where infrastructure is mostly hosted in France or Europe, some nuance is needed: French electricity is among the lowest-carbon in Europe, and Microsoft highlights renewable sourcing with hourly tracking in regions like Sweden. "Regional" or "data zone" deployments also limit processing to a given region or zone.
But this doesnāt eliminate the impact: data centers still need to be cooled and operate at full capacity.
For the same volume, a well-structured usage consumes fewer resources than a messy one.
👉 Less noise, fewer resources, more efficiency.
🔑 The real levers for more frugal AI usage
To improve your usage in practice, two things matter: avoiding common mistakes and adopting a few simple but structuring habits.
❌ Avoid common mistakes
Before improving, it helps to recognize what typically reduces efficiency:
Asking vague questions forces the AI to interpret, leading to generic answers and extra iterations.
Providing too much context dilutes key information and can reduce relevance. The goal is the right level of informationāno more, no less.
Chaining requests without improving them leads to repeating the same mistakes.
Rephrasing without clarifying the need increases requests without fixing the core issue.
Not leveraging the response wastes time, even though it often already contains useful elements.
Finally, adding low-value messages (politeness, conversational filler) doesn't help understanding and unnecessarily lengthens interactions. If it doesn't change the expected output, it probably doesn't belong in the prompt.
👉 In all cases: less noise, more intention.
A common example:
"Summarize this document"
"Make it shorter"
"Add recommendations"
"Put it in a table"
In many cases, all of this could be asked in one go:
"Summarize this document in 5 key points, then add 3 actionable recommendations in a table."
Same goal, fewer iterations.
✅ Best practices to adopt
Once these pitfalls are clear, a few simple principles can immediately improve your results.
🎯 Clarify your need before writing: a prompt should aim for a directly usable output
Before even writing your request, be clear about:
the exact objective
the expected format
👉 The clearer the intention, the more relevant the response from the start.
✍️ Structure your request upfront
A good request relies on three elements:
context (what is this about?)
objective (what do you want?)
expected format (list, summary, email, etc.)
👉 This simple structure avoids most unnecessary back-and-forth.
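The three-element structure can be sketched as a tiny helper; the function and field names here are illustrative, not part of any specific API:

```python
def build_prompt(context, objective, output_format):
    """Assemble a request from the three elements: context, objective, format."""
    return (
        f"Context: {context}\n"
        f"Objective: {objective}\n"
        f"Expected format: {output_format}"
    )

prompt = build_prompt(
    context="internal report on Q3 support tickets",
    objective="summarize the main recurring issues",
    output_format="5 bullet points, under 150 words",
)
print(prompt)
```

Filling in three named slots forces you to decide the objective and the format before sending anything, which is exactly what prevents the reformat-only follow-ups described below.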
In practice, many iterations come from a simple issue: the content is good, but the format isn't right. So you ask again just to reformat, when it could have been defined upfront.
Be explicit about:
the expected format (bullet points, table, outline, email, steps…)
the level of detail (brief, concise, in-depth)
any specific constraints (number of points, length, tone, audience)
Examples of useful constraints:
"in 5 key points"
"under 150 words"
"as a comparison table"
"with actionable recommendations"
👉 The more you define the output format upfront, the less you'll need to iterate.
👉 In practice: good output control significantly reduces iterations, even when the content is already correct.
To go further, see: Creating a good prompt for your assistant: method and examples
🔁 Iterate smartly
When a response isn't satisfactory, don't start over. Be specific about:
what's missing
what needs to be corrected
👉 A targeted iteration is always more effective than a vague new request.
📦 Provide only the useful context
Find the right balance:
too little → ambiguity
too much → noise
👉 The goal is to guide the AI, not overwhelm it.
🧪 Test and reuse what works
Over time, some prompt structures perform better.
👉 Keep and reuse what saves you time instead of starting from scratch every time.
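One simple way to do this is a small library of named templates with placeholders; the template names and wording below are only examples:

```python
# A minimal reusable-prompt library: structures that worked, kept by name.
TEMPLATES = {
    "summary": "Summarize this document in {n} key points, with a professional tone.",
    "comparison": "Compare {a} and {b} in a table, then add {n} recommendations.",
}

def fill(name, **values):
    """Instantiate a saved prompt structure with task-specific values."""
    return TEMPLATES[name].format(**values)

print(fill("summary", n=5))
```

Whether it lives in code, a note, or a shared document, the point is the same: a structure that produced a usable answer once should not be rewritten from memory every time.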
🧠 Manage conversation memory effectively
Conversation memory helps maintain coherence, but keeping a thread open too long adds unnecessary context, leading to more noise, higher costs, and more iterations.
Stay in the same conversation if you're working toward the same objective (same document, same output, same constraints).
Start a new conversation when the objective changes (new topic, audience, or format), or when previous context is no longer relevant.
👉 Simple rule: 1 conversation = 1 objective.
🔍 Choose the right mode: direct search vs AI
An often overlooked lever of frugality is choosing the right tool for the job.
When you know exactly which documents or sources you need, Outmind's direct search is usually more efficient than using the AI assistant:
immediate access to targeted information (no generation step),
zero iterations,
minimal resource usage.
On the other hand, the AI assistant is useful when you need to synthesize, analyze, or combine information.
👉 In practice:
precise need → direct search
analysis / synthesis → AI assistant
⚖️ Frugality ≠ limitation
Being frugal with AI doesn't mean using it less; it means using it better.
The goal is to reduce noise and unnecessary back-and-forth, not to hold back.
With clear, structured requests, you get usable results faster.
The difference lies in the quality of interactions, not their quantity.
A good prompt upfront is often worth more than several approximate attempts.
📝 Concrete example: before / after
One of the simplest ways to understand frugality is to compare two approaches.
Non-frugal approach
"Can you help me with this document?"
Here, the AI lacks context. It responds generically, requiring multiple exchanges to specify the need: type of document, objective, format, level of detail…
👉 Result: 3–4 iterations before getting something usable.
Frugal approach
"Summarize this document in 5 key points, with a professional tone and actionable recommendations."
Here, the request is clear, structured, and directly usable. The AI immediately understands the objective and expected format.
The AI doesn't guess your output standards; you have to define them.
👉 Result: a relevant answer in the first interaction.
What changes in practice:
fewer back-and-forth exchanges
less time wasted
a directly usable output
👉 The difference isn't the tool; it's how you formulate the request.
🧭 Conclusion: toward a more mature use of AI
The real skill isn't just using AI; it's using it well.
Adopting a frugal approach means moving from intuitive usage to controlled usage: clearer requests, fewer iterations, and directly usable results.
👉 The outcome: more speed, lower costs, and reduced impact, without sacrificing the power of the tool.
