
💡 AI: better, not more — adopting a frugal approach

Written by Nicolas Movio
Updated this week

🧠 Introduction: changing your mindset

Artificial intelligence is a powerful tool: in just a few seconds, it can answer questions, analyze information, and generate content.

But that power can be misleading.

It’s easy to assume that asking more will lead to better results. In reality, vague or repeated requests mostly create noise, back-and-forth… and less useful answers.

AI is neither unlimited nor "free": every interaction consumes compute and resources, and a poorly framed request can degrade the quality of the answer.

👉 More requests do not mean better results.

This is where a key concept comes in: frugality.

Being frugal with AI doesn’t mean using it less. It means getting more value with fewer interactions, by making clearer and more intentional requests.


āš™ļø What does ā€œfrugalā€ mean with AI?

Applying frugality to AI means aiming to get the best possible result with the fewest interactions. It’s about briefing it better.

Concretely, this means:

  • making fewer requests

  • using fewer tokens

  • limiting unnecessary back-and-forth

  • avoiding any low-value noise in interactions


🎯 Why does it matter?

Adopting a frugal approach to AI isn’t just common sense—it directly addresses very real challenges around performance, cost, and impact, all of which are already measurable today.

  • ⚔ Performance

Optimizing how you use AI saves time.

Clear, structured prompts reduce iterations, and each additional exchange adds latency (API calls, generation time, display, etc.).

👉 A well-formulated prompt can cut the number of interactions by a factor of 2 to 5.

But the cost isn’t just technical or financial—it’s also very real for the user:

  • reading the response,

  • identifying what’s wrong,

  • reformulating,

  • trying again.

That’s time, attention, and often unnecessary fatigue.

In other words, the real cost of a poor prompt isn’t just a few extra tokens—it’s the cognitive cost of going through three mediocre answers instead of one useful one.

Fewer iterations = less friction = faster outcomes.

  • 💰 Cost

AI isn’t free—it’s paid for, directly or indirectly.

Most models are billed per token (input + output). As a rough estimate, this ranges from a few cents to several dollars per million tokens, depending on the provider and model.

👉 In a company setting, inefficient usage (repeated queries, unnecessary messages, poorly structured prompts) can multiply token usage by 2 to 10 — and therefore the budget.
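As a back-of-the-envelope illustration, the budget effect of redundant interactions can be sketched as follows. The per-token prices here are placeholder values, not any provider's actual rates:

```python
def conversation_cost(interactions, in_tokens, out_tokens,
                      price_in=0.50, price_out=1.50):
    """Estimate the dollar cost of a conversation.

    price_in / price_out are hypothetical prices in $ per million tokens.
    """
    total_in = interactions * in_tokens
    total_out = interactions * out_tokens
    return (total_in * price_in + total_out * price_out) / 1_000_000

# One well-framed request vs. five vague back-and-forth exchanges
frugal = conversation_cost(1, in_tokens=400, out_tokens=600)
wasteful = conversation_cost(5, in_tokens=400, out_tokens=600)
print(f"frugal: ${frugal:.6f}, wasteful: ${wasteful:.6f}")
```

The individual amounts look tiny, but the ratio is what matters: five exchanges cost five times as much as one, before even counting the extra context that each follow-up message re-sends.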

Conclusion: more poorly used AI = more cost, not more value.

On the other hand, a frugal approach reduces the number of requests and optimizes each interaction.

  • šŸŒ Impact

This is often the least visible aspect—but the numbers make it very real.

According to several analyses (including from Microsoft and the University of California), a single AI query can consume:

  • 2 to 3 Wh of electricity on average

  • up to 500 ml of water (for data center cooling)

Other estimates, reported by MIT Technology Review and Les Numériques, suggest 10 to 30 times more energy than a standard web search.

At scale, the impact is massive: OpenAI and other industry analyses mention billions of queries per day, representing consumption comparable to tens of thousands of households.

Finally, academic research shared by Hugging Face and the International Energy Agency suggests that just a few dozen queries can already represent several liters of water consumed indirectly.
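To see how those orders of magnitude combine, here is a rough sketch using only the figures quoted above; the household consumption number is an additional assumption for illustration, not a sourced value:

```python
# Rough scale estimate from the figures quoted above. Illustrative only.
wh_per_query = 2.5          # midpoint of the "2 to 3 Wh" estimate
queries_per_day = 1e9       # lower bound of "billions of queries per day"

daily_kwh = wh_per_query * queries_per_day / 1000

# Assumption: an average household uses very roughly 10,000 kWh per year.
household_kwh_per_day = 10_000 / 365

households_equivalent = daily_kwh / household_kwh_per_day
print(f"{daily_kwh:,.0f} kWh/day ≈ {households_equivalent:,.0f} households")
```

Even with conservative inputs, the result lands in the tens of thousands of household-equivalents per day, which is why per-query efficiency matters at scale.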

In a context like Outmind, where infrastructure is mostly hosted in France or Europe, some nuance is needed: French electricity is among the lowest-carbon in Europe, and Microsoft highlights renewable sourcing with hourly tracking in regions like Sweden. "Regional" or "data zone" deployments also limit processing to a given region or zone.

But this doesn’t eliminate the impact: data centers still need to be cooled and operate at full capacity.

For the same volume, a well-structured usage consumes fewer resources than a messy one.

👉 Less noise, fewer resources, more efficiency.


🚀 The real levers for more frugal AI usage

To improve your usage in practice, two things matter: avoiding common mistakes and adopting a few simple but structuring habits.

āŒ Avoid common mistakes

Before improving, it helps to recognize what typically reduces efficiency:

  • Asking vague questions forces the AI to interpret, leading to generic answers and extra iterations.

  • Providing too much context dilutes key information and can reduce relevance. The goal is the right level of information—no more, no less.

  • Chaining requests without improving them leads to repeating the same mistakes.

  • Rephrasing without clarifying the need increases requests without fixing the core issue.

  • Not leveraging the response wastes time, even though it often already contains useful elements.

  • Finally, adding low-value messages (politeness, conversational filler) doesn’t help understanding and unnecessarily lengthens interactions. If it doesn’t change the expected output, it probably doesn’t belong in the prompt.

👉 In all cases: less noise, more intention.

A common example:

  1. "Summarize this document"

  2. "Make it shorter"

  3. "Add recommendations"

  4. "Put it in a table"

In many cases, all of this could be asked in one go:

"Summarize this document in 5 key points, then add 3 actionable recommendations in a table."

Same goal, fewer iterations.


✅ Best practices to adopt

Once these pitfalls are clear, a few simple principles can immediately improve your results.

  • 🎯 Clarify your need before writing: a prompt should aim for a directly usable output

Before even writing your request, be clear about:

  • the exact objective

  • the expected format

👉 The clearer the intention, the more relevant the response from the start.

  • ✏️ Structure your request upfront

A good request relies on three elements:

  • context (what is this about?)

  • objective (what do you want?)

  • expected format (list, summary, email, etc.)

👉 This simple structure avoids most unnecessary back-and-forth.

In practice, many iterations come from a simple issue: the content is good, but the format isn’t right. So you ask again just to reformat—when it could have been defined upfront.

Be explicit about:

  • the expected format (bullet points, table, outline, email, steps…)

  • the level of detail (brief, concise, in-depth)

  • any specific constraints (number of points, length, tone, audience)

Examples of useful constraints:

  • "in 5 key points"

  • "under 150 words"

  • "as a comparison table"

  • "with actionable recommendations"

👉 The more you define the output format upfront, the less you'll need to iterate.

👉 In practice: good output control significantly reduces iterations, even when the content is already correct.
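The context / objective / format structure described above can be captured in a small reusable template. This is a sketch: the field names are just conventions for this example, not part of any tool's API:

```python
def build_prompt(context, objective, output_format, constraints=None):
    """Assemble a structured prompt: context, objective, expected format.

    constraints is an optional list such as
    ["in 5 key points", "under 150 words"].
    """
    parts = [
        f"Context: {context}",
        f"Objective: {objective}",
        f"Expected format: {output_format}",
    ]
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

prompt = build_prompt(
    context="Internal report on Q3 support tickets",
    objective="Summarize the main pain points",
    output_format="comparison table",
    constraints=["in 5 key points", "with actionable recommendations"],
)
print(prompt)
```

Writing the request once with all four fields filled in is exactly the "one well-formulated prompt instead of four follow-ups" pattern from the earlier example.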

  • šŸ” Iterate smartly

When a response isn’t satisfactory, don’t start over—be specific:

  • what’s missing

  • what needs to be corrected

👉 A targeted iteration is always more effective than a vague new request.

  • 📦 Provide only the useful context

Find the right balance:

  • too little → ambiguity

  • too much → noise

👉 The goal is to guide the AI, not overwhelm it.

  • 🧪 Test and reuse what works

Over time, some prompt structures perform better.

👉 Keep and reuse what saves you time instead of starting from scratch every time.

  • 🧠 Manage conversation memory effectively

Conversation memory helps maintain coherence, but keeping a thread open too long adds unnecessary context—leading to more noise, higher costs, and more iterations.

Stay in the same conversation if you’re working toward the same objective (same document, same output, same constraints).

Start a new conversation when the objective changes (new topic, audience, or format), or when previous context is no longer relevant.

👉 Simple rule: 1 conversation = 1 objective.

  • 📚 Choose the right mode: direct search vs AI

An often overlooked lever of frugality is choosing the right tool for the job.

When you know exactly which documents or sources you need, Outmind’s direct search is usually more efficient than using the AI assistant:

  • immediate access to targeted information (no generation step),

  • zero iterations,

  • minimal resource usage.

On the other hand, the AI assistant is useful when you need to synthesize, analyze, or combine information.

👉 In practice:

  • precise need → direct search

  • analysis / synthesis → AI assistant


āš–ļø Frugality ≠ limitation

Being frugal with AI doesn’t mean using it less—it means using it better.

The goal is to reduce noise and unnecessary back-and-forth, not to hold back.

With clear, structured requests, you get usable results faster.

The difference lies in the quality of interactions, not their quantity.

A good prompt upfront is often worth more than several approximate attempts.


🔎 Concrete example: before / after

One of the simplest ways to understand frugality is to compare two approaches.

Non-frugal approach

"Can you help me with this document?"

Here, the AI lacks context. It responds generically, requiring multiple exchanges to specify the need: type of document, objective, format, level of detail…

👉 Result: 3–4 iterations before getting something usable.

Frugal approach

"Summarize this document in 5 key points, with a professional tone and actionable recommendations."

Here, the request is clear, structured, and directly usable. The AI immediately understands the objective and expected format.

That's because the AI doesn't guess your output standards: you have to define them.

👉 Result: a relevant answer in the first interaction.

What changes in practice:

  • fewer back-and-forth exchanges

  • less time wasted

  • a directly usable output

👉 The difference isn't the tool; it's how you formulate the request.


🧭 Conclusion: toward a more mature use of AI

The real skill isn’t just using AI—it’s using it well.

Adopting a frugal approach means moving from intuitive usage to controlled usage: clearer requests, fewer iterations, and directly usable results.

👉 The outcome: more speed, lower costs, and reduced impact, without sacrificing the power of the tool.
