Stop Guessing, Start Engineering: Google's Framework for Better AI Prompts

March 17, 2026

Most people treat AI prompting like a vending machine — put something in, hope something good comes out, and shake it when it doesn't. If that sounds familiar, you're not alone. But there's a better way.

Whether you're using AI to write customer emails, build estimates, handle negative reviews, or draft job postings, the quality of your prompt determines the quality of your output. Google's AI Prompt Engineering course lays out a structured, repeatable framework that takes the guesswork out of it entirely.

Let's break it down.

The TCREI Framework: Google's 5-Step Prompting System

Google's framework, known as TCREI, treats prompting less like a conversation and more like a design process. Each letter stands for a step:

1. Task: Be explicit about what you want. Use action verbs: draft, summarize, compare, rewrite. Vague tasks produce vague results.

2. Context: Give the model the "why" behind the request. Who's the audience? What's the goal? What format do you need?

3. References: Provide examples, data, or source material. This "few-shot" approach shows the model what good looks like — rather than leaving it to guess.

4. Evaluate: Read the output critically. Did it actually do what you asked? Is anything factually off, tonally wrong, or missing entirely?

5. Iterate: Refine your prompt based on what you found. Google calls this mindset ABI — Always Be Iterating.

The biggest shift this framework requires is accepting that your first prompt is rarely your best one. That's not a failure — it's the process.

A full TCREI prompt in action:

Say you want help responding to a negative Google review from a customer who claims your HVAC tech left without fixing the issue. A weak prompt looks like: "Write a response to a bad review." A TCREI prompt looks like this:

"Draft a professional response to a negative Google review from a customer who says our HVAC technician left without fully resolving their AC issue. Our audience is potential customers reading this publicly. Tone: empathetic, accountable, and professional — not defensive. We want to acknowledge the concern, invite them to call us directly to make it right, and close on a positive note. Keep it under 120 words. Here's an example of a response we've used before that we liked: [paste example]."

Same situation. Completely different result.
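If you send prompts to a model through an API or a script rather than a chat box, the first three steps map cleanly onto a template. Here's a minimal sketch in Python; the function and field names are illustrative, not part of Google's course:

```python
def tcrei_prompt(task, context, references=None, constraints=None):
    """Assemble a prompt from the TCREI building blocks.

    Evaluate and Iterate happen after you read the output,
    so only Task, Context, and References appear in the text itself.
    """
    parts = [f"Task: {task}", f"Context: {context}"]
    if references:
        parts.append("Reference examples:")
        parts.extend(f"- {ref}" for ref in references)
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    return "\n".join(parts)

prompt = tcrei_prompt(
    task="Draft a response to a negative Google review about an unresolved AC repair.",
    context="Audience: potential customers reading publicly. Tone: empathetic, accountable, not defensive.",
    references=["[paste a past response you liked]"],
    constraints=["Keep it under 120 words."],
)
```

Because the pieces are separate arguments, iterating means changing one field at a time instead of rewriting the whole prompt by hand.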

The ABI Mindset: Always Be Iterating

Here's something most people don't realize: the models themselves aren't the bottleneck. The prompt usually is.

Google's ABI principle treats iteration not as a last resort, but as a built-in part of the workflow. When the output misses the mark, there are four specific levers you can pull:

1. Revisit the Framework

Before anything else, check your prompt against the five steps. Did you skip context? Is the task genuinely clear? More often than not, a missing piece in the framework is what's causing a weak output, not the model itself.

If you asked AI to "write a follow-up email after a service call" and got something generic, the problem is probably missing context. Who's the customer? What service was done? What do you want them to do next — leave a review, schedule maintenance, refer a friend?

2. Break It Into Shorter Sentences

Long, dense prompts are surprisingly easy for LLMs to fumble. When instructions are buried in a wall of text, the model may deprioritize certain requirements entirely. Separating each instruction into its own sentence helps the model process each condition individually.

Instead of: "Write a professional but friendly email to a homeowner who just had their furnace replaced reminding them about the warranty registration and asking for a review without being pushy."

Try:

"Write an email to a homeowner who just had their furnace replaced. Tone: warm and professional. Purpose 1: remind them to register their equipment warranty. Purpose 2: ask for a Google review. Do not make the review request feel pushy. Keep it under 150 words."

3. Try Different Phrasing or an Analogous Task

Sometimes the model doesn't connect with how you're framing something, especially with technical or industry-specific requests. Switching to an analogy or a related task can unlock a completely different reasoning path.

Instead of: "Explain the difference between a single-stage and two-stage HVAC system for a customer."

Try: "Explain the difference between a single-stage and two-stage HVAC system like you're comparing a light switch to a dimmer. Write it for a homeowner who has no technical background and just wants to know which one is worth paying more for."

This isn't about dumbing it down. It's about giving the model a mental model to work from, and giving your customer an explanation they'll actually understand.

4. Introduce Constraints

Telling the model what not to do is often more effective than telling it what to do. Constraints narrow the model's focus and produce tighter, more usable output.

Some useful constraint patterns for home services:

  • "Do not use HVAC industry jargon — write for a homeowner."
  • "Do not mention competitors by name."
  • "Keep it under 100 words — this is for a text message, not an email."
  • "Do not make pricing promises or include specific dollar amounts."
  • "Write in a friendly tone, but do not use exclamation points."

Think of constraints as guardrails, not limitations.

Beyond the Basics: Advanced Reasoning Techniques

Once you're comfortable with TCREI, you can start pushing into techniques that shift how the model thinks, not just what it responds to. These methods move LLMs from fast, pattern-matching responses to slower, more deliberate reasoning.

Chain of Thought (CoT) Prompting

What it is: You explicitly ask the model to think through a problem step-by-step before arriving at a conclusion.

How to use it: Add a simple instruction like "Think through this step by step" or "Walk me through your reasoning before giving the final answer."

Why it works: By generating intermediate steps, the model is less likely to jump to a shallow or incorrect answer. This is especially useful when the path to the answer matters as much as the answer itself.

Example:

"I run a plumbing company with 6 technicians. I'm getting a lot of callbacks on water heater installs, specifically with customers calling back within 30 days with issues. Think through step by step what the most likely causes of this problem could be, starting with the most common, and then suggest what I should investigate first."

Without Chain of Thought, AI might just list generic possibilities. With it, you get a structured diagnostic that you can actually act on, almost like a second opinion from a consultant.
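In code, Chain of Thought is nothing more than a reasoning instruction appended to whatever prompt you already have. A sketch, with a helper name of our own invention:

```python
COT_SUFFIX = "Think through this step by step before giving your final answer."

def with_chain_of_thought(prompt: str) -> str:
    """Append a step-by-step reasoning instruction to any prompt."""
    return f"{prompt.rstrip()}\n\n{COT_SUFFIX}"

cot_prompt = with_chain_of_thought(
    "I'm getting callbacks on water heater installs within 30 days. "
    "List the most likely causes, most common first, and what to investigate."
)
```

The same wrapper works on any prompt you've already built, which makes it easy to A/B test the technique on tasks where the reasoning path matters.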

Tree of Thought (ToT) Prompting

What it is: An evolution of Chain of Thought, Tree of Thought asks the model to explore multiple reasoning paths simultaneously, like a decision tree, before settling on the best one.

How to use it: Ask the model to generate several approaches to a problem, evaluate each one, and recommend the strongest option.

Why it works: Rather than committing to the first plausible solution, the model considers alternatives and reasons through trade-offs. It's a great technique for business decisions, hiring, pricing strategy, or any situation where there's no single right answer.

Example:

"I run an HVAC company and I'm losing customers to a competitor who's charging 15% less than me. Generate three different strategic approaches I could take to respond. For each approach, explain the logic behind it and the risks involved. Then recommend which you'd pursue and why."

You might get options like: match the price selectively, double down on service quality and reputation, or carve out a premium niche. The model then evaluates each, which gives you a strategic brief in seconds instead of hours.

Another example:

"I want to start offering service agreements to residential electrical customers but I've never done it before. Give me three different ways I could structure the pricing and what's included. Evaluate the pros and cons of each, then recommend the best starting point for a company with 4 electricians doing mostly repair and panel work."
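Tree of Thought follows the same pattern: a reusable instruction wrapped around the problem statement. A sketch (function name and wording are our own, not Google's):

```python
def tree_of_thought(problem: str, n_options: int = 3) -> str:
    """Ask the model to explore several approaches before committing to one."""
    return (
        f"{problem}\n\n"
        f"Generate {n_options} different approaches. "
        "For each, explain the logic behind it and the risks involved. "
        "Then recommend the strongest option and explain why."
    )

tot_prompt = tree_of_thought(
    "I run an HVAC company and I'm losing customers to a competitor charging 15% less."
)
```

Raising `n_options` widens the tree; three is usually enough to surface genuinely different strategies without diluting each one.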

Meta Prompting

What it is: You use the AI to help you build a better prompt. Instead of trying to engineer the perfect prompt yourself, you describe your goal and ask the model to write the prompt for you.

How to use it: Describe what you're trying to accomplish and ask: "Write the best possible prompt I could use to get a great result for this."

Why it works: The model has a deep understanding of how prompts affect its own outputs. By letting it draft the prompt, you're leveraging that knowledge directly, especially useful when you're not sure how to frame something complex.

Example:

"I want to use AI to help me write better job postings for HVAC technician roles. We're a family-owned company in a mid-sized market and we're competing against larger companies for talent. Write the best possible prompt I could use to generate a compelling job posting that attracts experienced techs, not just entry-level applicants."

The model will produce a detailed, well-structured prompt you can paste in and use immediately, often including instructions you wouldn't have thought to add yourself.

Meta prompting is also a great learning tool. Study the prompts the model writes for itself. You'll quickly develop an instinct for what makes a prompt actually work.
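Meta prompting can be templated the same way: describe the goal, then hand the prompt-writing job back to the model. A minimal sketch, with hypothetical naming:

```python
def meta_prompt(goal: str) -> str:
    """Ask the model to write the prompt for you, given a plain-language goal."""
    return (
        f"Here's what I'm trying to accomplish: {goal}\n"
        "Write the best possible prompt I could use to get a great result for this."
    )

mp = meta_prompt(
    "generate job postings that attract experienced HVAC techs, "
    "not just entry-level applicants"
)
```

You'd send `mp` to the model, then paste the prompt it returns into a fresh conversation and study how it's structured.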

Putting It All Together

Here's a quick cheat sheet for applying everything above:

For everyday tasks (customer emails, review responses, job postings, service descriptions), use the TCREI framework. Make sure all five components are present before you evaluate the output.

When the output isn't quite right, pull one of the four iteration levers: revisit the framework, break it into shorter sentences, reframe the task, or add constraints.

For bigger decisions (pricing strategy, hiring, callback problems, competitive positioning), layer in Chain of Thought, Tree of Thought, or Meta Prompting depending on what the task demands.

The shift this requires is subtle but important: stop thinking of prompts as one-shot requests and start treating them as drafts. Every output is feedback. Every refinement makes the next one better.

That's what ABI really means. Not just iterating on prompts, but developing an instinct over time for what works and why. The business owners who build that instinct now will have a real advantage over the ones still shaking the vending machine.

Try it today: take one task you already use AI for (or wish you could) and run it through the TCREI framework from scratch. The difference in output quality might surprise you. Want more tips on using AI in your business? Join the Just Start AI community.
