AI for Leaders
BLUF: Busy marketing leaders are leaving serious performance on the table because nobody taught them how to prompt. Here’s the 5-step framework from Google that fixes it in under 5 minutes.
Topic: Generative AI · Productivity · Leadership
Most marketing leaders aren’t bad at AI. They’re bad at giving instructions.
That’s not a comfort. That’s the problem.
You’ve spent years delegating to humans who could read the room. AI can’t read the room. It doesn’t even know there is one. You have to build the context from scratch. Every single time.
The Google TCREI framework – Task, Context, References, Evaluate, Iterate – is the 5-step structure that fixes it. It takes under five minutes. Most people skip it entirely. That’s the gap.

Research Spotlight
AI can help people with adjacent skills accomplish unfamiliar tasks – but hits a wall when people lack sufficient domain expertise. It amplifies what you already know. It can’t replace what you don’t.
There’s a framework. Use it, and your AI outputs will feel like they came from a genuinely talented team member, not a content generator with a blank stare.
The Google TCREI Prompt Framework
“Thoughtfully Create Really Excellent Inputs”
Task · Context · References · Evaluate · Iterate
The 5 Steps. No Skipping.
STEP 01. Task
Describe exactly what you want. Include a persona (what expertise should the AI draw from?) and a format preference (how should the output look?).
Persona shifts the model’s output quality and tone. It’s not decorative – it’s directional.
“Act as a marketing director who’s survived three rebrands and two recessions and has no patience for vague briefs” produces meaningfully different results than “write me a marketing plan.”
Same tools. Completely different output.
WEAK TASK
“Write a subject line for a reactivation email.”
STRONG TASK
“You’re a copywriter who believes the best subject line is the one that doesn’t feel like a subject line. Write five options for a reactivation email targeting B2B marketers who went cold three months ago. Under 50 characters. No exclamation marks. No emojis. No fake personalisation. Just something worth opening.”
STEP 02. Context
This is where most people stop too soon.
Context is the difference between an output that’s technically correct and one that’s actually useful. Tell the AI about the audience, the constraints, the situation, the stakes. The more specific, the better.
Same task. Completely different usefulness.
WITHOUT CONTEXT
“Give me five holiday present ideas under €50.”
WITH CONTEXT
“Give me five holiday present ideas under €50. The recipient is 35, loves horse-riding, traditional Irish music, and just got accepted to compete in the Dublin Horse Show.”
“The quality of your AI output is a direct reflection of the quality of your input. Garbage in, polished-sounding garbage out.”

Nicola Ziady, CMO
STEP 03. References
If you have examples, use them. Past campaigns, competitor copy, tone of voice guidelines, a piece of writing you liked – drop it in.
References aren’t always available, especially for exploratory work. That’s fine. But when they exist, they dramatically narrow the gap between what you imagined and what you get.
Think of it like briefing a new freelancer. You wouldn’t just describe the work – you’d show them examples.
STEP 04. Evaluate
Before you hit “use this output,” ask yourself one question: Did the input I gave actually produce what I needed?
This sounds obvious. Most people skip it anyway – because they’re in a rush, and the output is almost right, and editing is faster than re-prompting.
That’s how mediocre AI-assisted work gets published.
Evaluation is where your expertise matters most. You know the brand, the audience, the goal. The AI doesn’t. Be critical. It can take it.
STEP 05. Iterate
Prompting isn’t a one-shot deal.
The best outputs almost always come from a conversation — adding detail, adjusting tone, pushing back, asking for alternatives.
If your first output was 70% there, don’t start over. Build on it. Tell the AI what worked and what didn’t. Treat it like a real creative collaboration.
Iterative follow-up example: “The second option is closest. Make it punchier, cut two words, and add a subtle sense of urgency without using the word ‘now’ or ‘today’.”
That one instruction moves you from 70% to 95%. It’s not magic. It’s just a better brief.
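For teams that template their prompts in code, the first three elements can be sketched as a small builder. This is a minimal illustration, not an official library or API – the function and section names here are assumptions:

```python
# Minimal sketch of a TCREI-structured prompt builder.
# All names are illustrative; adapt to your own tooling.

def build_prompt(task, context, references=None):
    """Assemble Task, Context, and References into one brief.

    Evaluate and Iterate are human steps: review the output,
    then send a follow-up message rather than starting over.
    """
    sections = [
        f"TASK:\n{task}",
        f"CONTEXT:\n{context}",
    ]
    if references:  # references are optional, e.g. for exploratory work
        joined = "\n---\n".join(references)
        sections.append(f"REFERENCES (match this tone and style):\n{joined}")
    return "\n\n".join(sections)


prompt = build_prompt(
    task=("You're a copywriter. Write five subject-line options for a "
          "reactivation email. Under 50 characters. No exclamation marks."),
    context=("Audience: B2B marketers who went cold three months ago. "
             "Goal: get the email opened, not deleted."),
    references=["Past winner: 'Still thinking about attribution?'"],
)
print(prompt)
```

The point isn’t the code – it’s that every prompt carries the same three input sections, so nothing gets left in your head.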
Why Your AI Output Is Under-performing Right Now
It’s not scepticism. It’s workflow friction.
The first prompt produces a flat output. The natural conclusion is “AI isn’t good enough for this.” That’s a misdiagnosis – and an expensive one.

The TCREI framework barely adds time. It redirects the thinking you were already going to do – from vague intent to structured input – and adds perhaps ninety seconds to your process.
The ROI on ninety seconds is significant.
Research Spotlight
60% of marketing teams that adapted their measurement approach to AI report returns of 2–3× or higher.

One More Thing: Order Doesn’t Matter. Substance Does.
The Framework Is a Checklist, Not a Script.
You don’t need to write your prompts in Task → Context → References → Evaluate → Iterate order every time. This isn’t a legal contract.
What matters is that when you send a prompt, you’ve thought about all 5 elements. Sometimes context comes first. Sometimes you lead with a reference. That’s fine.
Think of it like a journalist’s five Ws. You don’t write in that order. But you don’t file without answering all five.
Same principle. Better output.
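If you template prompts in code, the checklist idea translates directly: flag any empty element before a brief goes out. A rough sketch, with field names that are assumptions rather than any standard:

```python
# Rough sketch of a pre-send TCREI checklist. Evaluate and Iterate
# happen after the model replies, so this only checks the input side.

def missing_elements(brief):
    """Return which required elements the brief leaves empty.

    References may legitimately be absent (exploratory work),
    so only Task and Context block the prompt.
    """
    gaps = []
    if not brief.get("task"):
        gaps.append("task")
    if not brief.get("context"):
        gaps.append("context")
    return gaps


brief = {"task": "Write five subject lines", "context": ""}
print(missing_elements(brief))  # context left blank, so it gets flagged
```

A thirty-second mental version of the same check works just as well – the code only makes the habit explicit.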
Frequently Asked Questions
How long does a TCREI prompt take to write?
Two minutes, once you know the framework. The TCREI structure isn’t about writing more – it’s about thinking more clearly before you type. Most of the time, you already have the context in your head. The framework just stops you from leaving it there.
Do I need all five steps for every prompt?
Not always. For simple tasks – “summarise this in three bullet points” – you don’t need all five. But for any output you’d actually publish, present, or act on, yes. Running through the checklist mentally takes about 30 seconds and consistently improves the result.
Does this work with any AI model?
Yes. The TCREI framework is model-agnostic. The underlying principle – structured input produces better output – applies regardless of which tool you’re using. The specific outputs will vary between models, but the prompting approach is universal.
What if the output still isn’t right after a second pass?
That’s what the Iterate step is for. A bad second output usually means your context was too vague or your task wasn’t specific enough. Go back, tighten those two elements first. If you’re still not getting there, add a reference – even a rough example of what good looks like is often enough to unlock a significantly better result.
Who actually needs a prompting framework?
Everyone who produces output that gets used externally. Research from Jasper’s 2026 State of AI in Marketing report found that 60% of teams with structured AI workflows report 2–3× returns, but adoption of structured approaches remains low. The teams pulling ahead aren’t just using AI more. They’re using it more deliberately, at every level.
Won’t better models make prompting skills obsolete?
Models are getting better at handling vague input, but the output quality ceiling still scales with input quality. A March 2026 Harvard Business School study found that AI amplifies existing expertise but can’t bridge a domain knowledge gap. In other words: the better you are at your craft, the more your prompting skill compounds. AI rewards the people who already know what good looks like.
Sources
- McKinsey & Company — The State of AI in Early 2024
- Google & Ipsos — AI Works for America, 2024
- Google Career Certificates — Prompting Essentials
- Jasper — State of AI in Marketing 2026
- Harvard Business School — Working Knowledge, March 2026
Nicola Ziady is a marketing strategist helping leaders build visibility, authority, and strategy in the AI era.
By Nicola Ziady. Published: March 21, 2026. Last revised: March 30, 2026