🧠 Google's free Prompting Guide, Made Stupidly Simple
Prompt Smarter, Not Harder (Straight from Google's Playbook)
👋 Hey there,
Last week, we read Google's Prompt Engineering whitepaper, so you don't have to. If you've ever yelled at ChatGPT for being "mid", this one's for you.
Most people are getting bad AI results not because the model sucks... but because the prompt does.
We're breaking down the most useful stuff from the 60+ page paper into bite-sized, actionable takeaways (with a side of sass). But more importantly, we'll show you how to apply them, with real use-cases and prompts you can literally copy-paste.
And yes, we'll also show you how to skip the tinkering and let Prompt Genie do the thinking for you.
Let's break it down 👇
1️⃣ First, understand what a prompt actually is
You're not just "asking ChatGPT a question."
You're giving it instructions. You're feeding it context. You're shaping its identity. You're programming a model using language.
And the way you do that changes everything. Here's how:
🧩 Framework:
Role + Task + Structure + Constraints = Gold
(Side-by-side example outputs.) Both are about retro games, but notice how different they sound? That's the power of the prompt. The way you frame your question changes the kind of answer the AI gives you: more detail, more personality, or just a quick summary. It all depends on what you ask.
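If you like thinking in code, the formula above can be sketched as a tiny template. This is just an illustration; the `build_prompt` helper and the example values are made up for this newsletter, not taken from Google's whitepaper:

```python
def build_prompt(role, task, structure, constraints):
    """Assemble the Role + Task + Structure + Constraints formula
    into a single prompt string."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Format: {structure}\n"
        f"Constraints: {constraints}"
    )

# Hypothetical example values: a retro-gaming prompt built slot by slot.
prompt = build_prompt(
    role="a retro-gaming journalist with a playful voice",
    task="explain why the SNES still matters in 2025",
    structure="three short paragraphs, each ending with a one-line takeaway",
    constraints="under 200 words, no jargon",
)
```

Swap out any one slot and the answer changes character, which is exactly the point of the framework.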
2️⃣ LLM Settings (aka how you control the vibe)
Large Language Models (LLMs) like ChatGPT don't just spit out random words; they generate outputs one word (or token) at a time based on probabilities. But here's the fun part: you can tweak how they do that.
Here are the big 3:
Temperature: controls creativity vs. precision.
Low temp (e.g., 0.1) = logical, predictable output
High temp (e.g., 0.9) = imaginative, unpredictable output
Top-K: limits the number of "next word" options.
Top-K = 5? It chooses from the top 5 likely next words only.
Higher K = more diverse outputs; lower = tighter focus.
Top-P: looks at the smallest set of words that make up P% of the total probability.
Top-P = 0.9? The model will only consider words that collectively have a 90% chance of being used next.
Think of it like this:

| Setting | Controls | You'd Use It When You Want |
|---|---|---|
| Temperature | How wild or safe the output is | More creativity or more logic |
| Top-K | Number of choices to consider | More or less variety in style or wording |
| Top-P | How confident the AI should be in its picks | Slightly wilder or safer depending on range |
🧪 Here's a real example:
Prompt: "Give me three brand names for a personal finance app."

| Setting | Possible Output |
|---|---|
| Temp 0.2, Top-K 5 | "Money Manager, Budget Pro, Finance Tracker" |
| Temp 0.8, Top-K 50 | "WealthNest, CoinSage, BudgetBuddy" |
| Temp 0.8, Top-P 0.95 | "NestEgg Now, SavvyStash, SpendSmart" |
They're all good, but as you move from lower to higher K or P, you start seeing names with more flair, creativity, or surprise.
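If you're curious what those three knobs actually do under the hood, here's a minimal, self-contained sketch over a made-up next-word distribution. The five candidate words and their probabilities are invented for illustration (real models pick from tens of thousands of tokens), but the filtering logic is the standard idea behind each setting:

```python
import math

def apply_temperature(probs, temperature):
    """Low temperature sharpens the distribution (predictable picks);
    high temperature flattens it (wilder picks)."""
    logits = [math.log(p) / temperature for p in probs.values()]
    z = sum(math.exp(l) for l in logits)
    return {tok: math.exp(l) / z for tok, l in zip(probs, logits)}

def top_k(probs, k):
    """Keep only the k most likely next words, then renormalize."""
    best = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    z = sum(p for _, p in best)
    return {tok: p / z for tok, p in best}

def top_p(probs, p_threshold):
    """Keep the smallest set of words whose probabilities sum to at
    least p_threshold, then renormalize."""
    kept, total = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        total += p
        if total >= p_threshold:
            break
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}

# Toy distribution over candidate next words (invented numbers):
dist = {"budget": 0.50, "money": 0.25, "wealth": 0.15, "coin": 0.07, "nest": 0.03}

sharp = apply_temperature(dist, 0.2)  # low temp: "budget" dominates even more
flat = apply_temperature(dist, 2.0)   # high temp: the underdogs gain ground
```

So "Temp 0.2, Top-K 5" really means "sharpen the odds, then sample from only the five safest words", which is why those outputs sound so predictable.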
3️⃣ Easy Prompting Techniques That Actually Work
🛠️ The most powerful formats:
| Technique | Example Prompt | Why It Works |
|---|---|---|
| Zero-shot | "Write a one-line caption for this photo." | Good for simple tasks but often too generic and lacks context. |
| One-shot / Few-shot | "Write a tweet like this: 'Mondays are for deep work and deeper coffee.'" | Shows the model a structure or style to follow, improving consistency. |
| Chain of Thought (CoT) | "I'm trying to calculate my budget. Let's break it down step by step." | Helps with logical tasks by encouraging the model to reason through each step. |
| Step-back prompting | "Before writing the email, who is the audience and what do they care about?" | Adds useful context by zooming out before zooming in. |
| Role prompting | "You are a career coach. Help me write a confident LinkedIn summary." | Frames tone and expertise for more relevant, tailored output. |
| System prompting | "Summarize the text and return only the top 3 points in bullet form." | Defines how the model should behave or return results. |
| Contextual prompting | "Context: I'm writing a blog post for Gen Z freelancers. Suggest catchy titles." | Provides task-specific info that guides the model's output effectively. |
| Self-consistency | Ask the same prompt multiple times: "Is this email spam or not? Explain why." | Improves reliability by comparing multiple reasoning paths and choosing the best. |
| Tree of Thoughts (ToT) | "Give me 3 ways to explain blockchain to a 10-year-old, then pick the best one." | Encourages deeper reasoning by exploring several paths before settling on one. |
| ReAct (Reason & Act) | "Use tools to find the weather in Paris this weekend, then suggest what to pack." | Combines reasoning and action for multi-step, research-based tasks. |
| Automatic Prompt Engineering | "Write 10 different ways someone might say: 'I need help resetting my password.'" | Uses the model to generate effective prompt variations and optimize instructions. |
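Self-consistency, for instance, is easy to picture in code: ask the same question several times and keep the majority answer. A minimal sketch, assuming you've already collected the answers (the sampled strings below are made-up stand-ins for real LLM calls run at a high temperature):

```python
from collections import Counter

# Simulated outputs from asking "Is this email spam or not?" five times.
# In a real setup each entry would come from a separate LLM call; these
# values are invented for illustration.
sampled_answers = ["spam", "spam", "not spam", "spam", "not spam"]

def self_consistent_answer(answers):
    """Self-consistency: compare several independent reasoning paths
    and keep the answer the model reaches most often."""
    return Counter(answers).most_common(1)[0][0]

verdict = self_consistent_answer(sampled_answers)  # majority vote wins
```

One flaky answer out of five no longer decides the outcome, which is the whole appeal of the technique.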
✅ Try this: Add "Let's think step by step" to your next complex request and compare the results.
🎯 Alternatively, just use our tool Prompt Genie to create these super prompts for any AI task.
👀 ICYMI
AI Upskilling
Microsoft just released a free course on How to Create AI Agents for beginners.
AI Roundup
Verizon Sees Sales Boost with Google AI Assistant
Verizon saw a sales lift after fully deploying a Google AI assistant in Jan 2025. Powered by Google's Gemini LLM, the tool helps customer service reps respond faster and more effectively by tapping into a database of 15,000 internal documents.
ChatGPT Gets a Major Memory Upgrade
OpenAI upgraded ChatGPT with memory, allowing it to recall past interactions for more personalized help across writing, learning, and advice.
As always, you're in control of ChatGPT's memory. You can opt out of referencing past chats, or memory altogether, at any time in settings.
If you're already opted out of memory, you'll be opted out of referencing past chats by default.
If you want to change what ChatGPT knows
– OpenAI (@OpenAI)
5:06 PM • Apr 10, 2025
Did you learn something new?
💬 We'd Love Your Feedback
Got 30 seconds? Tell us what you liked (or didn't).
Until next time,
Team DigitalSamaritan