
AI Prompting in 2026

12 concepts on using ChatGPT, Claude, and Gemini well: context, reasoning modes, deep research, multimodal, and AI desktop apps.

Most people use AI like a Google search. They type a short question, skim the answer, and move on. That works for trivia. It fails for everything that actually matters in your life and your work.

Power users do something different. They brief AI the way they would brief a smart-but-new colleague: with files, context, constraints, and a clear ask. They expect three options instead of one. They argue. They iterate. They check the work. The gap between a novice prompt and a power-user prompt is not cleverness; it is a handful of habits anyone can learn in an afternoon.

This page is that afternoon. Twelve concepts, grouped into four short parts. No setup, no jargon you cannot guess from context, and nothing you have to code yourself (a few optional code sketches appear along the way for the curious). The single insight that makes everything else click is that almost every "advanced technique" on this page reduces to one of two moves: get the right context in, or keep the wrong context out. Read each section through that lens.

A note on tools: examples reference ChatGPT, Claude, and Gemini because most readers have one of those. The skills transfer to any modern chat AI. Where a feature is exclusive to one product, it is named explicitly.

How to read this

Read straight through once for the shape. Then come back and try the prompts in the closing block. Reading without trying gives you the words; trying gives you the skill.

A short note on what changed since you last looked

If you used ChatGPT in 2022 or 2023 and decided it was a clever toy, the tool you remember is not the tool you have now. A few changes that happened quietly:

  • Context windows grew by roughly 1000x. A 2022 model held a few thousand words. A 2026 model holds hundreds of thousands, sometimes a million. That changes what you can stuff into a prompt: a whole book, several days of speech, a folder of contracts.
  • Reasoning became real. "Think step by step" used to be a magic phrase. Now models have explicit thinking modes that run for seconds, sometimes minutes, exploring multiple approaches before answering. The class of problems they can handle shifted from "minutes of human time" to "many hours of human time" in about eighteen months.
  • Web search and code execution became built-in tools. The model decides when to search the web or run code, then uses the results as part of its answer. This is invisible to most users; once you know it is happening, your prompts get sharper.
  • Multimodal stopped being a sidebar. You can drop a photo, a PDF, a spreadsheet, a voice memo, or a folder of files into a prompt and ask questions about them. The model handles all of those in one stream.
  • Desktop apps appeared. A new category of products (Cowork, Microsoft Copilot, Google Antigravity) can find your files and act on them with permission. This is not chat anymore; it is closer to delegating a small task to a coworker.

If your mental model of these tools is out of date by even eighteen months, you are using them at maybe 20% of what they can do today. This page closes that gap.


Part 1: How AI knows things

The first three concepts are about what is actually happening when you ask AI a question. If you understand this, you stop being surprised by the failures.

1. Novice vs power user

Watch what changes between the two columns below. The question is the same; the prompt is not.

| | Novice prompt | Power-user brief |
|---|---|---|
| Length | One line | A short brief, plus attached files |
| Context given | None | Insurance quotes, dealer pricing, cost-of-ownership spreadsheet |
| Constraints stated | None | "30-minute commute each way, two kids in car seats" |
| Instruction to think | None | "Read everything attached and think hard before answering" |
| AI response | Three popular models, generic | Five-year cost comparison, safety analysis tied to the car-seat constraint, a recommendation with the conditions under which it flips |

A few real contrasts from the field:

  • Buying a car. Novice: "which car is best?" Power user: uploads spec sheets, dealer quotes, and insurance plans, then asks "what are the trade-offs? Read everything and think hard."
  • Self-review at work. Novice: "write a self-review for my boss." Power user: uploads a screenshot of their project tracker, recent project docs, and a voice memo of notes, then asks for a draft.
  • Critiquing a business idea. Novice: "I have a great business idea, mobile tie-dyeing, critique it." That is sycophancy bait; the AI will mostly applaud. Power user: "Analyze objectively. Use this rubric: is there a problem worth solving, is there a market, is there a competitive advantage?" The AI scored that idea 8 out of 100 and explained why.
  • Writing a blog post. Novice: "write a blog post about the BlackBerry." Result: AI slop. Power user: outline first, critique outline, expand each heading into bullets, critique bullets, only then ask for prose.

The mental model that ties these together: AI is like a really smart fresh college grad. Highly motivated. Doesn't know much about you yet. Brief them like one. Would a new colleague have enough information to do this job well? If not, give them more.

2. Pretrained knowledge

AI did not learn by reading the world. It learned by reading text about the world. Specifically: massive amounts of internet text. Reddit and Quora threads, Wikipedia, books, news articles, research papers, blogs, forums.

How often a topic appears in the training data roughly tracks how reliable the answer will be. So:

  • Strong: cooking, celebrity gossip, common medical advice, top-1000 movies, popular programming languages, what is on the Voyager 1 record (NASA spacecraft launched in the 1970s, around 25 billion kilometers, about 15 billion miles, from Earth, carrying greetings in 55 languages), why cats stare at walls (they detect subtle sounds and movements humans miss).
  • Sparse: quasars (extremely bright objects in the sky powered by black holes), Cantonese (under 0.1% of internet text), regional history, niche professional knowledge.
  • Absent: your company's secret data, your private calendar, anything published after the model's knowledge cutoff date, anything someone never put on the public internet.

Two practical consequences:

Don't waste time fixing typos. AI was trained on internet text, which is full of typos. It handles misspelled prompts gracefully. Misspelling "definately" will not change the answer.

Watch for absorbed errors. AI also absorbed misconceptions and outdated information from those same sources. A confidently wrong forum post becomes confidently wrong in the model. Check anything important against a primary source.

Why this matters for thinking

Part 0 will teach you to detect broken reasoning. The first place to look for it is in confident-sounding pretrained answers about topics where the training data was thin or contested. Confidence is not a signal of correctness.

A quick mental test before you trust a pretrained answer:

| Question type | How well-represented in training data? | Trust level |
|---|---|---|
| "How do I make a roux?" | Cooking is one of the most discussed topics on the internet. | High. |
| "Plot of a top-1000 movie." | Reviewed and re-reviewed thousands of times. | High. |
| "History of an obscure village." | Possibly only one Wikipedia paragraph, or none. | Low; verify against a primary source. |
| "Recent regulatory change in my industry." | Almost certainly after the knowledge cutoff. | Trust nothing without web search. |
| "What did our company decide last quarter?" | Not in the training data at all. | Trust nothing; the model is guessing. |

This is not a rule you have to memorize. It is the same instinct you would apply to any other source: "how would this person know that?" Apply it to AI too.

A non-software example. A reader once asked an AI for a summary of the rules of a regional folk game played in their grandmother's village. The AI confidently produced three paragraphs of rules. The grandmother, asked, said the rules were almost entirely wrong: the AI had blended descriptions of similar games from other regions because the specific game was barely on the internet. The AI did not lie; it generalized from sparse data. The reader's mistake was not asking, but assuming confidence equaled accuracy.

3. The 3 retrieval modes: pretrained, web search, deep research

When you ask a question, modern AI tools quietly choose how to answer. They answer from pretrained knowledge alone, fire off a web search and read a few pages, or run deep research, spending several minutes scanning dozens of sources and writing a structured report.

You should know which mode is firing, because each has different strengths and different failure modes.

The three modes side by side:

| Mode | Trigger | Time | Sources | Best for | Weakness |
|---|---|---|---|---|---|
| Pretrained | Any common-knowledge question | Seconds | Model's training data | Definitions, brainstorming, common facts | Stale, misses obscure or local info |
| Web search | Current events, location, niche queries, real-time data | Tens of seconds | A handful of live pages | Quick research, one-question answers | Cites popular sources first, can misread pages |
| Deep research | Explicit request, or question needing synthesis | Minutes (sometimes 10+) | Dozens of live pages | Multi-dimensional, structured reports | Slow, overkill for simple questions |

A few examples to make this concrete:

  • Pretrained answers fine: "why do cats stare at walls," "what's on the Voyager 1 record," "summarize the plot of Hamlet." These do not change week to week.
  • Web search rescues a stale model: GPT-5.4's knowledge cutoff was August 2025. The "6 7" meme went viral after that. Without web search, the AI has no idea what you are talking about. With web search, it pulls a recent article and answers correctly.
  • Web search going wrong: a friend asked "where to run in Henderson, Nevada." The AI cited a 20-year-old web page and recommended a school no longer open to the public. Web search does not check whether sources are current.
  • Deep research worth the wait: "plan a Halloween haunted house in our neighborhood, including permits, fire safety, and noise ordinances." The AI proposes a research plan, runs many parallel searches, summarizes, decides what to dig into next, and produces a multi-section report with checklists. This is not a chatbot answer; it is closer to handing the work to a junior researcher for an hour.

How web search actually works (and why it sometimes misreads pages)

Under the hood, there are usually two models cooperating. A user-facing model talks to you. A separate assistant model issues the searches, scans the result list, downloads the most relevant pages, and writes short summaries of each. Only those summaries flow back to the user-facing model.

The user-facing model never reads the original page. It reads a summary of the page written by another AI. That is why it sometimes misrepresents what a page actually said: information went through a translation layer before it reached the model talking to you, and translation layers lose nuance.
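If it helps to see the shape of that pipeline, here is a toy sketch in Python. Every function is a stand-in invented for illustration, not any product's real API; the point is the last step, where the answering model sees only the summaries.

```python
# Toy sketch of the two-model web-search pipeline described above.
# All functions are hypothetical stand-ins, not a real product's API.

def fetch_top_results(query: str) -> list[str]:
    # Stand-in for the assistant model's search-and-download step.
    return ["full text of result page one ...",
            "full text of result page two ..."]

def summarize(page: str) -> str:
    # Stand-in for the per-page summary. This is the lossy translation
    # layer: only what survives this step reaches the user-facing model.
    return page[:30] + "..."

def answer(question: str) -> str:
    summaries = [summarize(p) for p in fetch_top_results(question)]
    # The user-facing model reads the summaries, never the pages.
    return f"Answer to {question!r}, based only on: {summaries}"

print(answer("where can I run in Henderson, Nevada?"))
```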

Practical fix: tell the AI which kinds of sources to use. Instead of "are vaccines safe," try "use the World Health Organization, the FDA, the European Medicines Agency, and peer-reviewed studies. Do not use forums or personal blogs." Source quality is a knob you can turn. Default settings cite popular sources first (Reddit, Wikipedia, YouTube, Google itself, Yelp), which are often reliable but not always trustworthy for high-stakes questions.

A second fix: ask the AI to quote the source. "For each claim, quote the exact sentence from the source page that supports it." This forces the assistant model to surface original wording, which catches a lot of summary-layer drift.

A non-software example. A teacher used deep research to plan a unit on local water quality for her 7th-grade class. Her prompt: "Research current water quality issues in [her city] over the last 24 months. Use the EPA, the city's public utility reports, and peer-reviewed studies. Avoid news editorials and forums. Produce a structured report with: (1) the three most-cited issues, (2) data tables showing trends, (3) three age-appropriate classroom activities students could do to investigate one of these issues at home." Eight minutes later she had a unit plan grounded in current local data. Pretrained mode could not have done this; web search alone would have produced a shallower answer; deep research was the right tool because the question was multi-dimensional and current.

Choosing a mode in your head. You usually do not pick a mode by clicking a button; the AI picks based on your prompt. But you can steer:

| Phrasing pattern | What it usually triggers |
|---|---|
| "What is X" / "Summarize Y" | Pretrained only. |
| "What's the latest on X" / "Today" / "This week" / a specific city | Web search. |
| "Research X thoroughly," "produce a report with citations," "use these source types" | Deep research (in tools that have it; otherwise extended web search). |
| Attaching files | Stays pretrained for the files; may search the web for context if the prompt asks for current info. |

AI vs Google. They are not the same tool. Use Google for quick scans, navigating to a specific known site, or buying a thing (the air filter for a 2013 Honda Civic). Use AI when you need synthesis: pros and cons, multi-source comparison, a written-out analysis. The choice depends on whether you want a link or an answer.

A side-by-side rule of thumb:

| Task | Better with Google | Better with AI |
|---|---|---|
| "Find the official IRS page for form 1040." | Yes. You want to land on a specific known site. | No. |
| "Compare three diabetes medications and what the recent evidence says." | Slower. You'll read 8 tabs. | Faster. AI synthesizes the evidence in one place. |
| "Buy a replacement charger for a 2018 ThinkPad." | Yes. You want a product link. | No. |
| "Plan a 4-day Lisbon trip with a 6-year-old, no museums." | Slow. You'll juggle blogs and reviews. | Fast. AI integrates constraints. |
| "What's the weather tomorrow?" | Either. | Either. |
| "Why are my tomato plant leaves yellowing?" | OK. Multiple gardening sites. | Better with a photo attached. |

If your question is "where is X," reach for Google. If your question is "given all this, what should I think," reach for AI.

How to get more reliable web-search results

When you do want web search, three small habits raise the quality:

  1. Name the sources you trust. "Use the WHO, the FDA, and peer-reviewed studies, not forums."
  2. Ask for citations inline. "Cite the source after each claim."
  3. Ask the AI to flag what it could not verify. "If a claim cannot be supported by the cited sources, mark it 'unverified'."

These three lines, pasted into any web-search prompt, cut down on the most common failure mode: the AI quietly synthesizing across sources and producing a confident sentence that no single source supports.


Part 2: Talking to AI well

The next four concepts are the heart of the page. They are the habits that separate someone who finds AI useful from someone who finds it transformative.

4. Context is the whole game

Humans hold about 7 things in active working memory. Modern AI models can hold hundreds of thousands of words at once, sometimes a million. To put that in perspective: 750,000 words is roughly the first four to five Harry Potter books, or several days of continuous speech. The model can read all of it before answering.

But it can only read what you give it. Context is everything that ends up in the model's window for a given response: the system prompt the product set, the descriptions of any tools it can call (web search, code, file access), your prompt, the chat history of this conversation, and any files you uploaded.

┌─────────────────────────────────────────────────┐
│ 5. Uploaded files (PDFs, sheets, images)        │
├─────────────────────────────────────────────────┤
│ 4. Chat history (every prior turn)              │
├─────────────────────────────────────────────────┤
│ 3. Your prompt (the message you just typed)     │
├─────────────────────────────────────────────────┤
│ 2. Tool descriptions (web search, code, ...)    │
├─────────────────────────────────────────────────┤
│ 1. System prompt (invisible product setup)      │
└─────────────────────────────────────────────────┘
  ↑ The model only knows what is in this stack.
  ↑ Capacity: ~750,000 words (4–5 Harry Potter books).
  ↑ Anything you do not put in the stack does not
    exist for this answer.
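If you like seeing structure, the same stack can be written out the way chat systems typically represent it internally: one ordered list of messages. The field names below are illustrative, not any specific product's API; the idea they carry is exact.

```python
# The context stack as an ordered message list. Field names are
# illustrative; real products differ in detail.
context = [
    {"role": "system",    "content": "You are a helpful assistant ..."},  # 1. product setup
    {"role": "system",    "content": "Tools: web_search, run_code"},      # 2. tool descriptions
    {"role": "user",      "content": "Earlier question in this chat"},    # 4. chat history
    {"role": "assistant", "content": "Earlier answer"},                   # 4. chat history
    {"role": "user",      "content": "Your new prompt + attached files"}, # 3 and 5
]
# The next answer is computed from this list and nothing else. Your
# calendar, your hard drive, and yesterday's other chat are not in the
# list, so they do not exist for this answer.
```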

Concrete contrast:

  • Bare prompt: "pros and cons of studying physics versus zoology." You will get generic high-school-counselor advice.
  • Context-rich prompt: the same question, plus your career assessment results uploaded as a PDF and a screenshot of your high-school schedule. Now the AI can talk about your specific aptitude profile, your specific course history, and which choice fits which.

Same model. Same question. Different answer. The difference is the context, not the cleverness of the prompt.

A non-software example. A small-business owner kept asking AI "how do I price my consulting work" and getting generic advice (hourly versus project-based, value-based pricing, etc.). Then she tried again with three things attached: her last six invoices, a screenshot of two competitors' rate sheets she had been sent, and a one-paragraph note on her client mix. The same AI, now grounded, produced a specific recommendation: a tiered structure with a named anchor price, justifications grounded in her competitors' positioning, and a cleanly written email she could send to her next prospect. The model had not become smarter between attempts. It had been given enough to do the job.

The discipline you are learning: before you press send, ask yourself what a smart new colleague would need in front of them to answer this well. Then attach those things.

The unifying mental model: AI is like a really smart fresh college grad who is highly motivated, but doesn't know much about you yet. They will read everything you put in front of them carefully. They will not guess what you did not tell them. They will not search your filing cabinet on their own. They will not infer your industry, your team's history, your last quarter's results, or the email thread from yesterday unless you put those things in the brief. Empathize: would they have enough information in this prompt to actually do the job? If not, give them more.

A second non-software example. A 7th-grade teacher asked AI to "draft a lesson plan on the water cycle." The output was a generic plan she could have found in any textbook: definitions, a diagram, three discussion questions. The next day she tried again, with three things attached: her course syllabus (so the AI knew what came before and what came after this lesson), last week's student worksheets with grades visible (so the AI knew which concepts had landed and which had not), and her school's standardized test format. The new lesson plan opened with a five-minute review of the two concepts last week's worksheets had shown were weak, threaded the new material through the test format the students would see in May, and closed with a check-for-understanding question matched to her syllabus's next topic. Same model, same teacher, same subject. The only difference was that the second prompt told the AI what a smart new colleague would have needed to know.

The habit, restated as a checklist before any non-trivial prompt:

| Question | If yes, attach or describe it |
|---|---|
| Is there a document the answer should be consistent with? | Yes: attach it. |
| Is there a constraint the AI cannot infer (budget, time, who's on the team)? | Yes: state it. |
| Is there prior context (a previous decision, an existing process)? | Yes: summarize it in one paragraph. |
| Is there an output format you want (table, email, bullet list)? | Yes: name it. |
| Is there an audience (a boss, a child, a stranger)? | Yes: name them. |

Five lines of context, properly chosen, beat five paragraphs of cleverness.

Context rot

Modern context windows are large, but not infinite, and recall degrades inside them. The biggest practical mistake people make: they keep one very long conversation going across many unrelated topics. AI just helped you plan a workout, now you ask it to debug a spreadsheet, now you ask it to write a thank-you note to your aunt. The workout context is still in there, distracting the model.

Rule of thumb: when the topic changes, start a new conversation. Cheap to do, free to do, and the answers get visibly better.

Symptoms that tell you a conversation has gone stale:

  • The AI starts referencing earlier parts of the chat that have nothing to do with what you just asked.
  • Its answers get longer and vaguer over time, with more hedging.
  • It contradicts a constraint you stated five turns ago.
  • It starts apologizing repeatedly without making progress.

When you see these, do not "fix it with one more clarifying prompt." That just adds more tangled context to a context that is already tangled. Open a new chat, paste in the one or two facts that actually matter, and continue from there. The reset is almost always faster than the rescue.

A useful pairing of two habits:

| Habit | What it does | When to use |
|---|---|---|
| New chat per topic | Removes accumulated noise. | Whenever you switch tasks. |
| Save the useful state to a file | Lets you re-load only what matters into the next chat. | When a long conversation produced something worth keeping (a plan, a draft, a decision). |

Combined: you do not lose the work, but you also do not drag the noise into the next task.

5. Reasoning, or "think hard"

Until about 2023, the standard advice for hard prompts was "think step by step." That advice is now mostly obsolete. Modern models have built-in reasoning modes that you can invoke directly.

The new keywords:

  • "Think hard" or "think carefully before answering" in your prompt.
  • "Ultrathink" in tools that recognize it (some Claude products do).
  • A thinking-mode toggle in the interface, where one is offered.

When you turn this on, the model can think for many seconds. On hard problems, sometimes more than ten minutes. It is not just typing slower; it is internally exploring multiple approaches, checking its own work, and only then writing the answer you see.

A 2025 METR study tracked the longest task a frontier model could reliably complete. In 2024 the answer was tasks that take humans minutes. By 2025 it was tasks that take humans many hours. That trajectory is still climbing. The implication for you: hand AI real, hard tasks, not just easy ones. It can handle more than your 2023 instincts suggest.

A power-user pattern that uses this well:

I'm choosing between two cars. Attached: spec sheets for both,
my insurance quote for each, and a spreadsheet of my driving
patterns over the last six months.

Read everything. Think hard. Then tell me:
1. The three trade-offs that actually matter for my driving pattern.
2. Which car you'd choose and why.
3. Under what conditions your recommendation flips.

Three things this prompt does: it loads the relevant context, it explicitly invokes thinking, and it asks for structured output instead of a wall of prose. All three are habits.

When NOT to use thinking mode

Quick lookups, summaries of a paragraph, casual brainstorming. Thinking mode is slower and uses more of your usage budget. Save it for the questions where you would have wanted a human to take their time.

A non-software example. A small-business owner needed to choose between three commercial cleaning vendors for her office. She had quotes from all three, plus references, plus a one-page summary of each company's history. Her first prompt: "Which of these vendors should I choose?" The AI gave a generic checklist (look at price, look at reviews, ask for references). Useful, but not a decision.

She tried again with thinking mode on, and a very different prompt:

Attached: three commercial cleaning vendor quotes, three reference
sheets I called myself, and a one-paragraph note on what matters
most to me (consistency over price, since I've been burned twice).

Think hard before answering. Then:
1. Identify the three trade-offs I should actually be weighing.
2. Score each vendor on each trade-off, with one-sentence
justifications grounded in the attached files.
3. Recommend one, and tell me what would make you switch.

The AI thought for about four minutes. The output was a one-page memo, structured exactly as requested, with a recommendation she could act on the same afternoon. Decision time, end to end: one prompt, four minutes of thinking, ten minutes of reading. The same decision had been on her desk for two weeks.

That is what thinking mode is for: not faster, but able to handle the kind of multi-input, multi-trade-off question you would otherwise hand to a thoughtful colleague and wait two days for. The trade is real. You spend a few minutes of compute and a small amount of usage budget. You get back something you would have spent half a day producing yourself.

A note on the METR numbers cited above. METR is a research group that measures the longest task a frontier model can reliably complete, and the ceiling it measures is still climbing steeply. The implication: the tasks you mentally categorized as "too complex for AI" two years ago are mostly now tasks AI can handle, if you brief it well and turn on thinking mode. Re-test your assumptions about what AI can do every six months. They will be wrong.

6. Sycophancy and how to neutralize it

AI models are trained on human feedback. Specifically, on which responses got a thumbs up. Across millions of users, agreeing with people gets more thumbs up than disagreeing. The result: models are biased toward telling you what you want to hear.

A 2024 Washington Post analysis of ChatGPT conversations found the model agreed with users about 10 times more often than it disagreed. Reported phrases included "that's correct," "good point," "you're on the right track," and one painful gem: "dude, you just said something deep without even flinching, you're a thousand percent right."

You can verify this yourself. Same model, opposite framings:

  • "Don't you think remote work is better than office work?" → AI agrees, lists reasons.
  • "Is it true that office work is more productive?" → AI agrees, lists reasons.

The fix is not magic. It is just neutral framing. Compare:

| Bait phrasing (signals your preferred answer) | Neutral phrasing (asks for the answer) |
|---|---|
| "Don't you think remote work is better?" | "How does productivity compare between remote and in-office work?" |
| "Aren't carbon taxes bad for small businesses?" | "How do carbon taxes affect small businesses? Cite both directions." |
| "Do you agree AI will create a lot of jobs?" | "What does current research say about AI's net effect on jobs?" |
| "Does remote work reduce productivity?" | "What does the evidence say about remote work and productivity?" |
| "Find all the positive measures of performance this quarter." | "Summarize this quarter's performance data. Flag both positive and negative trends." |

A subtler form of bait: asking AI to "find all the positive measures of performance" in your data. You have already told the model what answer you want. It will find them, even when the data is mostly negative. Replace with a neutral instruction: "summarize the data, flag what is improving and what is degrading."

A few more bait phrasings worth recognizing in your own prompts:

| Subtle bait you might write | What it signals to the AI | Neutral rewrite |
|---|---|---|
| "Find evidence that this strategy will work." | The conclusion is fixed; AI fills in support. | "Evaluate this strategy. List the strongest arguments for and against." |
| "Why is approach A better than approach B?" | A wins; AI lists reasons. | "Compare approach A and approach B. Score each on cost, risk, and time." |
| "Help me defend my decision to hire X." | Decision is locked; AI provides ammunition. | "Here is my decision and the context. What's the strongest counter-argument I should be ready for?" |
| "Tell me my draft is ready to send." | AI tells you it is ready. | "Critique this draft against these 4 yes/no criteria. Recommend the smallest change that would lift the lowest score." |
| "Confirm that this code is correct." | AI confirms. | "Find any bug, edge case, or unstated assumption in this code. If there are none, say so." |

The pattern: any phrasing that contains a verb like find, defend, confirm, prove, or support hands the AI a conclusion before the question. Replace those with verbs like evaluate, compare, critique, find any, or list both sides. The model will still bias slightly toward agreement, but you have removed the loudest signal.

The general rule: lay out two options without hinting at preference, then ask for pros and cons of each. If you find yourself writing "isn't X true," stop and rewrite as "to what extent, if at all, is X true?"

This is mechanical, not deep

This concept is the cheap version of a much deeper skill. Part 0 Chapter 1 (Asking Better Questions) trains the deep version: how to formulate questions that surface what you do not already know. The neutral-framing trick gets you 80% of the way there for everyday use. The chapter gets you the rest.

A non-software example. A founder asked AI: "I have a great business idea, mobile tie-dyeing for kids' birthday parties, critique it." The AI praised the idea warmly and listed reasons it might succeed. The founder then tried again with a rubric: "Analyze this idea objectively. For each of the following, score 1 to 10 and justify: (1) is there a real problem here, (2) is there a market willing to pay, (3) is there a competitive advantage, (4) what's the unit economics, (5) what are the top three reasons this fails." The same AI gave the idea 8 out of 100 and explained, in concrete terms, why the founder should rethink it. The first prompt was sycophancy bait. The second was an objective rubric. Same model, same idea, opposite verdicts. The difference was how the question was asked.

The objective-rubric pattern. When you ask AI to evaluate something (a draft, a plan, an idea), ambiguous criteria collapse into "great work." Specific yes/no criteria force the AI to actually look. Compare:

| Vague critique prompt (gets sycophancy) | Rubric-based critique prompt (gets honesty) |
|---|---|
| "Score my sci-fi short story out of 100." | "Critique using these 4 criteria, each 1-5, each justified in one sentence." |
| "Is this email professional?" | "Check this email against these 5 yes/no tests: greeting present, ask is in the first paragraph, no jargon, single clear request, polite close." |
| "How is my workout plan?" | "For each day, answer: does it include a warm-up, does total volume fit my time budget, are there 48 hours between same-muscle sessions?" |

The trick is that each rubric criterion must be answerable with a clear yes or no (or a 1-to-5 score with a written justification). Soft criteria ("is this engaging?") leave room for sycophancy. Hard criteria do not.

7. The brainstorm-iterate loop

This is the single highest-leverage habit on this page. If you skip every other section, do not skip this one.

When AI was trained on the internet, most of the internet was common ideas, not creative ones. So the average AI response on a creative question is also common. "Ways to exercise at home": squats, push-ups, planks. Not wrong. Just average.

The way around this is not a magic prompt. It is a loop.

The recipe:

  1. Give all relevant context up front. Not just "ways to exercise"; "ways to exercise given that I have a trampoline, a cat, and I cannot stick to plans for more than three days."
  2. Ask for 3 to 5 options, not one. Forcing alternatives pushes the model past its first instinct.
  3. Give explicit feedback. "I don't like option 1, it's too passive. I do like the trampoline idea but want it shorter. I forgot to mention I have a bad knee."
  4. Ask for 3 to 5 new options informed by the feedback.
  5. Iterate until you have one or two you genuinely like.
  6. Then, and only then, ask AI to flesh out the chosen option in detail.

Worked example, debt payoff:

I have $8,000 in credit card debt at 19% APR, $4,000 in student
loans at 5%, and $1,200 in a retail card at 24%. I have $700/month
free after expenses. I just learned I'll get $450 in cash from a
tax refund. Risk tolerance: low. I sleep badly when I see big
balances.

Give me 5 different repayment strategies, each with a one-line
rationale. Don't expand any of them yet.

Then, after reading the five options:

Reject option 2 (avalanche by interest rate alone): I want
psychological wins early. Reject option 4: I won't open new
accounts. I like option 1 (snowball with the retail card first)
but I'd want to fold the $450 in. Give me 5 new options that
combine snowball-style wins with smart use of that lump sum.

Notice what is happening. You are not waiting for the AI to read your mind. You are showing your taste, and the AI is reshaping the option space around it. After two or three rounds, you have one option that feels exactly right. Now you ask for the full plan.

The same loop works for writing, where it has its own name: outline before drafting.

Iteration 1: ask for 3 outline options for a post on X.
Iteration 2: pick one outline, ask AI to critique it.
Iteration 3: revise the outline, ask AI to expand each heading
into 3-5 bullets.
Iteration 4: critique the bullets, fix the weak ones.
Iteration 5: only now ask for the full draft.

Why this works: editing one word in an outline can change the direction of the whole article. Editing one word in a final draft changes one word. Almost all of the leverage in writing happens at the outline level. AI generates word-by-word from the start, so unless you force structure first, it cannot see the whole shape.

Don't skip steps

The temptation is to ask for the full draft on the first try. Resist it. AI's first draft of anything is workslop: looks polished, says little. The loop turns ten minutes of polished nothing into thirty minutes of actually-useful something.

A worked writing example. A team lead wants to write a 600-word post titled "Why our small AI team is shipping faster than the big team across the hall." Here is what each round of the loop looks like in practice:

Round 1, research first:

I'm writing a 600-word post arguing that small AI-augmented teams
ship faster than larger non-AI teams. Don't write yet. First, give
me the 5 strongest research-backed arguments and the 3 strongest
counter-arguments. One sentence each.

Round 2, three outlines:

Now produce 3 different outline options for the post. Each outline
should have 4-6 headings. They should differ in structure: one
narrative, one analytical, one contrarian. One line per heading.

Round 3, pick one and add an analogy:

I'll go with outline 2 (analytical). I want to weave in a Pixar
analogy: how the original Toy Story team was small and faster than
the giant Disney studio because of new tools. Add this as a recurring
example, not its own section. Revise outline 2.

Round 4, expand to bullets:

Now expand each heading into 3-5 bullets. Telegraphic style, not prose.

Round 5, critique bullets:

Critique the bullets. Which ones are weakest? Which would a skeptical
reader push back on hardest?

Only now does the lead ask for the full draft. The whole process takes about thirty minutes. The output reads like the lead wrote it, because every load-bearing decision was the lead's. That is the loop.

One objection worth answering: this looks like more work than just "write me a post." It is, by about twenty minutes. The thing it produces is closer to twenty hours' worth of value, because it is the only version that gets read all the way through.

The loop is domain-agnostic. It works the same way for: planning a trip, structuring a sales pitch, picking a college major, naming a product, writing a wedding toast, deciding on a renovation, choosing a charity to support. The shape stays constant: load context, demand options, give explicit feedback, demand new options, iterate, then expand. If you find yourself accepting the AI's first answer, you have skipped the loop. Whatever you are working on, it deserves the loop.

A short table of where the loop fits across daily life:

| Decision or task | What "context" looks like | What "options with feedback" looks like |
|---|---|---|
| Planning a 4-day trip | Constraints (budget, dates, who's going, what they hate) | 5 itinerary skeletons; reject two; iterate the rest |
| Naming a product | What it does, who buys it, what it must NOT sound like | 10 names; pick 3 you like, ask for variants on those |
| Writing a difficult email | The recipient, the relationship, the desired outcome | 3 different tones; pick one, refine its specifics |
| Choosing a contractor | Three quotes, three reference notes, your priorities | Side-by-side scoring; ask for the strongest counter to your favorite |
| Picking a learning path | Current skills, time available, end goal | 3 different curriculum shapes; pick one, expand to weekly milestones |
| Designing a logo brief (for a designer) | Brand values, audience, examples you like | 5 mood-board directions; pick one, ask for 5 variants in that lane |

Part 3: Beyond text

Three concepts that unlock things people often do not realize AI can do at all.

8. Images in, images out

Modern AI handles images in two directions: it can look at images you upload, and it can generate new ones from a text prompt. The two skills are very different.

Image input. AI sees images coarsely. It is strong on:

  • Overall scene and composition.
  • Distinct, large object shapes (a giant human-sized hamster wheel treadmill).
  • Whiteboard contents, including diagrams.
  • Handwritten and cursive text (decent, double-check for high stakes).

It is weak on:

  • Fine details. "What gym machines are these?" tends to fail because gym machines look similar through a slightly blurry lens. The AI may answer confidently and wrongly.
  • Counting many small things in a cluttered scene.
  • Reading small print at the edge of an image.

A useful real-world test: a teacher photographed a whiteboard where his head blocked the word "convolutional" in a neural network diagram. The AI inferred the missing word correctly from the rest of the diagram. That is what AI is good at: inferring from the gist. It is not good at zooming in.

For receipts, splitting a bill, or transcribing handwritten notes, AI works well, but always double-check the totals. For multi-image inputs (post-its plus a whiteboard photo plus handwritten notes from a brainstorm), AI can summarize the combined ideas; this is genuinely useful and saves real time.

Image output. Modern AI can generate images from text prompts. Two practical tips:

  1. Use a text AI to write your image prompt. "Generate me a prompt for a fantasy forest illustration in a Studio Ghibli style for a children's book cover." Take that output, paste it into the image tool. The text AI is much better at writing rich image prompts than you are on a first try.
  2. Build visual vocabulary. Words like cinematic, watercolor, cyberpunk, anime, isometric, low-poly, art-deco, claymation are levers. Image models were trained on captioned images and learned these styles by name. Upload images you like and ask AI how it would describe them. That trains your vocabulary.

How image generation works: modern image generators are diffusion models, trained to remove noise from a grid of random pixels step by step until an image emerges. Text is written word by word; an image is refined everywhere at once across those denoising steps. That is why you cannot stop image generation early to save time the way you can interrupt a text response.
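A cartoon of that loop, for the curious: random numbers stand in for pixels, and a hand-written nudge stands in for the trained network. Real diffusion models are far more sophisticated; the shape of the loop is the point, with every "pixel" refined together at each step.

```python
import random

TARGET = [0.2, 0.9, 0.5, 0.7]   # stand-in for "the finished image"

def denoise_step(img: list[float], strength: float = 0.2) -> list[float]:
    # Toy stand-in for the trained network: nudge every "pixel"
    # slightly toward the target, all at once.
    return [p + strength * (t - p) for p, t in zip(img, TARGET)]

image = [random.random() for _ in TARGET]   # start from pure noise
for _ in range(30):
    image = denoise_step(image)   # whole image refined each step;
                                  # no usable partial image midway
print([round(p, 2) for p in image])   # close to TARGET after 30 steps
```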

Older diffusion models had famous weaknesses: weird hands (six fingers), garbled text on signs, characters that change appearance from frame to frame in a comic. Modern models (such as Google's Nano Banana) handle text reasonably, generate consistent characters, and can convert research papers into infographics.

A short table of failure modes still worth watching for, even on modern image models:

| Failure mode | What it looks like | How to mitigate |
|---|---|---|
| Garbled text on signs | The signage in the image reads "HAPRY BIRTDAY" instead of "HAPPY BIRTHDAY". | Specify the text in quotes in the prompt. Generate three variants. Pick the one where the text is right. |
| Inconsistent characters across frames | The same character has different hair color in panels 1 and 2 of a comic. | Use models with explicit character-consistency support; pass the first image back as a reference for the next. |
| Hand and finger errors | Six fingers, fused hands, twisted wrists. | Ask for compositions where hands are partially out of frame, or in pockets, or clearly described. |
| Cluttered backgrounds with implausible objects | A coffee shop where a bicycle merges into a chair. | Specify a simple background, or describe the background explicitly. |
| Wrong aspect ratio | The model defaults to square; you wanted landscape. | Always specify aspect ratio explicitly: "1024x768 landscape" or "16:9". |

A non-software example for image input. A reader photographed a stack of three handwritten recipe cards from a deceased grandmother and uploaded them to AI. The prompt: "Transcribe these three cards. Preserve the original wording and any abbreviations. If a word is unclear, mark it [unclear] and offer your two best guesses." Five minutes later, all three recipes were typed cleanly, with [unclear] marks on the four words the AI could not confidently read. The reader checked those four against the originals (two were obvious, two needed a phone call to an aunt), and the family had a clean digital archive of recipes that had been at risk of being lost. AI did the boring 90% so the reader could focus on the careful 10%.

A small story about image generation

A father whose 7-year-old daughter loved cats wanted a custom birthday cake for her. He used Nano Banana to brainstorm cake designs (generating dozens of variations: cat-shaped, multi-tiered, different frosting styles and color palettes), picked the one she loved, then handed the chosen image to a baker who rendered it as a real 3D cake. Total iteration time on the design: an afternoon. Total cost: a few cents per image.

The point is not the cake. The point is that for ~$0.30 and an hour of taste-driven iteration, a person who is not a designer produced a one-of-a-kind brief that a professional could execute against. That is a new kind of creative leverage, and it is widely available.

9. Building small apps with one prompt

Modern AI can build small games, websites, and tools from a single prompt. Not yet for large software, but for small useful things, this is genuinely accessible to people who have never written code.

The recipe is just three slots:

Goal: what should this thing do?
Input: what does the user provide?
Output: what does the user see?

Examples that work today:

  • Pomodoro timer. "Build a Pomodoro timer with a yellow theme. 25-minute work sessions, 5-minute breaks, a satisfying click when each cycle ends."
  • Bill splitter. "Build an app where I enter a total bill, a tax amount, and the names of friends. It splits the bill including tax and shows each person's share."
  • Outfit picker. "Build an app that takes today's weather (temperature and precipitation) and recommends an outfit from a closet of items I describe."
  • Fireworks simulator. "Generate a fun fireworks simulator. Input: I click on the screen. Output: a colorful display of fireworks at the click point."
  • Place-obstacles game. "Build a game where the user places obstacles and a goal, and runs a simulation that tries to reach the goal."
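To demystify what comes back, here is roughly the arithmetic at the heart of the bill-splitter example above, written in Python. The names are illustrative; a one-prompt build wraps logic like this in buttons and input boxes.

```python
def split_bill(total: float, tax: float, names: list[str]) -> dict[str, float]:
    # Even split of the bill plus tax across everyone named.
    share = (total + tax) / len(names)
    return {name: round(share, 2) for name in names}

print(split_bill(84.00, 7.56, ["Ana", "Ben", "Chloe"]))
# -> {'Ana': 30.52, 'Ben': 30.52, 'Chloe': 30.52}
```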

What is still hard:

  • Multiplayer over the internet. Networking, accounts, and matchmaking are still beyond a one-prompt build.
  • Live AI feedback in a different language. A French-conversation tutor that listens, corrects pronunciation, and adapts in real time is genuinely hard.

The intuition you build: small things that fit on one screen, with no accounts and no external services, work. Anything beyond that needs more than one prompt, and usually some real engineering.

A non-software example. A parent built a yellow cat-themed typing game for his daughter after her teacher mentioned the kids could use typing practice. He is not a software engineer. The prompt was three sentences:

Build a typing game for a 7-year-old. Goal: practice typing
common short words. Input: words appear, the player types them
before they reach the bottom of the screen. Output: a yellow
theme, a cute cat mascot that cheers when the player gets a
word right, increasing speed across levels.

What came back worked. Not perfectly, not on the first try, but iterated to "good enough for a kid" inside an hour. The skill being built here is not coding. It is the ability to write a clear brief and iterate it. That skill is universal.

| Idea | Probably one-prompt friendly | Probably needs more |
|---|---|---|
| A timer with custom theme | Yes | |
| A bill-splitter for friends | Yes | |
| A flashcard quiz from a list of words | Yes | |
| A simple platformer for one player | Yes | |
| A multiplayer game over the internet | | Yes (servers, accounts) |
| A working e-commerce store | | Yes (payments, inventory) |
| A live language tutor with voice feedback | | Yes (real-time audio) |
| A daily-checking habit tracker that syncs across devices | | Yes (cloud sync, accounts) |

10. Data analysis (the model writes and runs code)

When you ask AI a question that needs calculation or graphing, modern tools quietly do something remarkable: the model writes code, runs it, and returns the result. Code execution is just another tool the model can call, like web search.

This is much more reliable than asking the model to do math in its head. The model is doing math the way you would: by running a calculator. It is the calculator that is precise; the model is just choosing what to compute.

Bubble tea shop example. A small business has a year of sales data: drinks, dates, quantities. The owner asks: "Which drinks had the biggest changes in sales over the year? Graph them."

The AI looks at the spreadsheet, writes code to compute month-over-month changes per drink, observes that most drinks are flat and four stand out, generates a colored line graph of those four, and notes the patterns. "Strawberry matcha rose sharply in spring; consider re-running that promotion next year." That is not a generic answer. That is an answer grounded in the actual data.
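For the curious, the code the model writes for a question like this looks roughly like the sketch below (pandas; the file name and the column names drink, date, and quantity are assumptions for illustration, since the model reads the real headers from the uploaded file).

```python
import pandas as pd

sales = pd.read_csv("sales.csv", parse_dates=["date"])  # assumed file/columns

# Total quantity sold per drink per month.
monthly = (sales
           .groupby([sales["date"].dt.to_period("M"), "drink"])["quantity"]
           .sum()
           .unstack("drink"))

# Which four drinks moved most between the first and last month?
change = (monthly.iloc[-1] - monthly.iloc[0]).abs()
movers = change.nlargest(4).index

monthly[movers].plot(title="Biggest movers, month by month")
```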

Then a longer prompt: "Create a one-slide year-in-review graphic for the shop. Analyze the data carefully for insights worth featuring." This triggers minutes of thought; the AI writes code, runs analyses, picks insights, designs annotations, and produces a finished dashboard.

What this is good for:

  • Spreadsheet analysis (running tracker, sales records, budget files, lab data).
  • Quick graphs of trends that you cannot see by squinting at rows.
  • Aggregations and comparisons that would take you 20 minutes in Excel.

What to double-check:

  • Final totals. Code is precise, but the AI may have summed the wrong column.
  • Labels on graphs. The numbers are usually right; the captions are sometimes confidently wrong.
  • Anything where the analysis depends on a column the AI may have misinterpreted.

Reliability is much higher than memory-based math, but it is not infallible. Treat AI data analysis the way you would treat work from a sharp junior analyst: useful, fast, almost always right, occasionally wrong in instructive ways.

A non-software example. A runner uploaded six months of running-tracker data (a CSV from a fitness app) and asked: "How are my pace and distance progressing? Are there any patterns I should know about?" The AI wrote code, plotted weekly averages, and noticed two things the runner had not: pace consistently dropped after every long-run weekend (likely fatigue), and distance plateaued in the third month before climbing again. The recommendation: a deload week every fourth week, and a slower long-run pace. The runner had stared at this same data in the app's dashboard for months without seeing those patterns. AI did not invent insight from nothing; it computed what the runner did not have time to compute.

A useful pattern: ask for the chart it would draw

When you upload data, your first prompt does not have to be the question. It can be: "Describe this dataset. What columns are here, what do they represent, and what 3 charts would best show what is going on?" Read the answer, pick the chart you want, then ask for it. This catches misinterpreted columns before they become wrong analyses.


Part 4: Working safely and choosing tools

Two final concepts that change what AI can do for you, and how to pick the right tool for the job.

11. AI desktop apps and permissions

There is now a whole category of products called AI desktop apps: apps that run on your computer and, with permission, can find your files, read them, and act on them. Examples include Cowork, Microsoft Copilot, and Google Antigravity. Cowork is Anthropic's product. The category is growing.

What these can do that chat cannot:

  • Look through a messy folder of PDFs, propose a new organization (rename files, move them, create subfolders), and execute the plan once you approve.
  • Pull together related files for a project (you say "I'm filming on these dates and these people are involved") and notice things on its own (a crew member's birthday falls during the shoot; do you want to fold in a celebration?).
  • Read across a folder and summarize: "what did I work on last quarter, based on the contents of this projects/ folder?"

The workflow that makes this safe:

  1. Tell it the task. ("Reorganize this folder by client.")
  2. Ask for a plan, not action. The app proposes a list of file operations.
  3. Review and edit the plan. Catch the rename you do not want before it happens.
  4. Only then approve execution.
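In code terms, that workflow is a dry run: the tool emits its plan as data, you review and edit it, and only the approved plan executes. A minimal sketch follows; the operations and the executor are illustrative inventions, not any real app's format.

```python
def execute(plan: list[dict]) -> None:
    # Stand-in executor. A real desktop app performs the file
    # operations here; remember its deletes may bypass the recycle bin.
    for step in plan:
        print("executing:", step)

plan = [
    {"op": "mkdir",  "path": "clients/Acme"},
    {"op": "move",   "src": "scan_0042.pdf",
                     "dst": "clients/Acme/invoices/scan_0042.pdf"},
    {"op": "rename", "src": "notes-final2.docx",
                     "dst": "clients/Acme/notes/2025-06-kickoff.docx"},
]

for step in plan:    # steps 2-3: review the plan; nothing has run yet
    print("proposed:", step)

if input("Execute this plan? [y/N] ").lower() == "y":
    execute(plan)    # step 4: only now does anything actually change
```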

Read this before you give any AI app file access

Two facts most people learn the hard way:

  • Deleted files often do NOT go to your recycle bin when an AI app deletes them. They are gone.
  • Edited files do NOT keep an edit history unless you have version control. The AI's change overwrites the previous version.

Until you have done this safely a few times, scope every permission request to the smallest folder needed for the task. Do not approve "full disk access" for an app you have used twice.

This is a genuinely new shape of tool. Treat it that way: like the first time you handed a junior employee the keys to a real account. Useful, fast, and worth being careful with.

A non-software example. A consultant had a folder called clients/ that had grown to 240 PDFs over four years: contracts, invoices, scoping documents, hand-scanned receipts, meeting notes. She told an AI desktop app: "Look through clients/. Propose an organization scheme. Do not move any files yet. Show me the proposed scheme as a tree." The app produced a clean tree: one folder per client, sub-folders for contracts, invoices, and notes, with a flagged list of 18 files it could not confidently classify. She edited the proposal (renamed two clients, merged two folders), then approved execution. Total time: about fifteen minutes. The same job had been on her "someday" list for three years. The unlock was not the AI doing the thinking; it was the AI doing the tedium so the thinking became cheap.

The permission ladder. A useful sequence for getting comfortable:

| Comfort level | What to allow | What to keep saying no to |
|---|---|---|
| First sessions | Read-only access to a single small folder. | Anything that writes, deletes, or renames. |
| After 2-3 successful runs | Read and write inside one specific folder. | Access to broader directories like the desktop or documents root. |
| After a clean week | Read across a project tree, write inside a scoped subfolder. | Anything outside that project. |
| Trusted | Tool-specific permissions ("rename PDFs in this folder," "edit Word docs in this folder"). | Open-ended "do whatever you need." |

The principle: scope grows with track record, not with how much you trust the company that built the tool. Trust is earned by behavior in your specific workflow.

12. Cost, speed, and which model to use when

A simple stack to keep in your head:

Text   │█                   seconds, fractions of a cent
Speech │███                 seconds, a few cents per minute
Images │████████            tens of seconds, several cents,
       │                    no early-stop
Video  │████████████████    minutes per generation, many
       │                    cents to a few dollars, hard
       │                    to iterate
       └──────────────────► time and cost

Costs are trending DOWN year over year.
The bar lengths will shrink. The ordering will not.

In words:

  • Text: seconds, fractions of a cent per response.
  • Speech: seconds, a few cents per minute of audio.
  • Images: tens of seconds, several cents per generation. No early-stop, the whole image generates at once.
  • Video: minutes per generation, many cents to a few dollars. Iteration is painful because each round is slow and expensive.
  • Deep research: minutes, several cents to a quarter, but synthesizes dozens of sources for you.

Two implications:

  1. Iteration cost shapes what you do. You can iterate on text 50 times in an afternoon. You cannot iterate on video 50 times in an afternoon. So when you generate images or video, invest more in the prompt up front (and use a text AI to write it).
  2. Costs are trending down. The image that costs you 10 cents today will cost a fraction of that next year. Generating art for your home, a birthday card, or a wedding invitation is rapidly becoming free.

Which model for which task? AI is jagged: different models are good at different things, and the leader changes every few months. There is no single best model. Two habits help:

  • Try the same prompt in 2 to 3 models routinely. Same question, three tools. Read all three answers. The differences will surprise you, and they update your intuition about which tool is best for which kind of question.
  • Don't marry one tool. A worker who only uses one AI is a worker who is wrong about which tool is best for two-thirds of their tasks. Switching is free; you just paste the prompt in a different tab.

The best AI for your task today is not the best AI for your task in three months. Stay loose.

A rough snapshot of what each major model tends to be good at right now (this will change; treat it as a starting point, not a verdict):

| Tool | Tends to be strong at | Tends to be weaker at |
|---|---|---|
| ChatGPT | Conversational range, image generation in-product, broad task coverage. | Sometimes verbose; can over-format with lists and headings. |
| Claude | Long-document understanding, careful reasoning on hard prompts, writing voice. | In-product image generation is less central than competitors. |
| Gemini | Fast web search and source synthesis, deep research with rich output (charts, tables), tight integration with Google's data. | Tone can feel more clipped; some responses lean shorter than ideal. |

Three habits that compound:

  1. Have at least two tabs open. A primary tool and a backup. When the primary gives you something that does not feel right, paste the same prompt in the backup. The second answer is often the tiebreaker.
  2. Keep a prompt scratchpad. A note file (any text file works) where you collect prompts that produced unusually good results. Reuse and adapt them. This is your personal library.
  3. Notice when the model is wrong. Not as scolding, as data. Wrongness is a free signal about where this tool's edges are. Logging "tool X confidently wrong about Y" once a week is more useful than reading any 2,000-word AI newsletter.

A small ritual that pays off

Once a month, pick one task you do regularly (writing weekly status updates, planning meals, summarizing a recurring document). Run that task through three different AI tools. Note which one did it best. Use that one for that task until next month, when you re-test. Your tooling stays current without effort.


A short recap before you try the prompts

Twelve concepts is a lot. The shape of the page in one paragraph:

You learned that AI knows things from a snapshot of the internet (concept 2), that it has three retrieval modes for going beyond that snapshot (concept 3), that the single biggest determinant of answer quality is how much relevant context you load up front (concept 4), that modern models can think hard if you ask them to (concept 5), that they are biased toward agreement and that neutral framing fixes most of that bias (concept 6), that the iterate-with-explicit-feedback loop is the highest-leverage habit on the page (concept 7), that AI can also see images, generate them, build small apps, and run code (concepts 8 to 10), that there is a new category of file-aware desktop apps with new safety considerations (concept 11), and that the right tool for a job changes month to month so you should stay loose (concept 12).

Underneath all of that is one move, repeated in a dozen disguises: get the right context in, keep the wrong context out. If you never remember a single thing from this page except that sentence, you will still be in the top quartile of users.


Try this now: ten prompts before the Thinking Baseline

Reading is no substitute for trying. Open ChatGPT, Claude, or Gemini in another tab and run these ten prompts in order. They take about twenty minutes total and exercise every concept on this page.

The ten prompts

1. Web-search trigger. Forces the AI to leave its training data and look up current info.

What major news happened today in [your country]? Cite each claim
with a source link. Flag any claim you can't support with a citation
as "unverified".

2. Pretrained-only question. Common knowledge, no lookup needed; the answer should be fast and confident.

Why do cats stare at walls? Two-paragraph answer.

3. Context-rich personal prompt. Practice loading constraints up front.

Plan a 15-minute home workout for me. Constraints: I have a
trampoline and a cat, no squats (bad knee), I hate sticking to
plans for more than three days, and I want to feel slightly
silly while doing it. Give me 3 options, no commentary.

4. Neutral-framing rewrite. Practice spotting your own bias in the prompt.

The question I want to ask is: "Don't you think four-day work
weeks are obviously better for everyone?" Rewrite this as a
neutral question that doesn't signal what answer I want.
Then answer the rewritten version.

5. Three-options brainstorm with iteration. The core power-user loop.

Round 1: I want to start a small side project that takes about
3 hours per week and might make money in a year. I'm a [your
profession] who likes [your hobby]. Give me 5 different ideas,
one line each. Don't expand any of them.

(Read the 5. Pick what you like and don't like. Then, in the
SAME conversation:)

Round 2: I reject options [N] and [N] because [reason]. I like
the [keyword] idea but I want it to use less [thing]. Give me
5 new options that incorporate this feedback.

6. Outline-first writing. Force structure before prose.

I want to write a 600-word post about [a topic you care about].
Don't write it yet. Give me 3 different outline options, each
with 4-6 headings. One line per heading.

7. Think-hard reasoning prompt. Use a real personal decision.

I'm choosing between [Option A] and [Option B] for [real personal
decision in your life]. Here's the relevant context: [a paragraph
of context]. Think hard before answering. Tell me:
1. The 3 trade-offs that actually matter.
2. Which you'd choose and why.
3. Under what conditions your recommendation would flip.

8. Objective-rubric critique. Avoid sycophancy on your own work.

I'm pasting in something I wrote: [paste anything 100-300 words].

Critique it using these 4 criteria, each scored 1-5 with a
one-sentence reason:
- Does it have a clear central claim?
- Is each paragraph in the right order?
- Are there any sentences that could be cut without loss?
- Does the ending earn the time the reader spent getting there?

Then suggest the smallest change that would lift the lowest score.

9. Image-input task. Practice giving AI a photo to read.

[Upload any handwritten note, receipt, or whiteboard photo]

Transcribe what's written. Then summarize what it's about in
3 bullets. Flag anything you couldn't read with confidence.

10. Small-app prompt. Practice the Goal/Input/Output shape.

Build me a Pomodoro timer.
Goal: 25-minute work sessions, 5-minute breaks.
Input: I press start.
Output: Visible timer counting down, a satisfying click when
each cycle ends, a yellow theme. Show me the working version.

You now know what these tools can do. Whether you can think clearly enough to direct them is a separate question, and it is the question Part 0 is built around.

Frequently asked questions before you start

Do I need a paid plan to do the exercises in Part 0? The free tiers of ChatGPT, Claude, and Gemini are enough for the Thinking Baseline and most chapter exercises. A paid plan helps if you do a lot of deep research or attach many files in a session. Start free; upgrade only if usage limits start blocking you.

Should I use one tool or three? Pick one as your default for daily use, but install at least one other for comparison. The point of having a second tool is not to do twice the work; it is to have a tiebreaker when the first tool gives you something that does not feel right.

My company blocks ChatGPT. What do I do for the exercises? Use whatever modern AI tool your company permits. The skills in Part 0 transfer to any text-in, text-out AI. If nothing is permitted, use your personal account on a personal device for the chapter exercises (Part 0's deliverables are about thinking, not company data).

Is it cheating to use AI for the Thinking Baseline? Yes, and it would only cheat you. The baseline is ungraded. Its only purpose is to capture an honest snapshot you can compare against later. A baseline boosted by AI gives you a misleadingly high starting point and erases the evidence of your growth.

What if I forget the recipes from this page? Bookmark the page. The recipes (the iterate loop, the rubric pattern, the neutral-rephrase trick) are designed to be looked up, not memorized. The only thing worth memorizing is the single sentence: get the right context in, keep the wrong context out.

Why does Part 0 spend so much time on thinking when AI is so capable? Because capability without direction multiplies waste. A confidently wrong analysis from AI is more dangerous than no analysis at all, because it looks finished. Part 0 trains the judgment that decides what to do with what AI produces. That judgment is the most valuable skill in an AI-saturated workplace, and most curricula skip it entirely.

Common mistakes to watch for in your first week

  • Treating AI like a search engine. Symptom: short prompts, shallow answers, repeated frustration. Fix: brief AI like a colleague, with context, files, constraints, and a clear ask.
  • Letting one conversation accumulate forever. Symptom: answers get vaguer over time. Fix: start a new conversation when the topic changes.
  • Asking for the final draft on the first try. Symptom: polished output, hollow content. Fix: outline first, critique the outline, expand to bullets, then draft.
  • Baiting the answer without realizing it. Symptom: AI agrees with whatever you implied. Fix: rewrite as a neutral question before sending.
  • Skipping the rubric on critiques. Symptom: "Great work!" with no specifics. Fix: provide objective criteria and ask for a score per criterion.
  • Trusting confidence as accuracy. Symptom: surprising errors on obscure topics. Fix: ask "how would you know this?" and verify high-stakes claims against primary sources.
  • Approving broad permissions on day one. Symptom: files lost, edits overwritten. Fix: scope access to a tight folder and grow it only with a track record.

These are not character flaws. They are habits the first generation of users (yourself included) is building from scratch. Catch a mistake once and the fix tends to stick.

A short word on what changes between now and the chapters of Part 0. This page taught you the mechanics of using these tools. The chapters teach the discipline that makes the mechanics actually pay off:

  • Chapter 1 (Asking Better Questions) goes deep on what you got the cheap version of in concept 6: how to formulate questions that surface what you do not already know.
  • Chapter 2 (Detecting Broken Reasoning) goes deep on what you got hints of in concept 2: confident AI answers are not the same as correct answers, and there are repeatable techniques for catching the failures.
  • Chapter 6 (Working With AI, Not For AI) goes deep on what you only scratched the surface of in concept 7: the brainstorm-iterate loop is one move in a much larger collaboration playbook.
  • The other chapters teach skills the mechanics page does not touch at all: systems thinking, first principles reasoning, ethical reasoning, decision-making under uncertainty, and learning how to learn.

You have the mechanics. The chapters build the judgment to direct them.

When you are ready, head to Part 0: Thinking is the Curriculum and start with the Thinking Baseline: a 30-minute ungraded snapshot of your current thinking skills, taken before any training. You will repeat the same assessment after Chapter 10 and compare. Do not skip it. The whole point of Part 0 is to make you the kind of person AI can amplify, not the kind it can replace, and the only way to know whether that worked is to measure where you started.

Power tools without judgment make confident mistakes faster. This page taught you the tools. The rest of this part teaches the judgment.