Chapter 3: Thinking in Systems
Most AI tools analyze problems in isolation. This chapter trains you to see the interconnections they miss.
Ask an AI tool about a decision and it will give you a list of effects. What it misses are the feedback loops -- places where effects circle back to amplify or dampen the original decision. This chapter builds your ability to trace cascading consequences across multiple domains and to adapt your analysis when conditions change.
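The amplify-or-dampen distinction can be made concrete with a toy simulation. This is a minimal sketch, not part of the chapter's exercises: `simulate`, its parameters, and the gain values are all illustrative. A gain above 1 models a reinforcing (amplifying) loop; a gain below 1 models a balancing (dampening) one.

```python
def simulate(initial_effect, feedback_gain, periods=5):
    """Trace an effect over time as it circles back through a feedback loop.

    feedback_gain > 1 amplifies the effect each period (reinforcing loop);
    0 < feedback_gain < 1 dampens it (balancing loop).
    """
    effect = initial_effect
    history = [effect]
    for _ in range(periods):
        effect *= feedback_gain  # the effect feeds back into itself
        history.append(round(effect, 2))
    return history

amplifying = simulate(1.0, 1.5)  # each pass through the loop magnifies the effect
dampening = simulate(1.0, 0.5)  # each pass absorbs part of the effect
```

A flat list of effects is like reading only `history[1]`; systems thinking asks where the trajectory is headed.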
Teaching Aid
Core Skill
Systems Thinking -- the ability to see interconnections, feedback loops, and higher-order effects that linear analysis misses.
What You Will Learn
- How to draw a cascade map tracing effects across five domains
- How to identify feedback loops (amplifying and dampening)
- How human and AI systems analyses complement each other
- How to adapt your analysis when a key variable changes
- How to defend your systems analysis under peer questioning
Exercises
| Exercise | What You Do | Layers Used |
|---|---|---|
| 1. The Cascade Map | Draw a cascade map before AI, tracing effects across five domains with feedback loops | Layer 1, Layer 6 |
| 2. Human vs. AI Systems Analysis | Compare your map against Claude and ChatGPT, creating a merged map with attribution | Layer 2, Layer 5 |
| 3. The Variable Shift | Revise your map when a key variable changes, tracking what survived and what broke | Layer 4, Layer 6 |
| 4. Peer Cross-Examination | Defend your cascade map in a live 15-minute peer exchange without AI access | Layer 3, Layer 6 |
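If you draw your cascade map digitally, one optional way to keep it checkable is as a small directed graph. This is a hypothetical sketch, not a required format: the decision, effect names, and `trace` helper are all invented for illustration. Each key is a node, each value lists the effects it triggers, and an edge that points back to an already-visited node is a feedback loop.

```python
# Illustrative cascade map for a made-up "remote work policy" decision.
cascade_map = {
    "remote work policy": ["office costs drop", "commute time freed"],
    "office costs drop": ["budget reallocated"],
    "commute time freed": ["employee satisfaction rises"],
    "employee satisfaction rises": ["retention improves"],
    "retention improves": ["remote work policy"],  # loops back to the decision
}

def trace(graph, start, seen=None):
    """Walk the map depth-first, collecting edges and flagging feedback loops."""
    seen = set() if seen is None else seen
    edges = []
    for effect in graph.get(start, []):
        if effect in seen:
            # The chain circles back to a node already on this path.
            edges.append((start, effect, "feedback loop"))
            continue
        edges.append((start, effect, "effect"))
        edges.extend(trace(graph, effect, seen | {start, effect}))
    return edges
```

Tracing from the decision node lists every cause-effect chain and labels the one edge that closes the loop, which is a quick self-check that your map contains feedback loops rather than a one-way list of effects.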
Chapter Deliverable
A Systems Thinking Portfolio containing your original cascade map (Draft 1), merged map with attribution (Draft 2), revised map after variable shift (Draft 3), peer feedback, all AI evaluations, and a final reflection on your systems thinking growth.
What You Need
- A web browser with access to claude.ai and chatgpt.com
- Paper or a digital document for drawing cascade maps
- A partner for Exercise 4 (or use the Solo Learner Alternative)