Thinking in Systems
Most AI tools analyze problems in isolation. Ask about automating customer support and you get an answer about customer support. You do not get the second-order effect on employee morale, the third-order effect on company culture, or the feedback loop where cost savings lead to worse service, worse service drives customer churn, and churn eliminates the savings. This chapter trains you to see the interconnections that AI tools consistently miss.
Exercise 1: The Cascade Map (Human First)
Layers Used: Layer 1 (Predict Before You Prompt), Layer 6 (Iterative Drafts)
You will use the Error Taxonomy from Chapter 2, Exercise 1 to identify errors in causal reasoning: spotting systems errors builds on the error-detection skills you practiced there.
What You Do
You receive a single decision. Without AI, draw a cascade map on paper or in a document — tracing effects across at least five domains: employees, customers, competitors, regulators, and the organization's own internal knowledge base. Identify at least three feedback loops (where an effect circles back to amplify or dampen the original decision). This map is your Draft 1 — submitted before any AI is consulted.
Choose Your Scenario
- Finance
- Engineering
- Healthcare
Scenario A (Finance): "A major bank decides to replace all loan officers with AI agents."
Scenario B (Engineering): "A city decides to replace all human-driven public buses with autonomous vehicles."
Scenario C (Healthcare): "A hospital network decides to use AI for all initial patient triage, removing human nurses from the first point of contact."
Choose one. The exercises work identically regardless of which you pick.
Your Deliverable
A cascade map (hand-drawn scan or digital document) showing:
- the central decision
- at least 5 affected domains
- first-order effects in each domain
- at least 3 second-order effects
- at least 3 third-order effects
- at least 3 clearly labeled feedback loops (e.g., "cost savings leads to reduced service quality leads to customer churn leads to reduced revenue leads to negated cost savings")
Each effect should have a one-sentence explanation of the mechanism.
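If you keep your map in a digital document, the minimums above can be sketched as a quick self-check. This is a minimal sketch under assumed conventions: the data layout and all effect text are illustrative, not a required format or a model answer.

```python
# Hypothetical self-check for the exercise's minimums.
# Effects are keyed by order: 1 = first-order, 2 = second-order, 3 = third-order.

def check_map(cascade_map):
    """Return a list of unmet minimums for a cascade map."""
    issues = []
    domains = cascade_map["domains"]
    if len(domains) < 5:
        issues.append("need at least 5 domains")
    second = sum(len(effects.get(2, [])) for effects in domains.values())
    third = sum(len(effects.get(3, [])) for effects in domains.values())
    if second < 3:
        issues.append("need at least 3 second-order effects")
    if third < 3:
        issues.append("need at least 3 third-order effects")
    if len(cascade_map["feedback_loops"]) < 3:
        issues.append("need at least 3 feedback loops")
    return issues

# Illustrative map for Scenario A; every entry is placeholder text.
example = {
    "decision": "Replace all loan officers with AI agents",
    "domains": {
        "Employees": {1: ["layoffs"], 2: ["morale drops"], 3: ["knowledge erodes"]},
        "Customers": {1: ["faster decisions"], 2: ["edge cases mishandled"], 3: ["trust declines"]},
        "Competitors": {1: ["pressure to automate"], 2: ["talent poaching"], 3: ["market consolidation"]},
        "Regulators": {1: ["scrutiny of AI lending"], 2: ["new audit rules"]},
        "Internal Knowledge": {1: ["tacit expertise lost"], 2: ["training data goes stale"]},
    },
    "feedback_loops": [
        ["cost savings", "worse service", "customer churn", "lost revenue", "cost savings"],
        ["layoffs", "knowledge loss", "model errors", "manual review hires", "layoffs"],
        ["automation", "regulatory scrutiny", "compliance cost", "automation"],
    ],
}

print(check_map(example))  # -> []
```

The check only counts entries; it cannot judge whether an effect is plausible. That evaluation is exactly what you do by hand first, and what the AI critiques afterward.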
The AI Prompt
I am a student learning systems thinking. I was given this scenario: "A major bank decides to replace all loan officers with AI agents." Before using AI, I created a cascade map tracing consequences across five domains with feedback loops.
Please: (1) Evaluate the completeness of my map -- which important effects or domains did I miss? (2) Rate each of my feedback loops: are they logically sound? Would they actually occur? (3) Identify at least 3 second or third-order effects I missed that are non-obvious but important. (4) Rate the overall sophistication of my systems thinking from Beginner / Developing / Proficient / Advanced. (5) Do any of my causal chains have logical errors -- effects that would not actually follow from the cause I described?
Here is my cascade map:
Finally, complete the Thinking Score Card for this exercise: Independent Thinking (1-10), Critical Evaluation (1-10), Reasoning Depth (1-10), Originality (1-10), Self-Awareness (1-10). For each score, give a one-sentence justification.
Discuss with an AI. Question your scores.
Come back when you have your BEST evaluation.
Deliverable Template (click to expand)
CASCADE MAP TEMPLATE
- Central Decision: ___
- DOMAIN 1 [Employees]:
- 1st-order effect: ___
- 2nd-order: ___
- 3rd-order: ___
- DOMAIN 2 [Customers]:
- 1st-order: ___
- 2nd-order: ___
- 3rd-order: ___
- DOMAIN 3 [Competitors]:
- 1st-order: ___
- 2nd-order: ___
- 3rd-order: ___
- DOMAIN 4 [Regulators]:
- 1st-order: ___
- 2nd-order: ___
- 3rd-order: ___
- DOMAIN 5 [Internal Knowledge]:
- 1st-order: ___
- 2nd-order: ___
- 3rd-order: ___
- FEEDBACK LOOP 1: [A] leads to [B] leads to [C] leads back to [A] | Type: Amplifying/Dampening | Mechanism: ___
- FEEDBACK LOOP 2: ___
- FEEDBACK LOOP 3: ___
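The loop notation above can be sketched in a few lines of code: a feedback loop only counts if its chain of effects circles back to the node it started from. The chain text below is illustrative.

```python
def is_closed_loop(chain):
    """True when a chain of effects returns to its starting node,
    i.e., the first and last entries are the same node."""
    return len(chain) >= 3 and chain[0] == chain[-1]

# Closed: churn feeds back into the original cost-savings decision.
loop = ["cost savings", "reduced service quality", "customer churn",
        "reduced revenue", "cost savings"]
# Not a loop: the chain stops at churn instead of circling back.
broken = ["cost savings", "reduced service quality", "customer churn"]

print(is_closed_loop(loop), is_closed_loop(broken))  # True False
```

If a chain fails this test, you have a cascade, not a loop; either extend it until it returns to its origin or record it as a linear effect instead.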
What This Teaches You
You learn to see consequences that do not appear on a linear list. By forcing yourself to map effects before AI does it for you, you build the mental habit of asking "and then what?" for every decision. The AI feedback reveals effects you missed — expanding your systems thinking vocabulary for future problems.