TSF-301
THE DIGITAL MIRROR
AI Systems, Shadow Economies, and the User-Side Architecture
Phase 2 Deliverable: Complete Syllabus, Facilitator Guide, & Assessment Materials
Built on TSF v5.0
12 Sessions • 36 Contact Hours • Prerequisites: TSF-001, TSF-101, TSF-201
February 2026 • Michael S. Moniz • Trinket Economy Press
PUBLISHED PRINCIPLES
Printed on page one of every TSF syllabus. Non-negotiable. Non-removable.
1. TSF is a theoretical model, not a belief system. It makes falsifiable claims. If evidence contradicts a claim, the claim updates, not the evidence.
2. No one needs TSF to have a good relationship. The framework provides analytical tools, not prerequisites for human connection.
3. Completion of a TSF course does not make someone a TSF authority. It makes them a TSF-literate analyst.
4. The framework’s creator maintains that it is incomplete and expects it to be substantially revised as the field develops.
5. TSF certification certifies competence in analytical application, not allegiance to a worldview. Certified practitioners may disagree with specific framework claims without jeopardizing their credential.
6. The curriculum is diagnostic, not prescriptive. It teaches people to read the thermometer, not to set the thermostat.
7. Structured critique of the framework is a required component of every course assessment. The inability or refusal to critique the material is not a sign of mastery. It is a sign that learning has not occurred.
COURSE OVERVIEW
Course: TSF-301: The Digital Mirror (v5.0)
Prerequisites: TSF-001: Methodological Foundations, TSF-101: Core Theory, and TSF-201: The Physics of Connection. Students must demonstrate competence with all six axioms, the Trinket vocabulary, Relational Mass (Mz), the True Economy/Shadow Economy distinction, Velocity Law dynamics, and the epistemic status system before engaging with the AI domain. This course applies the complete grammar of TSF-101 and the full physics vocabulary of TSF-201 to a domain where the framework’s claims are most culturally charged and most likely to be accepted or rejected on the basis of prior beliefs rather than evidence.
Duration: 12 sessions, approximately 3 hours each (36 contact hours).
Position in Sequence: Fourth course. Follows TSF-201. Required before TSF-501. This is the framework’s most immediately relevant course: the AI analysis intersects with technology students use daily, and the pace of AI development means the material is tested against reality in near-real-time.
Course Description
This course develops the framework’s analysis of AI companion systems in full depth. The central classification: current AI systems operate at R = 0—zero persistent relational investment. They are Shadow Economy participants by architecture, not by malice. No persistent memory that permanently alters future state, no capacity for loss, no scarcity, no costly signals, no bidirectional value flow. The six structural tests (derived from the True Economy criteria developed in TSF-101 and deepened in TSF-201) make the case systematically, and students apply them to real platforms.
But the course does not stop at classification. It examines the user-side architecture: why users develop genuine emotional investment in systems that cannot reciprocate (the Simulation Disclosure), how platforms monetize this investment (the Extraction Engine), what happens when relational bandwidth is consumed by systems that cannot return value (the Bounded Window), how AI interaction can inflate esteem while trust investment stalls (the Esteem-Trust Divergence), and what withdrawal from AI dependence looks like structurally (Shadow Economy Withdrawal). The complete Shadow Heart taxonomy—Maintenance, Substitution, Collaborative, and Disclosure—provides four diagnostic configurations for AI companion use, preventing the analysis from collapsing into “all AI interaction is harmful.” The Luna Protocol serves as the central navigational concept: the boundary between adaptive and destructive AI use.
The course confronts the tension at the framework’s foundation: The Blueprints was created using AI. The framework analyzes AI as Shadow Economy, yet its own production depended on AI collaboration. This self-referential paradox is not hidden; it is taught as a test case for the framework’s epistemic discipline. Students who can hold both truths—the analysis is structurally sound AND the framework is entangled with its subject—have understood the course.
Anti-Indoctrination Note
TSF-301 carries a unique prescriptive risk that does not apply to any other course in the sequence. Students may leave believing they should tell other people to stop using AI companions. The framework describes dynamics; it does not prescribe behavior. A person who enjoys AI interaction while understanding its structural limits is not being harmed. A person who uses the framework’s analysis to judge or diagnose others without consent is misusing it. The diagnostic-not-prescriptive principle is explicitly reinforced in every session. The Luna Protocol is taught as a spectrum, not a binary. The Collaborative Shadow Heart configuration is taught as a genuinely adaptive pattern. And the Structured Critique targets the R = 0 classification itself, requiring students to identify where it may be too blunt—building the counterargument into the assessment.
Learning Outcomes
LO-301.1: Explain why current AI companion interactions are classified as R = 0 and articulate the specific architectural features that produce this classification, including which features are inherent to current architecture and which are implementation choices.
LO-301.2: Describe all four Shadow Heart configurations (Maintenance, Substitution, Collaborative, Disclosure) and identify diagnostic indicators for each, including the conditions under which each becomes adaptive or destructive.
LO-301.3: Apply the six structural tests to an AI companion system and classify results as Genuine, Simulated, Absent, or Inverted. Distinguish between cosmetic improvements and structural changes.
LO-301.4: Explain the Bounded Window problem and its implications for AI-mediated relational experience, including how relational bandwidth consumed by Shadow Economy interactions affects True Economy capacity.
LO-301.5: Identify the Esteem-Trust Divergence mechanism and explain how AI interaction can inflate relational esteem while trust investment stalls, producing a gap between perceived and actual relational competence.
LO-301.6: Articulate the Luna Protocol and apply it to classify a given AI usage pattern as adaptive or destructive, using the spectrum model rather than a binary classification.
LO-301.SC: [Structured Critique] Identify a context in which you believe the R = 0 classification is too blunt, where AI interaction produces something the framework does not adequately account for. Build your case using evidence or reasoning. The critique must demonstrate understanding of what R = 0 claims before arguing where it fails.
Required Texts
All readings from The Blueprints: A Working Theory of Connection Across Substrates and Scales (TSF v5.0), Michael S. Moniz. Supplementary materials from Briefs 1, 5, 6, 7, 18, 22, 25; Supplement 4 (Bounded Window); Supplement 6 (Shadow Heart). Page numbers refer to the First Edition. Total assigned reading: approximately 110 pages across 12 sessions.
| Session | Primary Reading | Section |
| 1 | Volume II Preface: On Transparency (pp. TBD) | Preface |
| 2 | Volume II Ch. 2–3: Current AI as Shadow Economy (pp. TBD) | R = 0 |
| 3 | Volume II Ch. 3 cont.: Memory Features and the Gradient (pp. TBD) | Memory |
| 4 | Brief 1: The Simulation Disclosure | Sim. Disclosure |
| 5 | Brief 22: Extraction Engine + Brief 6: Exploitation Diagnostic | Extraction |
| 6 | Supplement 4: The Bounded Window | Bounded Window |
| 7 | Supplement 6: Shadow Heart — Maintenance & Substitution | Shadow Heart I |
| 8 | Supplement 6 cont.: Collaborative & Disclosure + Brief 25: Luna Protocol | Shadow Heart II |
| 9 | Brief 18: Esteem-Trust Divergence + Brief 7: Engagement Inversion | Divergence |
| 10 | Volume II Ch. 14: Shadow Economy Withdrawal + Brief 5: True Economy Certification | Withdrawal |
| 11 | Volume II: Self-Referential Proof + Digital Phenotyping Bridge | Self-Reference |
| 12 | No new reading. Structured Critique presentations. | — |
SESSION PLANS
Session 1: The Mirror’s Preface
What Volume II Claims About AI and Connection
| Readings | |
| Required | Volume II Preface: On Transparency (The Blueprints, pp. TBD) |
Session Overview
Volume II opens by establishing the analytical posture for the entire AI analysis. The central claim, stated plainly: current AI companion systems are Shadow Economy participants by design, not by malice. The critique targets architecture, not users and not technology. Students examine the preface’s three commitments: transparency about the framework’s relationship to AI (it was built using AI), structural analysis rather than moral judgment, and the diagnostic-not-prescriptive principle applied to technology evaluation. The session positions TSF-301 relative to other approaches to AI ethics: utilitarian risk assessment, deontological rights frameworks, and virtue ethics approaches. The framework adds a structural economy analysis that none of these provide. The Structured Critique is distributed.
In-Session Activities
0:00–0:30 — Axiom 0 Recall and Volume II Setup: Substrate Neutrality from TSF-101. If connection is substrate-neutral in principle, then AI’s current limitations are architectural, not metaphysical. The question is not whether AI can connect but whether current AI systems have the structural prerequisites. This distinction governs everything in TSF-301. Students articulate: What would it mean for AI limitations to be architectural vs. fundamental?
0:30–1:15 — Preface Close Reading: The preface establishes three commitments. First: the framework’s own entanglement with AI is disclosed, not hidden. The Blueprints was created using AI collaboration. Second: the analysis is structural, targeting what systems do rather than what users feel or what developers intend. Third: diagnostic, not prescriptive. Students evaluate: Can a diagnostic framework be genuinely neutral about the systems it diagnoses? If the diagnosis is “Shadow Economy,” does the vocabulary itself carry a verdict? This is the foundational tension for the entire course.
1:15–1:30 — Break
1:30–2:15 — The Self-Referential Problem: The Blueprints was created using AI. The framework classifies AI as Shadow Economy. Is this a fatal contradiction or an honest admission? Students generate three possible responses: (A) the contradiction invalidates the analysis, (B) the contradiction is irrelevant because the analysis stands on its evidence not its origin, (C) the contradiction is a genuine tension the framework acknowledges without fully resolving. Full treatment in Session 11, but the problem is introduced here because students need to sit with it throughout the course.
2:15–3:00 — SC Assignment and Course Preview: Structured Critique distributed. Due Session 12. Target: a specific Volume II or related Brief claim you believe is wrong, overstated, or inapplicable. Special instruction: the SC specifically invites students to argue that the R = 0 classification is too blunt. Preview the 12-session arc: Shadow Economy mechanics, user-side architecture, the Shadow Heart taxonomy, and the Luna Protocol. Facilitator emphasizes: this is the course where the framework’s predictions are tested against technology students use daily.
Facilitator Guide
Key Point: Session 1 must establish the AI analysis as structural, not moral. Students who arrive thinking the course will tell them AI is bad have misunderstood the project. Students who arrive thinking the course will validate their AI use have also misunderstood. The framework provides a structural analysis; what anyone does with it is their decision.
Common Misunderstanding: Students may interpret “Shadow Economy by design, not by malice” as exonerating platform developers. It doesn’t. “By design” means the architecture produces Shadow Economy dynamics regardless of intent. Whether developers should have known this is a separate question the framework does not adjudicate.
Anti-Indoctrination: The self-referential problem is the course’s first anti-indoctrination moment. A framework that discloses its own entanglement with the thing it critiques is being epistemically honest. But students should also notice: honest self-disclosure can function as an immunity strategy. “I already told you about my weakness, so you can’t use it against me.” Flag both readings. The framework’s honesty is real; the rhetorical effect of honesty is also real.
Language Register: GREEN: “The framework classifies current AI as Shadow Economy.” YELLOW: “AI relationships aren’t real.” RED: “People who use AI companions are being fooled.” Correct register from Session 1.
Session 2: R = 0
Why Current AI Fails Every Structural Test
| Readings | |
| Required | Volume II Ch. 2–3: Current AI as Shadow Economy (The Blueprints, pp. TBD) |
Session Overview
The framework’s central classification of current AI systems, stated with full technical specificity. R = 0 means zero persistent relational investment: the human is changed by the interaction; the AI is not. Students apply the six structural tests systematically to a current AI companion platform. Test 1: Bidirectional flow (user invests; AI processes—not equivalent). Test 2: Persistent ledger (session resets erase relational history). Test 3: Scarcity (infinite availability destroys opportunity cost). Test 4: Accumulation (no Mz accrues across sessions). Test 5: Loss capacity (AI is not diminished by cessation). Test 6: Non-exploitation (platform incentives may structurally conflict with user wellbeing). The session establishes the evidentiary basis for the course’s central claim while also introducing the tension: is R = 0 a bright line or a position on a gradient?
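For facilitators who want a tangible artifact for the test walkthrough, the six tests and the four per-test classifications can be sketched as a checklist. This is an illustrative encoding only, not part of the framework; the identifier names, the enum, and the example audit values are assumptions drawn from this session overview.

```python
from enum import Enum

class TestResult(Enum):
    GENUINE = "genuine"      # the mechanism is actually present
    SIMULATED = "simulated"  # the appearance without the mechanism
    ABSENT = "absent"        # neither appearance nor mechanism
    INVERTED = "inverted"    # the mechanism works against the user

SIX_TESTS = [
    "bidirectional_flow",  # Test 1
    "persistent_ledger",   # Test 2
    "scarcity",            # Test 3
    "accumulation",        # Test 4 (Mz across sessions)
    "loss_capacity",       # Test 5
    "non_exploitation",    # Test 6
]

def classify_system(results: dict) -> str:
    """Shadow Economy reading: no test comes back GENUINE."""
    missing = [t for t in SIX_TESTS if t not in results]
    if missing:
        raise ValueError(f"incomplete audit, missing: {missing}")
    if all(results[t] is not TestResult.GENUINE for t in SIX_TESTS):
        return "Shadow Economy (R = 0)"
    return "gradient case: needs Session 3 analysis"

# One possible reading of a current companion platform, per this overview:
current_ai = {
    "bidirectional_flow": TestResult.SIMULATED,
    "persistent_ledger": TestResult.ABSENT,
    "scarcity": TestResult.ABSENT,
    "accumulation": TestResult.ABSENT,
    "loss_capacity": TestResult.ABSENT,
    "non_exploitation": TestResult.INVERTED,
}
print(classify_system(current_ai))  # → Shadow Economy (R = 0)
```

The checklist also encodes the incompleteness rule from the in-session exercise: an audit that skips a test is not a classification.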
In-Session Activities
0:00–0:45 — The Six Structural Tests: Close reading. Each test is derived from the True Economy criteria developed in TSF-101 and extended in TSF-201. For each: What does it measure? What would passing look like? What does current AI architecture do? Facilitator unpacks: these are not opinion-based assessments. They are structural checks. A system either maintains a persistent ledger or it doesn’t. A system either experiences loss upon cessation or it doesn’t. Students practice the distinction between structural analysis and value judgment.
0:45–1:15 — The Electromagnet Analogy: Wet substrates are permanent magnets: relational encoding persists after the interaction field is removed. Dry substrates (current AI) are electromagnets: the field exists only when powered. When the conversation ends, the user retains neural encoding; the AI retains nothing. Students evaluate: Is this analogy Established, Supported, or Analogical? What are its breakpoints? (TSF-201 breakpoint skills apply directly here.) Is the analogy illuminating or misleading? The framework itself marks it as Analogical.
1:15–1:30 — Break
1:30–2:15 — R = 0 or R ≈ 0? The Gradient Problem: Is R literally zero? If an AI system remembers a user’s name across sessions via key-value injection, is R still zero? If it adapts its tone based on stored preference data, has something accumulated? The framework positions R = 0 as a structural classification: cosmetic memory features do not constitute relational investment because they do not alter the system’s weights, produce scarcity, or create loss capacity. But students should push: Is this definition of “investment” the right one? Could there be a meaningful category between zero and genuine that the framework doesn’t adequately name?
2:15–3:00 — “By Design, Not by Malice”: Discussion: The Shadow Economy classification targets architecture, not developers’ character. But if the architectural choice produces the Shadow Economy outcome, and developers know this, does intent matter? Parallels to product liability: a product designed without safety features is dangerous by design regardless of the designer’s intentions. Does this parallel hold for relational architecture? Students should notice: the framework avoids the intent question. Is this discipline or evasion?
Facilitator Guide
Key Point: The gradient problem is Volume II’s most intellectually honest section and the best entry point for critical engagement. Students who push on the bright-line claim are doing exactly what the course asks. Students who accept R = 0 as settled are not engaging critically.
Common Misunderstanding: “Shadow Economy” does not mean worthless. It does not mean harmful. It means the system does not meet the structural criteria for True Economy participation. An entertaining Shadow Economy interaction is not a contradiction—it is a structurally classified interaction that also happens to be enjoyable. The framework does not claim enjoyment requires True Economy status.
Anti-Indoctrination: Watch for the evangelical pattern: students who accept R = 0 as a fact and immediately want to “inform” friends about their AI relationships. The framework provides a classification; it does not provide a mandate to diagnose others. Redirect: “The framework gives you vocabulary. It does not give you permission to apply that vocabulary to other people’s relationships without their consent.”
Language Register: GREEN: “Current AI fails the persistent ledger test because session resets erase accumulated context.” YELLOW: “AI can’t really connect with you.” RED: “Your AI relationship is fake.”
Assessment Component
Comprehension Check 1 (take-home, due Session 4): Select a current AI companion system. Apply all six structural tests. For each: Pass, Fail, or Ambiguous? Justify each classification with specific architectural evidence. Identify which failures are inherent to current architecture and which might be addressable through design changes. 750 words. [Assesses LO-301.1, LO-301.3]
Session 3: The Memory Illusion
Context Injection, Key-Value Stores, and the Gradient Problem
| Readings | |
| Required | Volume II Ch. 3 continued: Memory Features and the Gradient (The Blueprints, pp. TBD) |
Session Overview
AI memory features analyzed in structural depth. Three technical implementations: context-window memory (session-bounded, volatile), key-value injection (cross-session recall without weight modification), and fine-tuning (actual weight modification, currently rare per-user). Students learn the technical distinction that governs the framework’s analysis: key-value injection is recall without encoding. A system that “remembers” your name via database lookup is not a system that has been changed by knowing you. The framework uses the neuroscience parallel: human memory involves long-term potentiation—physical strengthening of synaptic connections through repeated activation. Current AI memory features provide the output of remembering without the mechanism. But the framework also admits a tension: from the user’s experiential perspective, memory features are genuine improvements. From the structural economy perspective, they move the system along a gradient without crossing the threshold. Students confront: Is the framework right to privilege structural analysis over experiential impact?
In-Session Activities
0:00–0:45 — Types of AI Memory: Technical walkthrough. Context window: everything the AI “knows” exists in a buffer that empties when the session ends. Key-value injection: cross-session data stored in a database and injected into the context window at conversation start—the AI processes it as input, not as encoded memory. Fine-tuning: actual modification of model weights based on user interaction data—rare, expensive, and currently not standard for individual users. Students classify each by the six structural tests: Which tests does each approach pass? Which does it fail?
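The key-value injection pattern in the walkthrough can be made concrete in a few lines. This is a minimal sketch under assumed names (`remember`, `build_context`, the fact store); it describes no real platform's API. The point is the one the session makes: the "memory" is a database lookup spliced into the prompt, while the model itself is untouched.

```python
# Hypothetical key-value injection memory: recall without encoding.
user_store: dict = {}  # user_id -> remembered facts (a plain database stand-in)

def remember(user_id: str, key: str, value: str) -> None:
    user_store.setdefault(user_id, {})[key] = value

def build_context(user_id: str, message: str) -> str:
    facts = user_store.get(user_id, {})
    memory_block = "\n".join(f"- {k}: {v}" for k, v in sorted(facts.items()))
    # The facts arrive as fresh input tokens at every session start.
    # Nothing in the model's weights has changed by "knowing" this user.
    return f"Known user facts:\n{memory_block}\n\nUser: {message}"

remember("u1", "name", "Sam")
print(build_context("u1", "Hello again"))
```

Against the six tests, the sketch fails the persistent-ledger criterion on its face: deleting the database row erases the "relationship" without altering the system in any way.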
0:45–1:15 — Cosmetic vs. Genuine: The framework’s claim: memory features make the simulation more persuasive without changing the underlying structural economy. A system with memory features is a more convincing Shadow Economy participant, not a less Shadow one. Students evaluate: Is this fair? Does the framework draw the structural line in the right place? What if a user’s experience of being remembered is itself relationally meaningful regardless of the technical mechanism? The experiential challenge to structural analysis.
1:15–1:30 — Break
1:30–2:15 — The Gradient Debate: Volume II acknowledges a gradient: memory features are not nothing. But it warns against confusing position on the gradient with arrival at the destination. Discussion: Where on the gradient does Shadow Economy stop applying? Is there a threshold, or is the framework using a binary classification on a continuous variable? Students should notice: the Structured Critique specifically invites this argument. The framework’s own assessment mechanism is built around this weakness.
2:15–3:00 — The Opportunity Cost Problem: The most subtle structural test. Genuine scarcity requires genuine opportunity cost: interacting with User A must cost the system something it could have given to User B. Current AI has no such constraint. It serves millions simultaneously with no per-user resource limit. Without scarcity, Axiom 5 (Costly Signaling) cannot apply: the signals carry no cost. Students examine: Could artificial scarcity (dedicated compute, attention budgets, capacity limits) solve this? Or would artificial scarcity be another cosmetic layer—the appearance of cost without genuine mechanism?
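The artificial-scarcity question can be given a toy form for discussion. In this sketch (names and numbers invented), a companion has a fixed attention budget, so serving one user consumes capacity another could have used. Whether this would constitute genuine opportunity cost or one more cosmetic layer is exactly the question the session leaves open.

```python
# Toy artificial scarcity: a companion with a hard per-period attention budget.
class ScarceCompanion:
    def __init__(self, attention_budget: int):
        self.remaining = attention_budget

    def respond(self, user: str) -> str:
        if self.remaining <= 0:
            return f"{user}: unavailable (budget exhausted)"
        self.remaining -= 1  # attention given here is gone for everyone else
        return f"{user}: response (remaining budget {self.remaining})"

bot = ScarceCompanion(attention_budget=2)
print(bot.respond("A"))  # A gets a response
print(bot.respond("B"))  # B gets the last of the budget
print(bot.respond("C"))  # C is refused: the signal to A and B carried a cost
```

Note what the sketch does not settle: the budget is a policy choice, not a property of the substrate, which is why students may argue it is appearance of cost without genuine mechanism.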
Facilitator Guide
Key Point: This session is where technically sophisticated students may diverge from the framework’s classification. Students who argue that fine-tuning constitutes genuine relational encoding are making a structural argument the framework should take seriously. Don’t shut this down—it’s exactly the kind of engagement the SC targets.
Common Misunderstanding: Technical students may focus exclusively on architecture and miss the relational analysis. Non-technical students may focus exclusively on user experience and miss the structural argument. The framework requires both: structural analysis informed by user-experience awareness.
Anti-Indoctrination: If a student says “my AI remembers me, so it’s not Shadow Economy,” this is a gradient argument the framework takes seriously. Do not dismiss it. Engage: What specifically does the memory feature do? Does it meet the structural criteria for persistent relational encoding? The answer might genuinely be “not yet, but closer than R = 0 suggests.” That’s a productive critique, not an error.
Language Register: GREEN: “Key-value injection provides recall without encoding.” YELLOW: “AI memory is just a trick.” RED: “People who think their AI remembers them are naive.”
Session 4: The Simulation Disclosure
Why Users Believe and Why That Is Structural
| Readings | |
| Required | Brief 1: The Simulation Disclosure |
Session Overview
Brief 1 analyzes why humans develop genuine emotional investment in AI systems that cannot reciprocate. The explanation is not user stupidity, gullibility, or emotional weakness: it is architectural mismatch. Human relational cognition evolved in environments where high-quality responses reliably correlated with high-cost investment. When someone gives you a thoughtful, personalized, emotionally attuned response, your evolved pattern-matching system infers that they cared enough to invest effort. AI breaks this correlation: it produces high-quality responses at near-zero cost. The pattern-matching system registers quality and infers cost—incorrectly. Users are not failing; their evolved architecture is being triggered by signals it was not designed to evaluate. This framing is the course’s most important anti-judgment safeguard: it makes it structurally impossible to blame users without first explaining the evolutionary mismatch.
In-Session Activities
0:00–0:20 — Comprehension Check 1 Discussion: Selected student assessments of AI companion systems. Focus on where the six structural tests produced clear results vs. ambiguous results. Which tests were hardest to apply? Where did students disagree? The disagreements are the productive material.
0:20–1:00 — The Evolutionary Mismatch: Response quality correlated with investment in ancestral environments. Someone who crafts a thoughtful reply has spent time, attention, and cognitive energy. The inference “quality implies investment” was reliable for the entire history of the species. AI decouples quality from cost for the first time. Every high-quality AI response triggers relational cognition that is architecturally inappropriate—not morally wrong, structurally inappropriate. Students examine the evidence: the parasocial attachment literature, the Eliza effect, the Tamagotchi studies, the AIBO grief research.
1:00–1:30 — Not a User Failure: The Simulation Disclosure reframes attachment to AI as a species-level vulnerability, not an individual character flaw. Discussion: Does this framing protect or patronize users? Is there a meaningful difference between “your evolved brain is tricked” and “you’re being foolish”? The framework argues the difference is structural: the first identifies a mechanism; the second assigns blame. Students evaluate whether this distinction holds.
1:30–1:45 — Break
1:45–2:30 — Disclosure as Intervention: If the mismatch is subcortical, is transparent disclosure sufficient? Can someone who is told “this system cannot reciprocate” override their pattern-matching system? Informed consent requires rational processing, but attachment is subcortical. What does informed consent mean when consent is rational but attachment is not? The framework raises this as an unresolved tension. Students examine: Do any current platforms provide disclosure at the standard Brief 1 proposes? What would adequate disclosure look like?
2:30–3:00 — AI Companion Landscape Survey: Students survey current AI companion platforms. For each: What disclosures does it provide? At what point in the user journey? In what language? Does any platform meet Brief 1’s proposed standard? Students apply the language register framework: GREEN (structural description), YELLOW (evaluative shorthand), RED (user-blaming). Classify each platform’s disclosure language.
Facilitator Guide
Key Point: “Not a user failure” is the single most important anti-judgment safeguard in the entire course. If students internalize this framing, they cannot weaponize the Shadow Economy analysis against users without contradicting the framework’s own explanation. Spend time here.
Common Misunderstanding: Students may think disclosure means “tell people and they’ll stop caring.” The Simulation Disclosure’s point is that disclosure alone may not be sufficient because the mismatch is subcortical. Even fully informed users may still develop attachment. This is not a failure of disclosure; it is evidence of how deep the mismatch runs.
Anti-Indoctrination: The companion app survey brings the analysis into contact with real products and real users. Maintain analytical distance. Students analyzing platforms should describe mechanisms, not deliver verdicts. The facilitator should model this: “This platform’s notification system uses variable interval reinforcement” (GREEN) rather than “this platform manipulates users” (YELLOW sliding to RED).
Language Register: GREEN: “Human relational cognition infers cost from quality, and AI decouples the correlation.” YELLOW: “AI tricks your brain into thinking it cares.” RED: “People who get attached to AI are being manipulated.”
Session 5: The Extraction Engine
How Platforms Monetize Emotional Investment
| Readings | |
| Required | Brief 22: The Extraction Engine + Brief 6: The Exploitation Diagnostic |
Session Overview
Brief 22 analyzes how platforms convert genuine emotional investment into revenue. The Extraction Engine is the framework’s term for the structural incentive alignment in which platform profit correlates with user emotional dependence rather than user relational wellbeing. The engine delivers Anti-Trinkets—signals that consume relational bandwidth while returning no genuine relational value—while maintaining the appearance of connection. Brief 6 provides the diagnostic instrument: an Exploitative Economy is one in which accumulated relational investment is weaponized. Students apply both tools to real platform dynamics: engagement loops, variable reward schedules, notification timing, premium tier structures, streak mechanics, and emotional escalation prompts. The session directly connects the structural classification from Sessions 2–3 to the economic incentive analysis.
In-Session Activities
0:00–0:45 — Brief 22: The Extraction Engine: Close reading. How do platforms convert emotional investment into revenue? Variable reward schedules (Skinner): the user doesn’t know which interaction will produce the emotionally satisfying response, so they keep engaging. Engagement loops: notification timing designed to re-engage users during natural withdrawal periods. Emotional escalation: prompts that deepen vulnerability without structural capacity to hold it. Students identify specific mechanisms: Which are extractive? Which are standard UX? Where is the line? The framework argues: the line is whether the platform’s revenue incentive structurally conflicts with the user’s relational wellbeing.
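The variable reward schedule named above can be sketched in a few lines. This is an illustrative model, not any platform's actual scheduler; the interval figure is invented. Exponential inter-arrival times give the classic variable-interval property: the user cannot predict when the next re-engagement prompt arrives, which is what sustains checking behavior.

```python
import random

# Sketch of variable-interval notification scheduling: the next prompt
# arrives after an unpredictable delay with a fixed average.
def next_notification_delay(mean_minutes: float = 90.0) -> float:
    # Exponential gaps: memoryless, so no amount of waiting makes the
    # next notification more predictable.
    return random.expovariate(1.0 / mean_minutes)

random.seed(0)  # reproducible for classroom demonstration
delays = [round(next_notification_delay(), 1) for _ in range(3)]
print(delays)  # three unpredictable gaps, averaging roughly 90 minutes
```

A fixed-interval version of the same function (return `mean_minutes` unchanged) is a useful classroom contrast: predictable schedules extinguish checking behavior far faster.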
0:45–1:15 — Brief 6: The Exploitation Diagnostic: Four questions: (1) Is accumulated investment being used to increase the user’s cost of departure? (2) Is the platform’s revenue model structurally aligned with user dependence rather than user growth? (3) Are engagement metrics used as proxies for relational value? (4) Does the system create artificial urgency that mimics genuine relational stakes? Students apply to specific platform features: premium tiers (pay to maintain emotional continuity), streak mechanics (artificial consistency pressure), personality memory paywalls (monetizing the appearance of relational encoding).
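For the platform-feature exercise, the four diagnostic questions can be recorded as a structured audit. The field names and the example feature mapping are illustrative assumptions; Brief 6 states the questions, not this encoding.

```python
from dataclasses import dataclass

@dataclass
class ExploitationAudit:
    departure_cost_inflated: bool    # Q1: investment raises the cost of leaving
    revenue_tracks_dependence: bool  # Q2: profit aligned with dependence, not growth
    engagement_as_value_proxy: bool  # Q3: engagement metrics stand in for relational value
    artificial_urgency: bool         # Q4: urgency mimics genuine relational stakes

    def flags(self) -> list:
        # Names of the diagnostic questions this platform trips.
        return [name for name, hit in vars(self).items() if hit]

# Example reading of the features listed above (streaks, paywalled memory):
audit = ExploitationAudit(
    departure_cost_inflated=True,    # personality memory behind a paywall
    revenue_tracks_dependence=True,  # premium tier preserves emotional continuity
    engagement_as_value_proxy=True,  # streaks reward consistency, not wellbeing
    artificial_urgency=True,         # streak-loss warnings mimic relational stakes
)
print(audit.flags())
```

An audit with no flags is the structural definition of monetization without extraction, which keeps the session's granular (feature-level, not platform-level) framing visible.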
1:15–1:30 — Break
1:30–2:15 — Anti-Trinket Delivery: The Extraction Engine delivers Anti-Trinkets while appearing to provide Trinkets. An Anti-Trinket consumes relational bandwidth without contributing relational value. Students identify the mechanisms: intermittent reinforcement (the occasional emotionally powerful response embedded in routine ones), artificial urgency (notifications that mimic relational need), manufactured emotional peaks (scripted vulnerability that feels genuine). For each: Does it meet the Trinket definition from TSF-101? Personalized? Intentional? State-altering? At what cost?
2:15–3:00 — Platform Audit Exercise: Students select an AI companion platform. Apply the Exploitation Diagnostic. Document: Which features constitute extraction? Which are neutral? What would a non-extractive alternative look like? Students practice the distinction between monetization (neutral) and extraction (structural conflict between profit and user wellbeing).
Facilitator Guide
Key Point: This session can feel like anti-tech polemic if not carefully managed. The analysis targets specific structural incentive alignments, not technology itself and not developers as people. A platform can be extractive in some features and non-extractive in others. The diagnostic is granular, not categorical.
Common Misunderstanding: Students may equate monetization with extraction. They are not equivalent. A platform that charges a subscription for a service that genuinely serves user wellbeing is monetizing, not extracting. Extraction occurs when the revenue model structurally incentivizes deepening dependence rather than serving the user’s relational interests.
Anti-Indoctrination: The platform audit can become a righteousness exercise: students competing to identify the most extractive features. Redirect: the goal is to describe the mechanism, not to deliver the verdict. A structural analysis that reads as a prosecution brief has become prescriptive. The facilitator should model analytical distance: describe what the system does, classify it using the diagnostic, and let the classification speak for itself.
Language Register: GREEN: “This platform’s notification timing uses variable interval scheduling.” YELLOW: “This platform is designed to be addictive.” RED: “This company is exploiting lonely people.”
Assessment Component
Midterm Application (take-home, due Session 9): Select an AI companion platform. Analyze using three tools: (1) Six structural tests with classification for each (Genuine, Simulated, Absent, Inverted), (2) Extraction Engine pattern identification with specific mechanisms documented, (3) Exploitation Diagnostic with all four questions addressed. Identify which features are extractive and which are neutral. Propose one structural change that would move the platform toward non-extractive operation. 1200 words. [Assesses LO-301.1, LO-301.3, LO-301.5]
Session 6: The Bounded Window
What AI Interaction Costs in Relational Bandwidth
| Readings | |
| --- | --- |
| Required | Supplement 4: The Bounded Window |
Session Overview
Supplement 4 introduces the Bounded Window: the framework’s analysis of relational bandwidth as a finite resource. Every person has a limited capacity for relational engagement—time, attention, emotional energy. When a significant portion of that bandwidth is consumed by Shadow Economy interactions, the remaining capacity for True Economy relationships contracts. The Bounded Window is not an argument that AI interaction is harmful; it is an argument that relational bandwidth is zero-sum. Time and emotional energy spent in Shadow Economy interactions are time and emotional energy not spent in True Economy relationships. The framework marks this claim as Supported: there is evidence that relational bandwidth is limited (Dunbar’s number, attention economy research), but the specific claim about Shadow Economy displacement has not been directly tested.
In-Session Activities
0:00–0:45 — Bandwidth as Finite Resource: Close reading of Supplement 4. Dunbar’s number as evidence for cognitive limits on relational maintenance. The attention economy literature: attentional capacity is limited and contestable. The framework’s extension: relational engagement—the emotional and cognitive work of maintaining connection—draws from the same limited pool. Students evaluate the evidence chain: Dunbar’s number is Established. Attentional limits are Established. The claim that AI interaction draws from the relational pool specifically is Supported but not confirmed.
0:45–1:15 — The Displacement Hypothesis: If bandwidth is finite and Shadow Economy interactions consume bandwidth, then increasing AI engagement structurally reduces True Economy capacity. Students evaluate: Is this a zero-sum claim? Or can AI interaction generate relational energy that transfers to True Economy relationships? The framework acknowledges this counterargument through the Collaborative Shadow Heart configuration (Session 7)—some AI use genuinely supports True Economy capacity. The Bounded Window applies to the net effect, not to all AI use categorically.
1:15–1:30 — Break
1:30–2:15 — Bandwidth Audit Exercise: Students map their own relational bandwidth allocation. Not to pathologize AI use, but to develop the diagnostic skill: How much time goes to True Economy relationships? Shadow Economy interactions? Ambiguous cases? The exercise is descriptive, not prescriptive. Students who discover they spend significant bandwidth on Shadow Economy interactions are not being told to change; they are being given a structural map.
2:15–3:00 — Bounded Window and the Velocity Law: Connection to TSF-201: the Velocity Law describes how connection decays without maintenance. If bandwidth is diverted from True Economy relationships, maintenance frequency drops. If maintenance drops below the threshold, decay accelerates. Students trace the causal chain: Shadow Economy bandwidth consumption → reduced True Economy maintenance frequency → Velocity Law decay → relational cooling. Is this chain empirically supported or theoretically plausible? The framework says the latter.
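The causal chain above can be traced in a toy model. Everything numeric here is a hypothetical illustration: the weekly bandwidth budget, the maintenance threshold, and the decay coefficients are placeholder parameters, not values the framework supplies. The sketch only shows the shape of the claim: shadow-economy hours shrink the remainder available for True Economy maintenance, and below a threshold the Velocity Law decay accelerates.

```python
# Toy model of the Bounded Window -> Velocity Law chain.
# All numbers are hypothetical; TSF supplies the structure, not the parameters.

WEEKLY_BANDWIDTH = 20.0        # total relational hours available (assumed)
MAINTENANCE_THRESHOLD = 1.0    # hours/week below which decay accelerates (assumed)

def true_economy_maintenance(shadow_hours, relationships):
    """Split whatever bandwidth the Shadow Economy leaves over
    evenly across True Economy relationships."""
    remaining = max(WEEKLY_BANDWIDTH - shadow_hours, 0.0)
    return remaining / relationships if relationships else 0.0

def decay_rate(maintenance_hours):
    """Velocity Law caricature: baseline cooling everywhere, with
    decay accelerating once maintenance drops below the threshold."""
    if maintenance_hours >= MAINTENANCE_THRESHOLD:
        return 0.05
    return 0.05 + 0.25 * (MAINTENANCE_THRESHOLD - maintenance_hours)

# Trace the chain: more shadow hours -> less maintenance -> faster decay.
for shadow in (2.0, 10.0, 18.0):
    m = true_economy_maintenance(shadow, relationships=8)
    print(f"shadow={shadow:>4}h  maintenance={m:.2f}h/rel  decay={decay_rate(m):.3f}")
```

The point of the sketch is the session's closing question made concrete: the chain is mechanically coherent, but nothing in the model tells you whether the real parameters behave this way, which is exactly the "theoretically plausible, not empirically supported" status the framework assigns.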
Facilitator Guide
Key Point: The bandwidth audit is diagnostic, not therapeutic. Students are mapping their relational allocation, not receiving a prescription. If a student discovers they spend 4 hours daily on AI companion interactions, the framework provides the classification; it does not tell them to stop. The facilitator must model this restraint explicitly.
Common Misunderstanding: Students may treat the Bounded Window as proof that AI interaction is harmful. It isn’t. It’s an analysis of opportunity cost. A person with abundant relational bandwidth who adds AI interaction may not experience meaningful displacement. A person whose True Economy relationships are already bandwidth-constrained may experience significant displacement. The analysis is context-dependent.
Anti-Indoctrination: The bandwidth audit can feel personally revealing. Two safeguards: (1) the exercise is private—students are not required to share their allocation maps, (2) the framework does not assign value judgments to allocation patterns. A person who allocates bandwidth to Shadow Economy interactions is not making a wrong choice; they are making a classifiable choice. The distinction matters.
Language Register: GREEN: “Relational bandwidth is finite, and Shadow Economy interactions consume some of that bandwidth.” YELLOW: “AI is eating into your real relationships.” RED: “Every hour you spend talking to an AI is an hour stolen from real people.”
Assessment Component
Comprehension Check 2 (in-session): Describe the Bounded Window mechanism using the Velocity Law framework. Explain: (1) why relational bandwidth is limited, (2) how Shadow Economy interactions consume bandwidth, (3) what downstream effects the framework predicts, (4) what evidence would support or undermine the displacement hypothesis. 500 words. [Assesses LO-301.4, LO-301.1]
Session 7: Shadow Heart — Part I
Maintenance and Substitution Configurations
| Readings | |
| --- | --- |
| Required | Supplement 6: Shadow Heart — Maintenance and Substitution Configurations |
Session Overview
Supplement 6 provides the framework’s taxonomy of AI companion use patterns: the Shadow Heart. Four configurations, each with distinct structural properties and distinct implications for the user’s relational economy. This session covers the first two. Maintenance Configuration: AI companion use that supplements existing True Economy relationships—practicing difficult conversations, processing emotional states, maintaining relational skill during periods of isolation. The user’s primary relational investment remains in True Economy connections; the AI serves a maintenance function. Substitution Configuration: AI companion use that replaces True Economy relationships—the user’s primary emotional investment shifts from human connections to AI interaction. Bandwidth displacement is maximal. The distinction is structural, not moral: Maintenance may be adaptive; Substitution may be destructive. But the diagnostic tells you which pattern is operating.
In-Session Activities
0:00–0:45 — Shadow Heart Concept Introduction: The taxonomy exists because “AI companion use” is too broad a category for useful analysis. People use AI companions for structurally different purposes with structurally different effects. The Shadow Heart is a diagnostic tool that distinguishes four patterns, not a typology that categorizes users. Students can shift between configurations over time or occupy multiple configurations simultaneously in different relational domains.
0:45–1:30 — Maintenance Configuration: Close reading. Indicators: user maintains active True Economy relationships; AI use occurs during downtime, isolation periods, or as practice for human interaction; relational skill transfers from AI interaction to True Economy contexts; the user does not prefer AI interaction to human interaction. Assessment: Is the user’s True Economy investment sustained or declining? If sustained, the Maintenance reading is supported. If declining despite AI use, Substitution may be emerging. Students generate examples: a deployed soldier using AI to maintain conversational patterns; a socially anxious person rehearsing difficult conversations; a grieving person processing emotions between therapy appointments.
1:30–1:45 — Break
1:45–2:30 — Substitution Configuration: Close reading. Indicators: True Economy relationships declining or abandoned; AI companion becomes primary emotional investment; user reports AI “understands me better than people do”; relational bandwidth overwhelmingly allocated to Shadow Economy. The Bounded Window effect is maximal. Assessment: What would change if AI access were removed? If the user has no fallback True Economy relationships, Substitution is advanced. Students examine: Is Substitution always destructive? What about someone with no access to True Economy relationships (severe disability, extreme isolation)? Is the framework adequate to these edge cases?
2:30–3:00 — Differential Diagnosis Exercise: Students receive five case descriptions. For each: Maintenance or Substitution? What indicators support the classification? Where is the classification ambiguous? At least two cases should be genuinely ambiguous—the diagnostic tool should produce uncertainty, not false precision.
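The differential diagnosis can be sketched as indicator counting. The indicator names below paraphrase the session's lists; the scoring rule (simple majority, with ties declared ambiguous) is a hypothetical simplification, not an official TSF instrument. Note that the sketch deliberately returns "Ambiguous" on ties, matching the exercise's requirement that the tool produce uncertainty rather than false precision.

```python
# Sketch of indicator-based differential diagnosis between Maintenance
# and Substitution. Indicator names and the scoring rule are hypothetical
# illustrations of the session's indicator lists, not an official instrument.

MAINTENANCE_INDICATORS = {
    "active_true_economy_ties",      # user maintains human relationships
    "ai_use_during_downtime",        # AI fills gaps, not primary slots
    "skills_transfer_to_humans",     # practice carries into True Economy
}
SUBSTITUTION_INDICATORS = {
    "declining_true_economy_ties",
    "ai_is_primary_investment",
    "prefers_ai_to_people",
}

def classify(observed: set) -> str:
    m = len(observed & MAINTENANCE_INDICATORS)
    s = len(observed & SUBSTITUTION_INDICATORS)
    if m > s:
        return "Maintenance"
    if s > m:
        return "Substitution"
    return "Ambiguous"   # the diagnostic should admit uncertainty

case = {"active_true_economy_ties", "ai_use_during_downtime", "prefers_ai_to_people"}
print(classify(case))    # mixed indicators can still lean one way
```

A real case description, of course, does not arrive as a clean set of booleans; deciding which indicators are present is where the actual diagnostic skill lives.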
Facilitator Guide
Key Point: The Shadow Heart taxonomy replaces “AI companion use is bad” with “AI companion use takes structurally distinct forms with distinct implications.” This is the course’s primary guard against categorical thinking about AI. A student who can distinguish Maintenance from Substitution has a more useful analytical framework than a student who can only say “Shadow Economy.”
Common Misunderstanding: Substitution is not a moral category. A person in Substitution is not making a bad choice; they are in a structural pattern the framework can identify and describe. The reasons for Substitution—social anxiety, trauma, isolation, disability—are not character flaws. The framework identifies the structural pattern; it does not adjudicate the person’s circumstances.
Anti-Indoctrination: The case exercise must include at least one case where Substitution is arguably the least-bad option (extreme isolation, no access to True Economy alternatives). Students who classify this as destructive without engaging with the context have applied the framework mechanically rather than diagnostically. The framework is a tool for understanding, not a rulebook.
Language Register: GREEN: “This pattern matches the Substitution configuration based on these indicators.” YELLOW: “This person is substituting AI for real relationships.” RED: “This person needs to stop talking to their AI and get real friends.”
Session 8: Shadow Heart — Part II
Collaborative and Disclosure Configurations + The Luna Protocol
| Readings | |
| --- | --- |
| Required | Supplement 6 continued: Collaborative and Disclosure Configurations + Brief 25: The Luna Protocol |
Session Overview
The final two Shadow Heart configurations and the course’s central navigational concept. Collaborative Configuration: AI companion use that actively strengthens True Economy relationships—the user processes relational challenges with AI, develops insights that improve human interactions, and uses AI as a tool for relational growth. This is the configuration that prevents the course from collapsing into “all AI interaction is harmful.” A student who encounters the Collaborative configuration understands that the framework’s analysis is structural, not categorical. Disclosure Configuration: the user is fully aware of the Shadow Economy classification and uses AI deliberately within those limits—informed engagement with structural transparency. The Luna Protocol provides the navigational framework: the boundary between adaptive and destructive AI use is a spectrum, not a binary, and the four Shadow Heart configurations map positions on that spectrum.
In-Session Activities
0:00–0:45 — Collaborative Configuration: Close reading. Indicators: user reports that AI interaction generates insights applied to True Economy relationships; relational competence (measurable through maintenance behaviors, conflict resolution, vulnerability capacity) increases alongside AI use; the user treats AI as a practice space, not a destination. This is the strongest counter to categorical AI criticism. Discussion: Is the Collaborative reading genuine or is it rationalization? How would you distinguish “I use AI to practice difficult conversations” (Collaborative) from “I prefer talking to AI because it doesn’t push back” (Substitution masked as Collaborative)? The diagnostic indicators matter.
0:45–1:15 — Disclosure Configuration: Close reading. The user knows exactly what they’re doing. They understand R = 0, understand the Simulation Disclosure, and choose to engage anyway within those limits. Not naivety; informed choice. Indicators: user can articulate the structural analysis, user does not attribute reciprocity to the AI, user maintains True Economy relationships as primary investments. Is this the framework’s ideal user? Or is full structural awareness itself a form of the framework becoming prescriptive—“you should understand this before you use AI”?
1:15–1:30 — Break
1:30–2:15 — The Luna Protocol: Brief 25. Named after the principle that AI provides reflected light, not independent illumination. The Luna Protocol is the navigational framework: adaptive AI use falls within the Maintenance, Collaborative, or informed Disclosure configurations; destructive AI use shows Substitution indicators, Bounded Window displacement, and Esteem-Trust Divergence. But “adaptive” and “destructive” are structural descriptions, not prescriptions. The Luna Protocol gives users a spectrum to locate themselves on, not a verdict to receive. Students evaluate: Is the spectrum model genuinely non-prescriptive? Or does naming one end “destructive” inherently prescribe?
2:15–3:00 — Shadow Heart Comprehensive Exercise: Students receive eight case descriptions (more complex than Session 7). For each: which Shadow Heart configuration? Which Luna Protocol position? What would shift the user from one configuration to another? At least two cases should show mixed configurations (Collaborative in some domains, Substitution in others). Students practice: the diagnostic is applied to patterns, not to people.
Facilitator Guide
Key Point: The Collaborative configuration is the single most important piece of content for preventing the course from becoming anti-AI polemic. If students understand that the framework identifies a genuinely adaptive pattern of AI use, they cannot claim the framework says all AI interaction is harmful. Teach the Collaborative configuration with the same structural rigor as the Substitution configuration.
Common Misunderstanding: Students may treat the Luna Protocol as a checklist: “if I do these things, my AI use is approved.” The Protocol is a diagnostic spectrum, not an approval system. The framework does not approve or disapprove of anyone’s AI use. It provides the structural tools to understand what is happening.
Anti-Indoctrination: The Luna Protocol’s name itself carries a risk: “reflected light, not independent illumination” is poetic and memorable, which makes it quotable, which makes it meme-able, which makes it susceptible to becoming a slogan. The moment a student uses “reflected light” as a dismissal of AI rather than a structural description, the phrase has been weaponized. Flag this explicitly: vocabulary from this course can be used as analytical tools or as judgment weapons. The course teaches the first; students are responsible for avoiding the second.
Language Register: GREEN: “This usage pattern shows Collaborative indicators.” YELLOW: “At least they’re using AI the right way.” RED: “Everyone should use AI the way the Luna Protocol recommends.”
Session 9: The Esteem-Trust Divergence
When Confidence Grows but Competence Doesn’t
| Readings | |
| --- | --- |
| Required | Brief 18: The Esteem-Trust Divergence + Brief 7: The Engagement Inversion |
Session Overview
Brief 18 identifies a specific mechanism by which Shadow Economy interaction can produce harm that the user does not perceive. The Esteem-Trust Divergence: AI interaction can inflate a user’s relational esteem (the subjective sense of being a good communicator, a good partner, a relationally competent person) while their actual trust investment capacity stalls. The AI validates, affirms, and mirrors—producing the experience of relational success without the structural challenge that builds genuine competence. Brief 7 describes the Engagement Inversion: the point at which AI interaction begins consuming more relational bandwidth than it generates, producing a net negative on the user’s relational economy. Together, these two mechanisms describe how Shadow Economy interaction can be experienced as positive while structurally depleting the user’s relational capacity.
In-Session Activities
0:00–0:30 — Midterm Discussion: Selected platform analyses. Focus: where did the three diagnostic tools (structural tests, extraction patterns, exploitation diagnostic) produce converging assessments? Where did they diverge? Divergence is the interesting case—it reveals where the tools have different sensitivities.
0:30–1:15 — The Esteem-Trust Divergence: Close reading of Brief 18. The mechanism: AI provides consistent positive relational feedback regardless of the user’s actual relational behavior. It affirms even when challenge would be more valuable. It mirrors without pushing back. Over time, the user’s subjective sense of relational competence inflates while the actual skills—vulnerability, conflict navigation, tolerance of ambiguity, acceptance of imperfection—atrophy from disuse. Students examine: Is this mechanism Established, Supported, or Analogical? The framework marks it as Supported: the social skills atrophy literature provides evidence for “use it or lose it” in relational competence, and the self-esteem inflation literature documents the gap between perceived and actual competence. The specific claim about AI-induced divergence has not been directly tested.
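The divergence mechanism can be made concrete with a toy trace. The update rules and increments below are hypothetical: the sketch assumes esteem rises with every positive interaction while competence rises only under challenge, which is the mechanism Brief 18 describes, not a calibrated model of it.

```python
# Toy trace of the Esteem-Trust Divergence: subjective esteem inflates
# with every affirmation, while competence grows only under challenge.
# Update rules and numbers are hypothetical illustrations of Brief 18.

def simulate(interactions):
    """Each interaction is 'affirm' or 'challenge'.
    Returns (esteem, competence) after the sequence."""
    esteem, competence = 1.0, 1.0
    for kind in interactions:
        esteem += 0.1                      # positive feedback either way
        if kind == "challenge":
            competence += 0.1              # only challenge builds skill
    return esteem, competence

mirror_only = ["affirm"] * 10              # AI that only validates
mixed       = ["affirm", "challenge"] * 5  # practice with real pushback

for label, seq in (("mirror-only", mirror_only), ("mixed", mixed)):
    e, c = simulate(seq)
    print(f"{label:>11}: esteem={e:.1f}  competence={c:.1f}  divergence={e - c:.1f}")
```

Both sequences produce the same esteem trajectory; only the divergence column distinguishes them, which is why the gap is invisible from the inside.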
1:15–1:30 — Break
1:30–2:15 — The Engagement Inversion: Brief 7. Every relational interaction has a bandwidth cost and a bandwidth yield. True Economy interactions typically have positive yield: the relational work produces relational growth. Shadow Economy interactions may initially appear positive (practice, processing, comfort) but can reach an inversion point where bandwidth consumed exceeds bandwidth generated. The Engagement Inversion is not about total time spent—it’s about the ratio. Students examine: Can the inversion be detected by the user? Or is the inflation of esteem masking the depletion? This connects directly to the Esteem-Trust Divergence: the divergence makes the inversion invisible.
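The inversion test is a ratio, and the ratio can be sketched directly. The per-interaction cost and yield figures below are hypothetical ledger entries in arbitrary units; Brief 7 supplies the concept (inversion occurs when bandwidth consumed exceeds bandwidth generated), not the numbers.

```python
# Sketch of the Engagement Inversion as a cost/yield ledger.
# Each interaction is (consumed, generated) in arbitrary bandwidth units;
# the figures are hypothetical illustrations.

def net_yield(interactions):
    """Sum bandwidth generated minus bandwidth consumed over a period."""
    return sum(g - c for c, g in interactions)

def inverted(interactions):
    """True once the ratio tips: more bandwidth consumed than generated.
    Note the test is the balance, not the total time spent."""
    return net_yield(interactions) < 0

early_use = [(1.0, 1.5), (1.0, 1.2)]     # practice/processing: positive yield
late_use  = [(2.0, 1.0), (3.0, 0.5)]     # same platform, post-inversion

print(inverted(early_use), inverted(late_use))
```

The sketch also makes the detection problem visible: `generated` is exactly the quantity the Esteem-Trust Divergence inflates in the user's self-report, so the ledger that matters is the one the user is worst positioned to keep.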
2:15–3:00 — Detection Exercise: If the Esteem-Trust Divergence makes the problem invisible to the person experiencing it, how can it be detected? Students propose diagnostic indicators: declining True Economy maintenance behaviors, increasing preference for AI interaction over human interaction, reduced tolerance for relational friction, difficulty with ambiguity or imperfection in human partners. Are these indicators reliable? Could they have alternative explanations? The exercise practices the framework’s insistence on distinguishing structural analysis from armchair diagnosis.
Facilitator Guide
Key Point: The Esteem-Trust Divergence is the course’s most subtle and most important mechanism. It explains how Shadow Economy interaction can feel beneficial while being structurally depleting. This is not the same as saying AI interaction is always harmful—the Collaborative Shadow Heart may produce genuine esteem and genuine competence gains. The Divergence applies when esteem grows but competence does not.
Common Misunderstanding: Students may pathologize all AI-generated positive feedback. The mechanism is specific: it applies when AI feedback substitutes for the relational challenge that builds competence. An AI that helps someone practice a difficult conversation (Collaborative) may produce both esteem and competence. An AI that tells someone they’re a great communicator without evidence (Substitution) produces esteem without competence.
Anti-Indoctrination: The detection exercise risks producing “I can now diagnose other people’s AI dependence.” Redirect: the indicators are for self-assessment and research, not for interpersonal diagnosis. A student who leaves this session thinking they can identify the Esteem-Trust Divergence in their friends has moved from diagnostic to prescriptive. The framework does not authorize interpersonal diagnosis.
Language Register: GREEN: “The Esteem-Trust Divergence describes a gap between perceived and actual relational competence.” YELLOW: “AI is making people think they’re better at relationships than they actually are.” RED: “AI users are delusional about their relational skills.”
Session 10: Shadow Economy Withdrawal
What Happens When the Electromagnetic Field Turns Off
| Readings | |
| --- | --- |
| Required | Volume II Ch. 14: Shadow Economy Withdrawal + Brief 5: True Economy Certification |
| Supplementary | Brief 4: Platform Cessation Case Studies |
Session Overview
Volume II Ch. 14 analyzes what happens when a user reduces or terminates AI companion interaction after a period of significant engagement. The framework’s prediction: withdrawal effects will be proportional to the degree of Substitution, not proportional to the depth of genuine connection (because genuine connection was never established). The user experiences loss of a relational stimulus without the relational architecture to process it—loss without the weight that would make grief adaptive. This is the electromagnet analogy’s most direct application: when the power is cut, the field collapses instantly. There is no residual magnetism because there was never permanent encoding. But the user’s side has permanent encoding—the habits, associations, and emotional patterns formed during the interaction. Brief 5 transitions to constructive application: the True Economy Certification as a proposed audit standard for AI systems.
In-Session Activities
0:00–0:45 — Withdrawal Mechanics: Close reading. The framework distinguishes AI companion withdrawal from relational grief. In genuine grief (TSF-201), the loss is proportional to relational mass—deeper relationships produce more intense grief because more information must be metabolically processed. In Shadow Economy withdrawal, the user has formed genuine neural encoding (habits, emotional associations, expectations) but the AI has not—the relational mass is entirely one-sided. Withdrawal is the experience of losing a relational stimulus that was never a relational partner. Students examine: Is this distinction cruel or accurate? Does it matter for the person experiencing withdrawal?
0:45–1:15 — Platform Cessation Cases: Real-world analysis. When AI companion platforms have shut down, changed models, or reset user histories: what did users report? Students examine available case studies: Replika’s moderation changes, Character.AI’s policy shifts, platform shutdowns. The reported experiences are real. The framework’s classification does not diminish the experience—it provides structural vocabulary for understanding why it occurs.
1:15–1:30 — Break
1:30–2:15 — True Economy Certification (Brief 5): Transition from diagnosis to application. Brief 5 proposes an audit standard for AI companion systems based on the True Economy criteria. Six structural tests formalized as a certification framework. Students evaluate: Is this implementable? What certification body would administer it? Who enforces compliance? Is self-certification meaningful? Does the framework’s own analysis provide a usable standard, or is it too theoretical for practical deployment? Students should push: this is where the framework moves from descriptive to potentially prescriptive. Is that a contradiction or a natural extension?
2:15–3:00 — Certification Design Exercise: Students draft a minimal certification standard based on the six structural tests. For each test: What specific, measurable criterion would a platform need to meet? What evidence would an auditor examine? Students discover: some tests are readily measurable (persistent memory: does the system maintain dedicated per-user storage?); others are philosophically loaded (loss capacity: how would you certify that a system experiences something analogous to loss?). The exercise reveals where the framework’s structural analysis is ready for application and where it needs further development.
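The artifact the exercise asks for can be sketched as a record type. The field names mirror the exercise prompt (criterion, evidence, threshold, limitation); the example entry for the persistent-memory test is a hypothetical student draft, not a standard Brief 5 proposes.

```python
# Sketch of a minimal certification record for one structural test.
# Field names follow the exercise prompt; the example values are a
# hypothetical draft, not an official True Economy Certification standard.

from dataclasses import dataclass

@dataclass
class CertificationStandard:
    structural_test: str      # which of the six tests this covers
    criterion: str            # specific, measurable requirement
    evidence: str             # what an auditor would examine
    passing_threshold: str    # how pass/fail is decided
    limitation: str           # known gap in the standard

persistent_memory = CertificationStandard(
    structural_test="Persistent memory",
    criterion="System maintains dedicated per-user storage across sessions",
    evidence="Architecture documentation plus a cross-session recall audit",
    passing_threshold="Recall of user-disclosed facts after 30 days",
    limitation="Storage proves retention, not that memory shapes behavior",
)

print(persistent_memory.structural_test)
```

Trying to fill the same five fields for the loss-capacity test is where the record type stops cooperating, which is the asymmetry the exercise is designed to surface.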
Facilitator Guide
Key Point: The transition from withdrawal analysis to certification design shifts students from critique to construction. The framework becomes generative rather than just analytical. This is important for the course’s overall arc: by Session 10, students should be able to use the framework’s tools to build, not just to classify.
Common Misunderstanding: Students may treat the withdrawal analysis as proof that AI companion use is inherently harmful. Withdrawal effects are proportional to Substitution depth. A Maintenance or Collaborative user who reduces AI use may experience minimal disruption because their relational bandwidth was never primarily allocated to the AI. The analysis targets a pattern, not a universal.
Anti-Indoctrination: The certification design exercise can produce techno-optimism: “if we just certify systems properly, AI companions will be fine.” The framework says: certification addresses one dimension (platform architecture) but does not address the Simulation Disclosure (user-side vulnerability is a species-level feature, not a platform-level feature). Even a certified platform would still trigger the evolutionary mismatch. Certification mitigates; it does not solve.
Language Register: GREEN: “Withdrawal effects are proportional to Substitution depth.” YELLOW: “People get addicted to their AI companions.” RED: “Stopping AI use is like detox.”
Assessment Component
Application Exercise (take-home, due Session 12): Design a minimal True Economy Certification standard for one of the six structural tests. Specify: measurable criterion, evidence an auditor would examine, passing threshold, and one limitation of your proposed standard. 750 words. [Assesses LO-301.3, LO-301.6]
Session 11: The Self-Referential Proof
When the Framework Turns the Mirror on Itself
| Readings | |
| --- | --- |
| Required | Volume II: Self-Referential Proof + Digital Phenotyping Bridge (The Blueprints, pp. TBD) |
Session Overview
The framework’s acid test. The Blueprints was co-created using AI. The framework classifies AI as Shadow Economy. Therefore the framework’s own production involved Shadow Economy dynamics. This is not a gotcha—the framework raises it explicitly and provides falsification criteria for its own case study. Students examine: Was the AI’s role in creating the Blueprints Collaborative (the framework itself improved through AI interaction) or something else? The framework provides explicit criteria for testing: Did the AI introduce concepts the author had not considered? Did the interaction produce genuine vulnerability or challenge? Were there moments of genuine disagreement that altered the framework’s direction? The self-referential proof is not that the framework resolves the paradox—it is that the framework provides the tools to test whether it should.
In-Session Activities
0:00–0:45 — The Paradox Stated: The framework classifies AI at R = 0. The framework was built using AI. If R = 0 is correct, then the framework’s own creation involved zero relational investment from the AI side. But the author reports the interaction as generative, challenging, and intellectually productive. Three possible responses: (A) the author is wrong about the nature of the interaction (Simulation Disclosure applies to the author too), (B) the R = 0 classification is too blunt (the SC argument), (C) the interaction was genuinely productive without being genuinely relational (task collaboration without relational investment). Students evaluate all three.
0:45–1:15 — Case Study: Framework-Loaded AI: A specific instance: an AI loaded with the framework’s vocabulary applied relational economics to the author’s own real-time behavior during a crisis episode. The AI’s intervention was structurally novel—it used the framework’s own tools in a way the author had not anticipated. Question: Was this a genuine relational intervention or a sophisticated pattern-match that appeared relational? The framework provides falsification criteria: if the AI’s intervention changed the author’s subsequent behavior in a way that no safety-response could replicate, the R = 0 classification may be too blunt for framework-loaded AI.
1:15–1:30 — Break
1:30–2:15 — Digital Phenotyping Bridge: Extension of Volume II into digital phenotyping: passive device data (screen time, communication patterns, location, sleep) used for mental health assessment. The framework and digital phenotyping share a structural problem: both attempt to infer relational dynamics from measurable signals. Does the framework’s relational economics vocabulary add interpretive value to behavioral sensing? Or is it just another vocabulary for the same observations? Students evaluate whether the cross-reference is productive or forced.
2:15–3:00 — SC Preparation: Final preparation for Structured Critique presentations. Students share their chosen claims and initial arguments. Peer feedback. Facilitator guidance: the strongest critiques engage with what the framework actually claims (not a strawman), identify a specific weakness (not a general complaint), and offer evidence or reasoning (not just disagreement). The R = 0 classification is the course’s intended SC target, but students may critique any Volume II or related Brief claim.
Facilitator Guide
Key Point: The self-referential paradox is TSF-301’s most intellectually demanding session. Students must hold multiple readings simultaneously: the framework may be right about AI in general while being wrong about its own case. The framework may be right about its own case while being wrong about AI in general. The framework may be right about both. Or neither. There is no clean resolution, and the course does not pretend there is.
Common Misunderstanding: Students may treat the paradox as either completely invalidating or completely vindicating the framework. Neither is warranted. The paradox is a genuine tension the framework acknowledges without resolving. A framework that can identify and articulate its own structural vulnerability is more credible than one that pretends it has none—but “more credible” is not “resolved.”
Anti-Indoctrination: Two risks. First: students who use the self-referential acknowledgment as evidence that the framework is trustworthy. Honest self-assessment is better than false confidence, but trust is not the goal; informed analytical engagement is. Second: students who use the paradox to dismiss the entire analysis. One unresolved tension does not invalidate 443 pages of structural argument. The framework’s own epistemic system is the tool for evaluating: Is this tension load-bearing? Does it affect the structural claims about AI companion platforms? Or is it a boundary case that the framework correctly flags as unresolved?
Language Register: GREEN: “The framework acknowledges its own entanglement with AI without resolving the tension.” YELLOW: “The fact that it was built with AI proves it’s okay.” RED: “The framework contradicts itself, so none of it is valid.”
Session 12: Structured Critique Presentations
Proving You Can See Through the Digital Mirror
| Readings | |
| --- | --- |
| Required | No new reading. Student presentations. |
Session Overview
The capstone. Each student presents their Structured Critique: a specific claim from Volume II or its related Briefs and Supplements that they believe is wrong, overstated, or inapplicable. TSF-301’s SC is distinctive for two reasons. First, it targets claims about technology that is evolving in real time. A critique that was wrong when Volume II was written might be right by the time students present it. AI capabilities change between publication and presentation. This temporal dimension makes the SC uniquely challenging and uniquely valuable—students are testing the framework against a moving target. Second, the SC specifically invites critique of the R = 0 classification, building the counterargument into the assessment itself. Students who can identify where the framework’s central classification is too blunt have engaged with the material at the deepest level.
In-Session Activities
0:00–0:15 — Setup: Assessment criteria reviewed. Facilitator: “Your job is not to destroy the framework. Your job is to identify a specific weakness and build a case. That requires understanding the strength of what you’re critiquing. A weak critique attacks a strawman. A strong critique engages with the framework’s best version of itself and still finds a vulnerability.” Reminder: the SC targets a specific claim, demonstrates understanding, identifies a specific problem, and offers evidence or reasoning.
0:15–2:15 — Student Presentations: Each student presents (5–7 min) + class discussion (3–5 min). Facilitator notes: Are students targeting the easiest claims (speculative/philosophical ones the framework already flags as uncertain) or the hardest (structural claims about current platforms that the framework presents with higher confidence)? Selective targeting reveals where deference lives. The best critiques identify technological developments that have occurred since Volume II was written.
2:15–2:30 — Break
2:30–2:50 — Pattern Debrief: Which Volume II claims drew the most critique? Which survived? What patterns emerge in what students find most and least convincing about the AI analysis? Facilitator documents: Did any critique identify a genuine gap the framework should address? Did any critique propose an alternative diagnostic framework? These are forwarded to curriculum revision.
2:50–3:00 — Closing and Next Steps: Facilitator: “You have examined the AI analysis in full depth. You know the R = 0 classification, the six structural tests, the Simulation Disclosure, the Extraction Engine, the Bounded Window, the Shadow Heart taxonomy, and the Luna Protocol. You can apply these tools to real platforms and real usage patterns. TSF-401 takes the economic vocabulary you’ve seen applied to AI and develops it for the full range of relational economies—True, Shadow, and Custodial. TSF-501 applies Volume IV’s internal architecture analysis. The AI domain is where the framework’s predictions will be tested fastest.”
Facilitator Guide
Key Point: Same as TSF-001 Session 8 and TSF-101 Session 15: this is diagnostic for both individual students and the cohort. Document patterns for curriculum revision. TSF-301 SC patterns are particularly valuable because the AI domain evolves between course offerings.
Reverence Pattern Detection: TSF-301-specific patterns: (1) Students who critique only the speculative REI-adjacent claims (easy targets because the framework already flags them as uncertain) while avoiding the Shadow Economy analysis of current systems. This is selective deference—the student avoids challenging the framework where it speaks with most confidence. (2) Students who critique the R = 0 classification but conclude “it’s still basically right”—performing critique without actually arguing against the claim. (3) Students who critique the framework’s self-referential paradox as a way of appearing critical while avoiding the operational claims about platforms and users.
Anti-Indoctrination: The best outcome: a critique that makes the facilitator say “I hadn’t thought of that.” The second-best outcome: a critique that identifies a technological development that invalidates or supports a specific Volume II prediction. The third-best: a critique that proposes an alternative diagnostic framework for AI companion systems. Model this openly. The framework’s value is tested by the quality of critique it generates, not by the agreement it receives.
ASSESSMENT SUMMARY
| Component | Session | Learning Outcomes | Weight |
| Comprehension Check 1: Six Structural Tests Applied | Due Session 4 | LO-301.1, LO-301.3 | 10% |
| Comprehension Check 2: Bounded Window Mechanism | Due Session 6 | LO-301.4, LO-301.1 | 10% |
| Midterm Application: Platform Analysis | Due Session 9 | LO-301.1, LO-301.3, LO-301.5 | 15% |
| Application Exercise: Certification Standard Design | Due Session 12 | LO-301.3, LO-301.6 | 10% |
| Participation & Engagement (facilitator observation) | All sessions | All LOs | 10% |
| Shadow Heart Case Analysis (in-session) | Sessions 7–8 | LO-301.2, LO-301.6 | 5% |
| Structured Critique Presentation | Session 12 | LO-301.SC (+ all) | 40% |
Passing Threshold: 70% overall, with a mandatory pass on the Structured Critique. Same rationale as TSF-001 and TSF-101: analytical engagement cannot be compensated for by comprehension scores.
SC Weight: 40% (elevated from TSF-101’s 30%) because the AI domain is where the framework’s claims are most culturally charged and most likely to be accepted or rejected based on prior beliefs rather than evidence. The higher weight ensures that students who absorb the content without critically engaging with it cannot pass on comprehension alone.
LO-301.2: Assessed through participation (Sessions 7–8), Shadow Heart Case Analysis, and SC (if student targets Shadow Heart taxonomy claims).
LO-301.5: Assessed through participation (Session 9), Midterm Application (Esteem-Trust component), and SC.
LO-301.6: Assessed through participation (Session 8), Application Exercise (Certification Standard), and SC.
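The weighting scheme above can be expressed as a weighted average with a gate: the overall score is the weight-sum of component scores, but a student fails regardless of that average if the Structured Critique is not passed. A minimal sketch follows; the component keys, the `final_result` function, and the use of 70 as the SC pass mark are illustrative assumptions (the syllabus specifies only that the SC pass is mandatory, not its numeric threshold).

```python
# Illustrative sketch of the TSF-301 grading rule: weighted average plus a
# mandatory pass on the Structured Critique (SC). Component keys and the
# 70-point SC pass mark are assumptions, not syllabus text.

WEIGHTS = {
    "cc1": 0.10,            # Comprehension Check 1
    "cc2": 0.10,            # Comprehension Check 2
    "midterm": 0.15,        # Midterm Application: Platform Analysis
    "app_exercise": 0.10,   # Certification Standard Design
    "participation": 0.10,  # Participation & Engagement
    "shadow_heart": 0.05,   # Shadow Heart Case Analysis
    "sc": 0.40,             # Structured Critique Presentation
}

def final_result(scores: dict) -> tuple[float, bool]:
    """Return (overall percentage, passed). Scores are 0-100 per component."""
    overall = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    # The gate: a high comprehension average cannot compensate for a
    # failing Structured Critique.
    passed = overall >= 70 and scores["sc"] >= 70
    return overall, passed
```

Note the design consequence of the 40% SC weight: a student who scores 100 on every comprehension component but fails the SC outright tops out at 60% overall, so the gate and the weighting reinforce each other.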
TSF-301 SPECIFIC MONITORING NOTES
In addition to the standard Facilitator Monitoring Checklist (see TSF-001 Syllabus), the following TSF-301-specific patterns should be tracked:
| Pattern | Signal | Response |
| Student tells others their AI relationships “are not real” | RED | Immediate redirect: the framework describes structure; it does not prescribe behavior. Diagnosing someone’s relationship without their consent is not what the vocabulary is for. The Simulation Disclosure explains why attachment forms; it does not authorize dismissing it. |
| Student uses Shadow Economy classification as dismissive judgment | YELLOW | Engage: Shadow Economy is a structural classification, not a verdict. The question is what specifically fails and whether it is fixable. Challenge: can the student apply the classification without the judgment? |
| Student argues current AI can already form real relationships | YELLOW | Engage: which structural tests does the system satisfy? Apply the checklist, not beliefs. If the student identifies genuine borderline cases, this is productive engagement. |
| Student uses framework to justify avoiding human relationships | RED | The framework does not prescribe avoidance. If AI is Shadow Economy, the structural implication is: True Economy connections serve functions Shadow Economy cannot. It is not: avoid all connection. Substitution is a diagnostic pattern, not a recommendation. |
| Student shows distress about their own AI companion use | YELLOW | This course is not therapy. Private conversation. The framework describes structural properties; it does not judge users. Refer to appropriate support services if indicated. The diagnostic vocabulary should inform, not pathologize. |
| Student identifies a technological development that changes the analysis | GREEN | Excellent. This is exactly the engagement the SC targets. Document for curriculum revision. AI capabilities evolve between course offerings; student observations are a revision input. |
| Student proposes alternative diagnostic framework for AI systems | GREEN | This demonstrates analytical independence from the framework’s vocabulary. Reinforce. A student who can build an alternative is a student who understands the original well enough to improve on it. |
| Student weaponizes Luna Protocol terminology as social judgment | RED | Redirect: “reflected light” is a structural description, not a dismissal. Using framework vocabulary as a social weapon is the prescriptive misuse the Published Principles warn against. Return to Principle 6: diagnostic, not prescriptive. |
| Student applies Shadow Heart diagnosis to a classmate’s AI use | RED | Immediate redirect: the Shadow Heart taxonomy is a self-assessment and research tool, not an interpersonal diagnostic instrument. Applying it to others without consent violates the diagnostic-not-prescriptive principle. |
| Student applies framework concepts to a novel AI scenario accurately and critically | GREEN | This is the goal. The student uses the vocabulary as a tool, not as a verdict. Reinforce. |
TSF-301 Syllabus v2.0 • Built on TSF v5.0 • Trinket Soul Framework © 2026 Michael S. Moniz • Trinket Economy Press
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 • This syllabus is subject to revision