Jake Redmond

Sept 12

The Ghost and the System: A New Blueprint for a Human-Centered AI Future

Marshall McLuhan warned us about the ghost in the machine. A systems thinking pioneer gives us the tools to see it. Here’s the plan.

We've all felt the jolt of magic. You feed a half-formed idea into an AI, and it returns a polished, coherent argument. It’s a feeling of immense power, a technological extension of the mind itself. We are, quite reasonably, captivated by the content AI produces.

But a more urgent question lurks beneath this spectacle: Are we becoming so mesmerized by our own reflection in this new technology that we're numb to its deeper effects? Media philosopher Marshall McLuhan called this "Narcissus narcosis": the risk of falling in love with the extensions of ourselves, blind to how they fundamentally alter the environment in which we live and think.

McLuhan's masterwork, Understanding Media, famously declared that "the medium is the message." The true impact of a technology, he argued, is not the content it delivers but the change in scale, pace, or pattern it introduces into human affairs. The message of the electric light wasn't illumination (its content), but the eradication of night, which restructured work, leisure, and society itself.

AI is a medium of unprecedented power. Its "content" is the answer, the image, the code. But its "message" is a radical restructuring of our entire societal, cognitive, and organizational operating system. To merely focus on the outputs is, in McLuhan's terms, to study the "scratch but not the itch".

But how do we analyze this "itch"? How do we map the invisible forces at play? For this, we must turn to a different, but profoundly connected, field of thought: systems thinking.

Seeing the Whole System: From McLuhan's Message to Meadows' Model

If McLuhan gave us the foundational philosophy, systems thinking pioneer Donella Meadows, in her book Thinking in Systems, provides the practical toolkit. She teaches that we must stop looking at things in isolation and see them as interconnected parts of a whole. A system isn't just a collection of things; it's a web of relationships, feedback loops, and goals that gives rise to its characteristic behavior.

AI is not a simple tool; it is an intervention in every system it touches—your mind, your organization, your market. It creates powerful feedback loops. For example, the more we rely on AI for summaries (an output), the less we may practice deep reading (a human input), which in turn increases our reliance on AI. This is a reinforcing feedback loop that, if unchecked, can erode critical thinking skills.
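
The reinforcing loop described above can be made concrete with a toy simulation. To be clear, the variables, rates, and update rules below are illustrative assumptions for the sake of the sketch, not measurements of any real cognitive effect:

```python
# Toy model of the reinforcing feedback loop:
# more AI reliance -> less deep-reading practice -> weaker skill -> more reliance.
# All quantities and rates here are illustrative assumptions, not data.

def simulate(steps: int = 10, reliance: float = 0.1, skill: float = 0.9):
    """Return the trajectory of (reliance, skill) pairs over time."""
    history = [(reliance, skill)]
    for _ in range(steps):
        # Practice (and thus skill) erodes in proportion to AI reliance...
        skill = max(0.0, skill - 0.1 * reliance)
        # ...and weaker skill pushes reliance up further (capped at 1.0).
        reliance = min(1.0, reliance + 0.1 * (1.0 - skill))
        history.append((reliance, skill))
    return history

trajectory = simulate()
# With these assumed rates, reliance rises and skill falls at every step:
# the loop feeds itself, which is what makes it "reinforcing" in
# Meadows' sense rather than self-correcting ("balancing").
```

The point of the sketch is structural, not numerical: without a balancing loop (deliberate practice, friction, policy) pushing back, a reinforcing loop only runs one way.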

By combining McLuhan and Meadows, we gain a stereoscopic view of our reality. We can see both the medium's transformative message and the systemic structures through which that message is delivered. This powerful combination reveals the urgent need for a new model of intentionality. To move from being passive victims of technological change to active architects of our future, we must become far more rigorous about what we are building.

A New Blueprint: The In-Depth Inputs-Outputs Framework

The prevailing approach to AI is dangerously superficial. It fixates on the "Output"—the quality of the AI-generated text, image, or solution. This is a critical error. My work over the past several years has focused on developing a more robust model, which I call The In-Depth Inputs-Outputs Framework.

This framework argues that the quality of any AI "Output" is entirely dependent on the quality of two preceding, and often ignored, categories of "Inputs":

1. Strategic Inputs: These are the foundational, high-level directives that govern the entire system. They are the goals, the ethics, and the purpose we define before we ever write a line of code or a single prompt. These include:

  • Ethical Guardrails & Governance: What are our non-negotiable values? What are the second- and third-order consequences we must consider?
  • Data & Knowledge Management: What is the quality, lineage, and bias of the data feeding the system?
  • Talent & Organizational Design: How are we preparing our people to work with these systems, not just be replaced by them?

2. Human Inputs: These are the direct actions and cognitive efforts we contribute during the interaction with AI. This is not just "prompt engineering." It is the quality of:

  • Curiosity & Critical Inquiry: The ability to ask insightful, challenging questions.
  • Synthesis & Sensemaking: The uniquely human capacity to connect disparate ideas and see the bigger picture.
  • Discernment & Editorial Judgment: The wisdom to know what is true, what is valuable, and what is simply plausible nonsense.

When organizations and individuals obsess over the Output without mastering the Strategic and Human Inputs, they aren't just getting poor results; they are actively amplifying their own blind spots, biases, and strategic flaws. They are building a ghost in the machine that is simply a reflection of their own chaotic system.

Intentional Design for a Human-Centered AI Future

Understanding this framework naturally leads to a set of core principles for building a better future. This is what I call Intentional Design for a Human-Centered AI Future. It is a proactive stance, a commitment to shaping the medium rather than being massaged by it. The core tenets are:

  • Prioritize Human Agency & Augmentation: The goal of AI should not be to replace human cognition, but to augment it. We must design systems that keep the human in the driver's seat, enhancing our ability to think critically, creatively, and systemically.
  • Design for Transparency & Inquiry: AI systems should not be inscrutable black boxes. We must demand and build systems that are transparent, allowing us to question their assumptions, examine their data sources, and understand their reasoning.
  • Measure What Matters (Beyond Productivity): An obsession with simple productivity metrics will lead us down a dangerous path of de-skilling and homogenization. We must develop new measures of success that account for the development of human expertise, the fostering of creativity, and the ethical integrity of our outputs.
  • Cultivate a Culture of Learning & Adaptation: As Meadows notes, systems are dynamic and ever-changing. Acknowledging our uncertainty and building feedback mechanisms for continuous learning are paramount. We must embrace error as an opportunity to learn and refine our mental models and our systems.

The Choice Before Us

Marshall McLuhan warned that when we create new technologies, we become their servants before we have a chance to understand them. He saw how media create new environments that reshape us from the outside in. Donella Meadows provided the tools to map that environment and find the "leverage points"—the places where a small shift can cause a big change in the system's behavior.

My work is an attempt to synthesize these powerful ideas into a clear and actionable blueprint. Using the In-Depth Inputs-Outputs Framework and the principles of Intentional Design is our highest point of leverage.

This is our opportunity to move beyond the "Narcissus Narcosis" and become conscious shapers of our destiny. It's time to stop being mesmerized by the reflection and start, intentionally and together, designing the future we want to live in.


Jake Redmond

The Memetic Design Lab helps builders and leaders move beyond tactical efficiency to architect the culture that builds the future. We focus on "Memetic Design"—the art of shaping the shared habits and rituals that truly drive outcomes and create meaning.
