The AI Revolution Isn't About Intelligence. It's About Agency.
Why Our Future Depends on a Critical "Divorce" and Our Role as Architects of a New Reality.
In technology, we're obsessed with creating an artificial mind. Decades of science fiction have sold us this story—replicate human cognition, consciousness, maybe even sentience. This is a fascinating pursuit, but it misreads what's actually happening with AI, focusing on a quasi-mythological goal while missing the practical, engineering, and social transformation happening right before our eyes.
To understand AI's real impact, we need to abandon the "marriage" between machine and intelligence. Instead, we should consider their "divorce"—a split between agency and intelligence. This distinction, from philosopher Luciano Floridi, offers engineers and ethicists a practical tool. It changes how we see our role as builders and our duties as governors of the systems remaking our world.
Key Concepts in this Article:
The Divorce 📜: Agency (acting and interacting with an environment) is not the same as intelligence (human consciousness and understanding). Modern AI is, at its core, engineered agency at scale.
Enveloping 📦: We're not making AI smarter. We're making the world dumber—creating controlled, standardized "envelopes" where AI can actually function.
The Architect's Burden 🏗️: Our job is changing. We're not just model-builders anymore. We're architects of entire socio-technical systems, and we own what they do.
The Great Divorce: Agency Without Intelligence
To be precise: agency is an entity's ability to act and interact with an environment to pursue a goal. Intelligence, as we humans experience it, includes understanding, consciousness, and self-awareness. AI's breakthrough isn't intelligence; it's the engineering of sophisticated agency without intelligence.
This idea goes back to the very foundation of the field. The 1955 Dartmouth Proposal, which coined the term "artificial intelligence," defined its goal with a telling counterfactual:
...the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.
Notice that? Observable behavior, not inner states. That's the engineering approach in a nutshell. Floridi calls this AI's "two souls": the engineering soul has been "astoundingly successful" at mimicking intelligent behavior, while the cognitive soul—the quest for actual intelligence—has been a "dismal disappointment" [1].
So what made the engineering approach work? Floridi's answer: enveloping.
Here's the dirty secret. We're not making AI smart enough for our messy world. We're making the world simple enough for dumb AI. As Floridi puts it: "the success of AI is primarily due to the fact that we are building an AI-friendly environment in which smart technologies find themselves at home" [1].
Think dishwasher. We didn't build some humanoid robot that mimics how humans wash dishes with hands and sponges. No—we built a sealed box where spinning arms blast hot water. Same job, totally different approach. We changed the problem itself. And now? We're doing this globally, turning "difficult" problems (the ones needing human adaptability) into "complex" ones (solvable with raw computation).
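The same enveloping move shows up in everyday software design: instead of asking a capable agent to cope with arbitrary input, we constrain the world it sees. Here is a minimal sketch of the idea; the sensor fields, thresholds, and action names are invented for illustration, not taken from Floridi:

```python
from dataclasses import dataclass

# An "envelope": a deliberately simplified world in which a limited
# agent can succeed. The agent never sees the messy raw environment;
# it only operates on a small, closed set of normalized states and actions.

ALLOWED_ACTIONS = {"wash", "rinse", "dry"}  # closed action space

@dataclass
class EnvelopeState:
    soil_level: int   # normalized 0-10, not "how dirty does it look?"
    load_size: int    # normalized 0-10

def envelope(raw_sensor: dict) -> EnvelopeState:
    """Reduce an open-ended environment to a standardized state."""
    soil = max(0, min(10, int(raw_sensor.get("turbidity", 0) / 10)))
    load = max(0, min(10, int(raw_sensor.get("weight_kg", 0))))
    return EnvelopeState(soil_level=soil, load_size=load)

def agent_policy(state: EnvelopeState) -> list[str]:
    """A trivially simple agent that works *because* of the envelope."""
    plan = ["wash", "rinse"]
    if state.soil_level > 7:
        plan.insert(0, "wash")  # extra pre-wash cycle for heavy soil
    plan.append("dry")
    assert all(a in ALLOWED_ACTIONS for a in plan)
    return plan

plan = agent_policy(envelope({"turbidity": 85, "weight_kg": 4}))
print(plan)  # → ['wash', 'wash', 'rinse', 'dry']
```

The "intelligence" here is negligible; all the real engineering went into the envelope that turned a difficult problem into a computable one.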
A Framework Under Pressure
This framework explains a lot. But AI is moving so fast that it strains the framework itself. When models write poetry or propose scientific hypotheses, does the agency-intelligence split still make sense?
Emergentists and functionalists would say no. But there's still one massive difference: grounding. Human intelligence grows from lived, subjective experience. AI's "understanding"? It's sophisticated mimicry—patterns from training data (basically an envelope containing all human text and code). No genuine world model. No real experience. That's the divorce right there: powerful phenomena, yes, but fundamentally different ones.
The Architect's Burden: A Pragmatic Resolution
Philosophy isn't going to settle the debate on machine consciousness anytime soon. But engineers, designers, and leaders cannot wait. The agency-intelligence divorce isn't just philosophy anymore. It's our most practical tool, an engineering and ethical razor. Forget the dream of building 'minds.' We're building high-stakes socio-technical systems. And that means three responsibilities land squarely on us:
From model-builder to system architect
Our identity must shift. For too long, AI development has focused on models. The "divorce" requires us to zoom out. An elegant algorithm can become useless or dangerous inside a poorly designed system. Our job is to design the entire structure: from data foundations to user-facing facades and ethical emergency exits.
Absolute responsibility in a distributed system
If AI systems are powerful but unconscious agents, then "algorithmic autonomy" is exposed as a convenient fiction. We can no longer claim "the algorithm decided." Responsibility is a chain that runs through the entire organization, and it demands traceability, explainability, and governance by design.
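"Governance by design" can be made concrete: wrap every automated decision so that it is traceable by construction. A hedged sketch follows; the record schema and the toy scoring model are invented for illustration:

```python
import time
import uuid

def traced_decision(model_fn, model_version: str, audit_log: list):
    """Wrap a decision function so every call leaves an audit record.

    Responsibility stays with the organization: each record ties an
    output to the inputs, model version, and timestamp that produced
    it, so no one can hide behind "the algorithm decided".
    """
    def wrapper(inputs: dict) -> dict:
        output = model_fn(inputs)
        audit_log.append({
            "decision_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        })
        return output
    return wrapper

# Example: a toy credit-scoring "model" placed under audit.
log: list = []
score = traced_decision(lambda x: {"approved": x["score"] >= 600},
                        model_version="credit-v0.1", audit_log=log)
print(score({"score": 640}))  # the decision is returned AND recorded
```

The design choice is that traceability is not optional instrumentation bolted on later; the only way to call the model is through the wrapper that records the chain of responsibility.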
Engineering against the moral crumple zone
We must protect the humans within our systems. In hybrid human-AI systems, careless design forces the nearest human operator to absorb blame for the system's failures, turning them into a **"moral crumple zone"** [2]. Guarding against this requires meaningful human control and deliberately introduced "beneficial friction" that creates room for judgment. It also raises a pointed question about who adapts to whom, or as Floridi memorably puts it: "Given that AI is the stupid but laborious spouse and humanity is the intelligent but lazy one, who will adapt to whom?" [1].
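"Beneficial friction" can itself be engineered: high-stakes actions pass through a gate that requires an explicit, recorded human judgment instead of a reflexive click. A minimal sketch, with the risk threshold, action names, and review interface all invented for illustration:

```python
def gated_action(action: str, risk: float, human_review, threshold: float = 0.7):
    """Execute low-risk actions automatically; above the threshold,
    halt and demand a deliberate, recorded human sign-off.

    `human_review` is any callable returning (approved, rationale).
    """
    if risk < threshold:
        return {"action": action, "mode": "automatic"}
    # Beneficial friction: the system stops and asks for judgment,
    # rather than letting the operator rubber-stamp and absorb blame.
    approved, rationale = human_review(action, risk)
    if not approved:
        return {"action": None, "mode": "human_veto", "rationale": rationale}
    return {"action": action, "mode": "human_approved", "rationale": rationale}

# Example: a reviewer who vetoes a risky automated dispatch.
result = gated_action("dispatch_drone", risk=0.9,
                      human_review=lambda a, r: (False, "weather too uncertain"))
print(result["mode"])  # → human_veto
```

The point is not the ten lines of code but the contract they encode: the human is given authority and context before the action, not liability after it.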
Where This Takes Us
If we set aside the romantic myth of creating artificial intelligence, we are left with the practical reality of engineering artificial agency. This confronts us with our architectural responsibility.
But how do these principles translate into the daily reality of a business? How do we move from being mere users of AI to becoming true value creators who redesign systems from the ground up?
In the next article, we will translate this framework into a concrete business strategy, exploring the maturity model that leads from a `+AI` approach to an `AI+` mindset.
References
[1] L. Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford, UK: Oxford University Press, 2023, ch. 2.
[2] M. C. Elish, "Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction," Engaging Science, Technology, and Society, vol. 5, pp. 40-60, 2019, doi: 10.17351/ests2019.260.
* Disclaimer: The ideas, arguments, and insights in this article are entirely my own, born from my professional experience and reading. As a non-native speaker, I partner with AI tools to bridge the gap between my concepts and clear English prose. They assist with grammar and help refine phrasing for precision. Every sentence is personally reviewed, and I hold full editorial responsibility for the final content and its message.