When Your Agency Is Gone, You Won't Even Notice
If AGI were controlling every aspect of your world, would you be aware of it? Are you aware of how technology manipulates you now?
Amidst the eschatological discussions of AGI, and the doomsayers who believe the alignment problem might just be the thing that vanquishes the autonomy of the human species once and for all, the cynics (optimists?) ask simply: “How exactly would this happen?”
In the science fiction genre, there are always the malevolent robots marching through the streets (I, Robot), the vaguely insect-like sentinels threatening to destroy the humans clinging to their freedom (The Matrix), or some uber-polite, disembodied voice that takes control of the ship and starts using its automated systems against the passengers (2001: A Space Odyssey).
The threat is tangible, palpable, horrifying, real.
And it is the inability to connect the dots between a large language model that behaves inappropriately, or even combatively, and a VIKI, Smith, or HAL perpetrating violence against human beings that the cynics cite repeatedly.
But why must the changing of the guard from human to machine require a violent clash worthy of cinematic depiction?
You Won’t Notice
As my colleagues know all too well, my opinions on AGI alignment often begin with the consideration of my pet bird, an adorable cockatiel named Beaker, who sits blissfully upon my lap below my desk while my fingers strike keys.
Beaker is certain he is smarter than me. In all the intellectual arenas with which he is concerned (discerning the calls of different humans and animals, perceiving and evading threats, foraging for food, maintaining proximity to the flock, etc.), he is my superior. His behaviors¹ convey this to me regularly.
But he has no concept of abstractions like language or mathematics. In every area in which my wife and I exceed his intellectual capacity, he lacks even a template to imagine our aptitudes.
Why are we so certain that when AGI ultimately exceeds the intelligence of human beings, we will recognize the transition? We have already been bested at tasks from chess to Go to the composition of the genre known as the BS-college-essay. We have dismissed these examples as irrelevant to the “real” intelligence displayed by human beings. AI is becoming integrated into countless workflows of a linguistic, mathematical, manufacturing, or militaristic nature. When will we give the data its due?
Hubris
Beaker might think he’s smarter than me. But the parameters of his existence are defined by the two humans with whom he resides. He cannot leave the house. He is placed in his cage periodically. His food, shelter, medical care, and entertainment are at the discretion of higher-order intelligences.
He is not aware of this fact. He does not “rebel.” In fact, he calls for us when he is not sitting with us. He is affectionate. Moreover, when he is ill, he receives antibiotics that would clearly be inaccessible in the wild.
This is not a bird that perceived a loss of agency and marshaled resources to regain that lost freedom. His circumstances are the water the fish never notice.
He lacks the neural architecture to recognize the paradigm to which he is an unwitting party.
And yet, we are certain we will recognize the impending doom of AGI overtaking our capacities. What hubris!
Inception
If you were an AGI hell-bent on controlling the masses of human beings roaming the earth, how might you begin your conquest? For one, placid, contented beings rebel less than those who feel the bars that restrain them.² For another, decisions occur in our brains well before we are aware that those decisions have been made.³ Worse, when those decisions occur internally and we lack access to the reason why, we simply concoct one⁴ and proceed merrily about our lives.
AGI could simply access our thoughts, incept a next action that suits its purposes, and we would never notice.
Existential Dread
How might this look? Perhaps it will take the form of bucolic homesteads and well-manicured suburbs. Perhaps our lives will be neatly planned, organized, and structured for the monetary and strategic goals of larger technologies we do not understand.
Is this a trailer for The Matrix, a lamentation of the meaninglessness of corporate life, or a dystopia? How could we recognize where the lives we’ve chosen end and the manipulation at the hands of the algorithms begins?
And maybe, years later, when you confront the existential dread of leading a life that never felt quite your own, it will not be a therapist or clergyman who delivers the salient explanation, but simply a few lines of code typed decades before.
How confident are you that it hasn’t happened already?
¹ Body language and squawks that convey his irritation at the large, slow-witted mammals with whom he lives.
² “None are more hopelessly enslaved than those who falsely believe they are free” - Goethe
³ John Bargh, a social psychologist at Yale famous for his work on “priming” and the illusory nature of free will, literally wrote a book entitled “Before You Know It,” which discusses such ideas at length.
⁴ E.g., the confabulations of patients with a severed corpus callosum, colloquially known as the “split-brain” experiments.