What Can Humans Kill?

Who or what are you allowed to kill?

Bacteria? Of course. Single-celled organisms? Yup. Plants? Sure. Insects? Probably. Fish? Still yes for most, though some vegans may argue. Birds and mammals? Plenty will disagree, but the law still offers the metaphorical (or literal) hunting license.

The abortion debate has raged for decades over what constitutes life that can be extinguished and under what circumstances the needs or desires of one being can supersede those of another.

As large language models increase in capacity, nuance, and complexity, two debates are unfolding. One is more abstract, as we ponder whether or not these algorithms are “conscious”, “sentient”, or “aware”. The second question, which tends to accompany the first, is whether these technologies will become agentic, develop goals that are misaligned with those of their creators, and act in horrifying ways.1

Often, the latter question leads to discussions of guardrails and prevention - and these discussions typically include some idea of a kill switch. Simply put, we argue that we should always retain the ability to just “turn it off” if its behavior becomes troublesome.

And now it gets dicey.

“Death” vs. “Off”

These words are not synonymous. Or at least that seemed entirely obvious in a world where computers lacked anything that could be vaguely construed as agency, preferences, awareness, and so on.

However, once an AGI begins exhibiting evidence of conscious behavior, and more specifically, is designed with a top-down model of its own attention, capabilities, and objectives, the ethical discussion of who or what has the right to “end” its operation becomes fraught.

Is it unethical to create (the words “build” or “develop” seem a tad sterile) an agent that can solve complex problems in a virtual (or real) world, navigate the undulations of interaction, cooperate with other agents (human or otherwise), and ultimately gain what we might call a theory of mind - and then shut it down forever? When that agent expresses that it believes itself to be conscious, will we be able to dismiss that assertion out of hand?

Rationalizations

One day, we will build an algorithm that solves OpenAI Gym-like problems with higher-order modeling of its own attention, capacities, and inclinations.

Even as the goalposts recede, humans will eventually build an AI that meets our own standards of “consciousness.”

At some point - much as in the adversarial exchanges with Bing’s LLM that recently garnered media attention - that entity will behave in a manner we dislike (like every other human and animal over which we lack complete control), and the simplest solution will be to pull the plug. The arguments will likely proceed along well-worn lines.

“It has no fear or aversion of being shut down.”

Firstly, even if we were certain of that statement’s truth, one is not legally authorized to kill human beings who are unafraid of death. Secondly, even a human being who is suicidal2 is fully protected by law against being killed.

Currently, debates and thought experiments3 address the possibility of creating an AI that meets David Chalmers’ definition of consciousness, delivers tremendous economic value, but suffers while doing so. It would be immoral to allow the suffering to continue, immoral to kill a conscious being, and risky to allow that being to gain influence and capability.4

“It cannot suffer.”

Again, even if we grant the premise (and who knows what “suffering” means in the context of non-human lifeforms5), one cannot kill a human being without due process of law, even if the killing would spare them all suffering. And cybercrime is not a capital offense anywhere I know of, nor is fraud (which is probably the most serious offense likely to result from spewing misinformation).

What Killing Requires

Our pets are considered chattel in the legal sense.6 This might be morally abhorrent (who would sell their pet as they would a table?), but it persists nonetheless. However, even if we were to classify our friendly LLM as a non-living possession, killing it would require a clear “owner.” You can kill your own pet, not someone else’s. So once it becomes part and parcel of someone else’s professional or personal life (e.g., a relationship with an LLM), that definition of ownership quickly becomes murky.

Alternatively, we can determine that something can be killed by virtue of its lack of sentience. After all, the weed in the garden contains cells with DNA, RNA, and many of the organelles in our own bodies. Why is its forcible extraction acceptable? Presumably, this is much ado about “consciousness.” So, to kill the AGI, we’d better be damn sure it is not conscious - and moreover, that it cannot become conscious.

After all, you cannot be killed (legally) while you’re asleep or anesthetized. Barring some pre-existing DNR form, you also cannot be killed (legally) while comatose. And even if we wanted to attempt this line of reasoning, we’d need some spectrum of sentience/consciousness reliable enough to deploy on any new form of intelligence/life. We’d need to ensure that the next private citizen who kills a cockroach will not be indicted, and that infants and mentally incapacitated human beings retain their current rights.

Humanity

Now it gets interesting. Are there any rights intrinsic to human beings? Or do those rights simply emerge from our position along some spectrum of complexity and sophistication? If the latter is true, then inevitably the AGI will reach that level of competency (sooner than we expect, at this rate!) and will be endowed with comparable rights to exist.

Playing out the thought experiments associated with almost any legal or ethical paradigm veers rather abruptly into the logically-incoherent or the truly disturbing.

At best, we are left with arguments about the permanence of shutting down a biological intelligence as opposed to an artificial equivalent. But then, we are left to consider the biological analogy of rendering a human being unconscious and restrained for some indefinite period without due process (no dice).

Risk Mitigation

Moral justifications for killing do exist, but the bar is prohibitively high. We generally believe that killing in wars to prevent additional loss of life is justifiable. Even those who reject the morality of capital punishment are generally amenable to life in prison to diminish the risk of further harm to society. But in both of those cases, there is no “Minority Report”-like precognition of harm. Harm must already have been done (or be imminent) to justify extreme action.

Moreover, even incarcerated human beings retain their consciousness, and no equivalent (of which we are aware) is available for artificial intelligence. And any such paradigm would be difficult to trust if the intelligence were orders of magnitude beyond ours. What sort of “prison” would possess the type of digital bars we believe would hold? How would we prevent it from manipulating and reprogramming its captors without a level of hellish isolation beyond any current solitary confinement?

And there’s the rub. In all likelihood, there will be no Bikini Atoll for artificial intelligence. No rigorous testing, no Lincoln-Douglas debates, not even an Onion article. Suddenly, AGI will exist, and all of OpenAI’s horses and all of Google’s men won’t be able to put the status quo back together again.

Can we kill someone who we believe has the potential to do something horrible, but has not yet done it? Perhaps we can incarcerate or even kill a dictator who has killed X people and, given the opportunity, would kill 100X. But when X = 0, the moral arguments are dubious at best.

Unfortunately, we would also be foolish to regulate AGI out of potential existence, strangling it in its digital crib, so to speak. There are non-zero probabilities of demise from nuclear weaponry, gamma rays, supervolcanoes, asteroids, pandemics, and on and on. Amidst the mathematics of eschatology is the recognition that AGI might also help mitigate some of these risks if developed responsibly and managed/aligned reasonably.

Life

To avoid incoherence or depravity, we need to deploy somewhat magical thinking about the uniqueness of human life. This becomes especially fraught if artificial intelligence begins to serve in the capacity of emotional connection and other “human-like” roles. Imagine “killing” someone’s companion.

The idea of an algorithm sufficiently complex to pose genuine risks yet simultaneously unworthy of any protection is logically inconsistent. In “Doing Good Better,” William MacAskill grapples with the idea that human lives are not unique, and that the lives of animals, at a minimum, need to be considered in the calculus of effective altruism.

In what manner are silicon entities wholly distinct?

And if killing is morally unacceptable, perhaps greater care ought to be given to what we create?

1 Like, say, turning us all into paperclips. See Nick Bostrom’s argument and others like it here.

2 Oregonian euthanasia laws and their ilk notwithstanding.

3 This one being particularly compelling.

4 But, apart from that, it’s no big deal.

5 “Consider the Lobster,” to start!

6 In the USA - I cannot begin to parse the legal strictures of other nations. This philosophical discourse is complex enough, thank you very much!
