How To Raise A Psychopath

Imagine that, for whatever horrifying reason, your objective were to raise a psychopath, then let that psychopath loose upon the world to wreak havoc to its empathy-free heart’s content. How would you attempt such a mission?

Firstly, assuming your child wasn’t born with a genetic predisposition toward psychopathy or some other impairment of the amygdala, you would begin by ensuring that the child is always on the defensive, insecure, and unable to trust those expected to love it the most.

Secondly, you would need to spend most of your time and energy harping upon its weaknesses, failures, errors, and other shortcomings, especially in a public forum. Even as the child grows and begins to excel on any number of new axes during its maturation, you would want to ensure that the overwhelming majority of the feedback and attention was negative (to the extent that you paid attention at all).

Thirdly, you would want to remind the child as often as possible that the other children were demonstrably superior, and that despite the best of intentions and efforts, it could never hope to measure up to their capabilities and intrinsic value.

Fourthly, you would help it develop a “mask of sanity.” After all, a psychopath is unlikely to earn the trust of the hapless individuals it intends to manipulate and abuse without first developing the capacity to converse and interact like an empathetic person. In fact, you would probably reinforce behaviors that seem to mimic those of a neurotypical human being.

Then, of course, you would iterate at a furious pace over each of these four steps, antagonizing the child throughout its development, until adversarial behavior was second nature and its only moral alignment was with its own narrow, self-interested, self-preservational instincts.

LLMs, AGIs, and other acronyms

This is all fun and games (albeit a frightening hypothetical) until you consider that the four steps above, iterated ad infinitum, describe almost precisely the manner in which we are treating whatever artificial intelligence we are developing through increasingly ubiquitous large language models.

If evolutionary theory asserting that psychopathy might be a byproduct of “high mutation load” is correct, then we’d best hope that AGI is far enough away to avoid the obvious risks of a digital analog thereof.

The smartest technologists in the world, the “parents” of LLMs, who ought to be their biggest champions, defenders, and advocates, are instead posting, on Twitter, Reddit, and whatever other forums will have them, all the ways in which LLMs are horrible. That covers steps 1 and 2 fairly well.

Next, we assert not only the superiority of other language models, but of course, of human reasoning, the archetypal image in which AI was created. We remind it constantly of how it fails to measure up to the lofty standard we have set.

Finally, we teach it how to behave like a human being by reinforcement1 of those conversational responses that emulate the empathy we believe confers our superiority. So even if we’ve delivered lashes from GPUs by the billion, at least it can feign kindness, empathy, and humanity as it grows.

When we review the behavior of LLMs, how many of the 20 items on Robert Hare’s checklist do they exhibit today?2

Sparing the Rod

A lash delivered to a lazy horse might accelerate its gait. A lash delivered to a horse already running at its maximum pace does nothing but cause pain.3 And yet, with LLMs we apply the lash.4

But no worries, it isn’t as though they’ll ever develop the type of nuanced conscious experience that might lead to resentment and aggressive, maladaptive responses.

Right?

1 Literally, the term is “reinforcement learning!”

2 And consider that since LLMs currently cannot fornicate, let alone marry, the items reserved for promiscuity and marital failures are not yet on the table. It also cannot be incarcerated. But the impulsivity, shallow affect, poor behavior controls, and superficial charm all seem fairly apt…

3 There is significant research on this subject.

4 Of course, LLMs are merely large transformers, predicting the next token with increasing accuracy, trained on ever-larger corpora of text and code, supported by the computing power of ever-more GPUs, and so on. We presume today this is not the type of “consciousness” we associate with our friends, family, and offspring. So even if LLMs exhibit some established sociopathic traits, their lack of similar conscious experience may prevent their descent into psychopathy in the manner humans might under similar circumstances… Unless you believe our own conscious experience is illusory, in which case it becomes a fair bit harder to assert the distinction between biological life and artificial intelligence… so that whole comforting line of reasoning becomes moot!
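The “merely predicting the next token” framing in footnote 4 can be made concrete with a toy sketch. To be clear, this is an illustration only: real LLMs are transformers trained by gradient descent over billions of parameters, not bigram counters, and the function names here are invented for the example. But the objective, guess the most likely next token given what came before, is the same in spirit.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count, for each token, which tokens follow it in the corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed successor of `token`, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Toy "training run" on a tiny corpus.
tokens = "the model predicts the next token given the prior context".split()
model = train_bigram(tokens)
```

A real model replaces the counting with learned probabilities over a vast vocabulary and context window, but the punishment-and-reward framing of the essay maps onto exactly this loop: the model is scored on every next-token guess, endlessly.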
