Is It Ethical To Interrupt The Brain?

Given the potential of harnessing human cognition, BCI technology will one day generate significant economic gains. The relevant ethical questions are: by what means, and who will realize those gains?

Modern tech giants generate annual revenues on the order of hundreds of billions of dollars. With LLMs, thirteen-figure annual revenues are surely around the corner. All of this is made possible by the trove of behavioral data from our interactions with the technology in our homes, on our desks, and in our pockets.

Currently, far too many ideas in human minds are lost forever: never captured as they materialize, or dissolving because the notes we take in the moment are insufficient messages to our future selves. Perhaps the greatest opportunity for BCI lies in generating the necessary transcription fluency¹ not only for the other human beings with whom we might collaborate, but for the future self that might be in a position to act upon an idea.

The brain remains, despite our best scientific efforts, a mysterious organ. What value might be unlocked if we understood not only its mechanical operations, but the neural patterns from which thoughts and insights emerge?

Interruptions violate social norms, yet we remember more effectively when another person’s words spark thought and memory. None of us aspires to impoliteness, least of all in a professional setting where ideas might have maximal value (but where the price of improper behavior might also be highest).

Perhaps this is much ado about agreed-upon intentions. If my colleague and I both agree about the purpose of a meeting, then an algorithm that inserts peripheral thoughts in pursuit of shared, unconscious communication, accelerating our resolution of a given problem, could be enormously beneficial. In this case, like a street sign that adds relevant information to one’s view without distracting from the task at hand (driving safely), BCI might augment transcription fluency invisibly. The difference between this paradigm and current, somewhat invasive technology lies in our shared agreement on the purpose of the tool. Current paradigms misalign the objectives of provider and user: I may use Google Maps for one reason, while its creators are interested in my usage for an entirely different purpose (and they harvest the economic gains as a result).

Neuroethics² might ultimately rest upon who defines intent, who benefits from the clarification of that intent, and whose objectives will be met.

What if, when you say something and I believe I have understood it, we both received the probability of mutual shared understanding, without interrupting the conversation? You could then either proceed to the next topic or prompt me, provoking a response in my mind in pursuit of a shared goal. The technology would have amplified the transcription fluency of your communication, increased my understanding, and accelerated the conversation toward shared goals.
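To make this concrete, here is a minimal sketch of what such a "shared understanding" score might look like, assuming the hard part (decoding a speaker's utterance and a listener's internal paraphrase into text) is already solved. Everything here is an illustrative assumption rather than an existing BCI API: the model name, the threshold, and the use of sentence-embedding similarity as a crude proxy for mutual understanding.

```python
# Hypothetical sketch only: estimating "mutual shared understanding"
# between a speaker's utterance and a listener's internal paraphrase,
# assuming both have already been decoded into text by some BCI.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def shared_understanding(utterance: str, paraphrase: str) -> float:
    """Cosine similarity of sentence embeddings, clipped to [0, 1],
    as a crude proxy for the probability of shared understanding."""
    speaker_vec, listener_vec = model.encode([utterance, paraphrase])
    score = util.cos_sim(speaker_vec, listener_vec).item()
    return max(0.0, min(1.0, score))

p = shared_understanding(
    "We should ship the prototype before the conference.",
    "You want the demo ready ahead of the conference deadline.",
)
if p < 0.7:  # arbitrary threshold; a real system would calibrate it
    print(f"Low shared understanding ({p:.2f}): prompt for clarification.")
else:
    print(f"Proceed to the next topic ({p:.2f}).")
```

The point of the sketch is the interaction pattern, not the math: the score arrives without either party breaking the flow of the conversation.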

What if, in learning how the brain works, we discover that existing conversational norms are antiquated relics of an earlier age?³ What if most patterns of interaction are throwbacks to the synchronous call-and-response conversations of polite society? For this reason, stating “never do X with neural data” is less productive than enumerating principles that always increase the agency of the individual using the technology, in pursuit of their own purposes. Could we help generate more complete thoughts? Could we help discover how the brain truly produces insight?

Current neuroethics and analyses of LLMs are fixated upon the issues of the age, from diversity to bias in algorithms calibrated with data that is insufficiently representative of the population. These debates are relevant and should be addressed. However, the larger questions, about whether technology will enhance our understanding or manipulate it for commercial or pernicious purposes, are more likely to define the decades to come.

Neuroethics might represent the greatest neglected problem of our time. Climate change is important, but there is no shortage of scholars, politicians, and activists focused on it. The ability to harness the capacity of human thought in pursuit of our own aspirations and dreams, rather than those of commercial behemoths, seems at least as massive in moral implication.

¹ Simply, the ability to communicate one's thoughts clearly. Right now, as I type these words, I am hoping to communicate in a way that you will understand. This is challenging for both of us: I spend time and energy choosing words with care, and you spend time and energy parsing and processing. What if we could do better? What if understanding were effortless and instantaneous?

² An outgrowth of bioethics, neuroethics focuses on what is morally acceptable with respect to the alteration and manipulation of thoughts in brains. The discussion encompasses free will, consciousness, neuromarketing, and the nature of the human experience. Though much could be written in this footnote, this essay, or a textbook on the subject, epistemic humility suggests that our understanding will evolve as the technological and pharmacological possibilities grow. What seems clear is that debating these topics should precede the development of augmentative, broadly deployed, invasive BCI technology, rather than attempting to curtail its use-cases thereafter. Or, less pretentiously: putting the toothpaste back in the tube is going to be impossible, so let’s think for a moment before we squeeze.

³ Maybe it’s why we don’t communicate well anymore?
