Exploring the Intersection of Network Effects and Context-Specific Knowledge in the Digital Age
The history of human learning is the history of context-specific knowledge. Is AGI context-specific?
Humans are inadequate thinking machines. Our working memories are wholly insufficient for much of the work we wish to accomplish. We have little concept of what it means to have agency, let alone how to bring about that state. And our powers of reasoning are often biased, flawed, and short-sighted.
There is cause for hope. Throughout the course of human history, as a species, we have developed numerous internal and external structures to augment our limited powers of cognition. Internally, we deploy abstractions like mathematics and language to package complex concepts for later communication. Externally, from the primitive scrawlings upon cave walls, to papyrus scrolls recording debt, to delegation, to carrying the entirety of the internet in our pocket, we have developed technologies to store information, extend and augment our working memories and processing power, and scale our productivity.
Context
But even the most advanced note-taking systems, integrated seamlessly with our frequently-used applications, cannot capture our state of mind. They capture facts, not context.
Why does this matter?
We are aware of some of the deficiencies in our thinking and devise impressive structures with which to compensate. This is a responsible decision when operating a machine one knows to be flawed. But much of our knowledge is context-dependent.
Consider the senior software engineer, the experienced electrician, the calloused plumber, or the gifted auto mechanic. Their knowledge base is not simply a compendium of examples and solutions. Their instincts and intuitions emerge when presented with a system functioning incorrectly—they recognize paths to solutions whose origin they cannot describe.
Literally, the “knowledge” they deploy does not exist, absent the context in which it becomes beneficial. This is among the reasons why experienced professionals cannot simply transfer their knowledge via documentation and conversation. Even apprenticeships and shadowing often fail to replicate the diversity of contexts in which issues arise, patterns are recognized, and solutions are devised.1
As a thought experiment, consider a perfect, subatomic-level duplication of a twenty-five-year-old Albert Einstein. His annus mirabilis would occur the following year and change the course of theoretical physics. How confident are you that this simulacrum achieves the same scholarly results? Will he be placed in an environment in which those discoveries can occur? What if he is impeded by the professional and romantic circumstances of his life? Revelation is context-dependent. Into what context will we place the AGI, and how confident are we that said context generates the desired outcome with respect to alignment?
Network Effects
A senior professional has experienced a broader array of contexts and can therefore solve a broader array of problems more effectively, which in turn builds intuition for future problems. A senior organization, one that has completed a significant number and diversity of projects, has developed this kind of problem-solving instinct at the organizational level. That instinct infuses the individuals operating within it, who then solve problems more effectively and build contextual intuition and knowledge of their own.
Every organization aspires to transfer knowledge across employees and projects, but ultimately, this process is as ineffable as the instincts of the senior professionals themselves.
Another deficiency of the human brain is the inability to capture our state of mind when we capture information. How many times have you attempted to read a note left to your future self and had no idea what exactly your past self intended? In the future, brain-computer interfaces or “BCIs” might not only provide auxiliary working memory, but also the capacity to capture states of mind.2
In so doing, the context in which knowledge appears could be retained, retrieved, and possibly even transferred. In turn, a group of brains with context-specific knowledge could augment not only each other’s repository of experiences, but their list of useful contexts.
This is a network effect for insight.
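As a rough, Metcalfe-style illustration (an assumption of this sketch, not a claim the essay makes): with n connected minds, the number of possible one-to-one context exchanges grows quadratically, so each new participant enriches every existing one.

```python
def pairwise_exchanges(n: int) -> int:
    """Number of distinct one-to-one context exchanges among n minds.

    Each of the n minds can share context with each of the other n-1,
    and an exchange between A and B is counted once, hence n*(n-1)/2.
    """
    return n * (n - 1) // 2

# Each additional mind adds n-1 new channels for sharing context:
for n in [2, 3, 10, 100]:
    print(n, pairwise_exchanges(n))  # 2→1, 3→3, 10→45, 100→4950
```

The quadratic growth is what makes this a network effect rather than mere accumulation: the value of joining rises with the size of the network already in place.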
Currently, humans assemble ANI3 to accomplish what BCI cannot. Namely, we assemble ever-larger transformers4 and feed those transformers ever more examples in the hopes of addressing every conceivable context and extracting the emergent wisdom thereof.
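For readers unfamiliar with the mechanism, a minimal sketch of the scaled dot-product attention at the heart of a transformer follows; it uses numpy, and the shapes and variable names are illustrative only. The point is footnote 4's: pattern recognition reduced to mathematical operations, with each token's representation re-weighted by its similarity to every other token.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each query attends over all keys,
    # producing a context-weighted mixture of the values.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key dimension:
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # one context vector per query

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # self-attention: tokens attend to each other
print(out.shape)  # (3, 4)
```

A real transformer stacks many such layers, with learned projections for Q, K, and V; but even this toy version shows how "context" enters the computation, as every output is a mixture shaped by the surrounding tokens.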
Still, the ability to retrieve context may be one of many approaches neglected by the reinforcement learning paradigms offered to stave off risks from AGI.
Values
We have argued in previous essays that misaligned AGI presents a risk to your personal welfare that likely exceeds the most common causes of death. Furthermore, we argued that in the absence of our own clarity of thought, our odds of solving the control problem dwindle. We cannot teach a machine our values when we don’t understand our own values well enough to express them in code.
And what are values, if not context-specific? Can you enumerate the proper moral response in every conceivable situation? Of course not! Hence the profusion of utilitarian thought experiments wherein one value system advocates for the murder of one individual to save many5 or else asserts moral absolutes that counterexamples ruin.
And if we cannot enumerate context-specific knowledge for a potential AGI, how will it acquire the intuitions of propriety that are hotly debated in philosophy lectures but easily grasped by any non-sociopath?
Society, to the extent that it is civil, is a knowledge-based network effect. We accept norms, deploy our context-specific intuitions as a senior developer reacts insightfully to a misbehaving codebase, and make progress.
Individuals violate these norms, from the murderous psychopath to extremist and fascist movements. From the perspective of the species, however, self-correction and robustness are evident. Humanity not only recovers, it progresses. Even allowing for the survivorship bias required to pen the argument above, human society does seem generally “aligned.” The challenge is that individuals are not subject to the same constraints, which means a single AGI could run off the rails and present an existential risk, particularly in a context in which its misaligned objectives lead to such an outcome.
In this case, perhaps it is not what you (or an algorithm) “know,” but what emerges when the proper context arises.
1 Incidentally, black-box machine learning models are at least an attempt to replace some of this pattern recognition. Unfortunately, there is an inevitable loss of wisdom and insight when an algorithm attempts to reproduce context-specific knowledge with ever more examples and ever deeper networks.
2 There is a general perception that certain states of mind are intrinsically counterproductive (e.g. states of heightened emotion and/or irrationality). However, these states may also provide contextual information required to reproduce the state wherein an insight or realization occurred.
3 Artificial Narrow Intelligence or “Weak AI,” not to be confused with an entirely different ANI (an auditory nerve implant), developed by one of our collaborators!
4 And other Artificial Neural Networks (ANNs), all of which are tasked with modeling pattern recognition with mathematical operations.
5 Specifically the utilitarian arguments that tend not to hold up to scrutiny in the face of the repugnant conclusion or the justification of harvesting the organs of healthy young adults in hospitals.