Nobody Knows Anything About Machine Learning Right Now

The opposite of the Dunning-Kruger effect is the phenomenon whereby someone with a bit of knowledge in a field becomes acutely aware of how little they know. Machine learning induces that feeling in abundance: from language models to AI-generated artwork, transformers of ever greater scale and complexity are deployed by engineers with minimal (if any) intuitive understanding of their operation.

Perhaps this is unsurprising. After all, neural network architectures are, at least in theory, an attempt to emulate the structures the human brain uses to recognize patterns, think, and learn. But then, how much do we really understand about how learning, let alone thinking, actually works? As John Bargh argues in “Before You Know It,” much of what we perceive as our own thoughtful decision-making is little more than an explanation we provide ourselves retroactively, after our own neural networks have already returned the needed response.

The nature of consciousness, the human experience, and the meaning of our own existence are deeply elusive. In Scott Alexander’s essay, “Universal Love, Said the Cactus Person,” the protagonist of a psychedelic-induced dream asks the superhuman beings he encounters to factor inordinately large numbers, hoping to verify that he is actually talking to such beings rather than to hallucinations. Eventually, the beings describe the dreamer’s existence as akin to a life spent inside a car with no sense that getting out is even possible, and observe that most human exploration misses countless opportunities to gain knowledge and understanding because we’re too busy trying different buttons and features and, well, anything but getting out of the car.

Quite possibly, we are building algorithms we do not really understand to simulate a process of thinking and learning we do not really understand to help enrich an existence whose nature we do not really understand.

So while Google and Meta apply ever larger numbers of GPUs to ever more complex transformers, it is possible (even likely) that we are optimizing the driving experience when the destination is a short walk up the trail, if only we would get out of the car.

What Is Intelligence?

At best, we struggle to understand the nature of higher-order intelligence. At worst, we’re entirely ignorant of its nature. As discussed in a previous essay, my bird believes he’s smarter than me because he is my superior in every form of intelligence with which he is familiar. He notes possible predators, recognizes individuals by the sounds they make, and remembers spatial relationships. He’s better at all of this than I am. Of course, he lacks even the basic templates and abstractions needed to understand language and mathematics.

So why do we think we grok the nature of AGI?

Perhaps brain-computer interfaces (BCI) offer a mechanism to help understand, if not how to bring about AGI, then at least some elements of its nature. Currently, reinforcement learning builds algorithms that learn by rewarding desired behavior and penalizing undesired outcomes - but this requires some a priori notion of the objectives the agent should pursue and the kinds of results we should reward. Even inverse reinforcement learning, in which a model attempts to learn objectives and values by observing behavior, is similarly limited insofar as it assumes a priori that our behavior is actually the proper way to pursue our objectives.
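To make that dependence concrete, here is a minimal sketch of plain Q-learning in Python, on a hypothetical five-cell corridor of my own invention (not any particular library, benchmark, or system discussed here). The point is simply that the reward function is written by us before learning ever begins; the agent can only optimize against it, never question it.

```python
import random

# Toy illustration: even the simplest reinforcement learner needs its reward
# function handed to it a priori. A Q-learning agent walks a hypothetical
# five-cell corridor; we, the designers, have decreed that reaching cell 4
# is "good". The agent only optimizes against that decree - it never questions it.

N_STATES = 5
ACTIONS = (-1, +1)                     # step left or step right
GOAL = 4                               # our a priori objective

def reward(state: int) -> float:
    """Specified by us before learning begins, not learned."""
    return 1.0 if state == GOAL else 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s: int) -> int:
    """Pick the highest-value action, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(200):                   # episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        target = reward(s_next) + gamma * max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s_next

# After training, the learned policy marches right toward the goal we chose.
print({s: greedy(s) for s in range(N_STATES)})
```

Inverse reinforcement learning flips the arrangement and tries to infer something like `reward` from observed trajectories, but it still has to assume those trajectories reflect what we actually want.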

But we already know our cognition is suboptimal at best, and perhaps we’re just not great thinkers yet.1 So we’re still not getting out of the car - we’re still adding buttons and dials to the dashboard and trying to improve aerodynamics and fuel efficiency. These aren’t bad ideas per se, but we’re still in the car, wondering why that mountaintop in the distance remains inaccessible.

It is entirely plausible, even likely, that AGI, and any hopes of AGI alignment, will be the byproduct of a neglected approach.

“Easy” Problems

Many of my friends and colleagues have young children. We observe the natural ease with which these neural networks learn and generalize. Show the child a few drawings of their favorite animals, then an animated cartoon of those animals, then take ’em to the zoo. A few examples of each are plenty, and without any particular difficulty they figure out that the bears on the page, in the cartoon, and at the zoo are all examples of the same class.

The fact that we all mastered this extraordinary type of generalization before we learned how not to pick our noses in public2 suggests that our current approach is substituting computational force for some missing insight.

Now let’s consider another simple example of human intelligence. I toss you the keys to my 2010 Toyota Prius and tell you to run to Trader Joe’s and buy me a few boxes of frozen mac and cheese.3 Let’s say you’ve never driven a Prius, but you have driven an automobile. This task is not only manageable, but simple. You know how to drive, you know how to use your phone to find the nearest Trader Joe’s,4 and you know how to find the frozen food aisle. Even without ever having driven my vehicle, and possibly without ever having visited the Trader Joe’s near your current location (or any other, for that matter), the task of comprehension, navigation, driving, item location, purchasing, and returning is trivial. No current algorithm succeeds with any regularity at even one of these tasks, let alone the whole sequence.

Folks who could not pass an algebra exam would ace this test, while a machine adept at manipulating multidimensional tensors cannot. Again, is the limitation really just a matter of computational power?

The brain is characterized by conglomerates of neurons, with some conglomerates connected to others, perhaps representing some higher-level structure. Sure, a neural network could theoretically represent this with the proper set of hidden layers and mathematical architecture, but does this actually occur when the models are trained? Are our brains really executing some backpropagation algorithm, or is something far simpler and far more elegant the explanation for why the child immediately grasps what the supercomputer cannot?
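For concreteness, here is roughly what “executing a backpropagation algorithm” entails, sketched as a toy two-layer network learning XOR (a standard textbook illustration, not anything specific to the brain or to the systems discussed here): a forward pass, an error measurement, and the chain rule pushed backwards through every weight, thousands of times over.

```python
import numpy as np

# What "executing a backpropagation algorithm" entails, in miniature: a tiny
# two-layer network learns XOR by repeating (1) a forward pass, (2) an error
# measurement, and (3) the chain rule applied backwards through every weight.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0] for most seeds
```

Whatever a child’s brain is doing when it unifies the picture-book bear, the cartoon bear, and the zoo bear, it does not obviously look like this.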

Getting Better All The Time

The best tennis player in the world still has a coach. That coach, by definition, is a less skilled tennis player than the person he coaches. And yet, we all recognize the value of the external perspective and the insight it provides. In addition to our flawed thinking, metacognition - the ability to evaluate our own thinking - remains a challenge.

Imagine performing a task to the best of your ability. Imagine that at its completion, you feel secure in the knowledge that you could not have done any better. Now imagine the benefit of an observer who notes opportunities to improve that you could not discern during the completion of the task. This is why athletes, surgeons, and other professionals are (or should be) taught, mentored, and coached.

With respect to current approaches to AI (and the pursuit of AGI), we’re in desperate need of a coach. We need someone, anyone, outside the vehicle to help us course-correct. We do not know what we do not know; we only know that we are, in many cases, performing the task as well as we know how, that we lack insight into how to improve, and that we therefore keep adding computational power because, well, it’s all there is.

Effective altruists often comment on the limitations of legibility. How can we value improvements we cannot measure? How can we attack problems we struggle to articulate? What if traditional ML is not the path to AGI and we cannot enumerate alternatives?

And so we advocate for neglected approaches. We fund neurotech development to better understand the nature of thinking, learning, intention, and agency, in the hope that we might better address the limitations of ML as a path to AGI and alignment.

But most importantly, we accept the fact that, in so many ways, no one knows anything about machine learning right now. The focus of our time and dollars should reflect this humility, rather than doubling and quadrupling down on the idea that we do!

1 Inability to multitask, inability to communicate thoughts to our future selves, loss of insight when a train of thought is interrupted, poor transcription fluency (the ability to convey what we truly mean in words), and susceptibility to biases and manipulation.

2 Just me? Ok, fine.

3 My wife loves them, and I have discovered that the keys to a successful marriage begin with ensuring that she has coffee brewing before she awakens in the morning and ample access to goodies from Trader Joe’s. If I ever become a couples therapist (and my wife actually is one, so she knows just how ridiculous that sounds), that will basically be my first session - “have you tried buying snacks for them from Trader Joe’s?”

4 You know that some Jewel-Osco knock-off is wholly unacceptable. I mean, maybe you don’t, but dammit, you should!
