BCI: Winning the NLB Challenge was only the first step

The integration and augmentation of the human brain with technology are centuries old.  Entire courses are devoted to the study of how humans and technology evolve in tandem.1  For AE and many research labs around the world, this means brain-computer interface (BCI) research.  The Neural Latents Benchmark (NLB) Challenge (publication) was created as a forum in which participants deployed their algorithms on sets of neurophysiological data, primarily gathered from Utah arrays implanted in the brains of monkeys.

Back in November, we described our early lead in this competition.  Now, we are proud to announce our victory - check out the leaderboard!

As the title suggests, the path to using one’s computer with only the intentions embedded in human thought is time-consuming and complex.  The NLB challenge is the first 100 yards of a marathon.  While we are certainly thrilled to take the lead, what matters most is encouraging others to join that journey, collaborate, and, ultimately, ensure that human agency remains at the forefront of brain-computer interfaces.

To catalyze such collaborations, it’s important to understand what we have done well and where opportunities for improvement remain.  At AE, though we work closely with neuroscientists, our expertise lies in software engineering and data science.

We help startups build MVPs and later-stage startups raise their next round.  We help enterprise clients build best-in-class software.  This has taught us valuable lessons.  We are methodical, efficient, and well-versed in data science and software development best practices.  We make strategic decisions based on timelines like the competition’s.  We ingest new ideas, assess the state of the art2, organize our efforts, and execute.

As the name suggests, NLB is a benchmark for the state of the science.  Beyond the competition, the benchmark is a tool for gauging what our current understanding of neural activity can and cannot explain.

An unthinkably complex tangle of 80 billion neurons, each with thousands of connections, begs for simplifying patterns that can minimize the number of variables required to explain their activity3.  Our innovation was less about shrinking the dataset or locating explicit latent representations (brilliant minds already did this)4 and more about pulling together the insights of multiple existing models5 and tuning our mathematical instruments.6
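
For the technically curious, the “pulling together” of footnote 5 can be as simple as averaging the firing-rate predictions of several independently trained models.  The sketch below is illustrative only; the file names, model count, and uniform weights are assumptions, not a description of our actual submission.

```python
import numpy as np

# Hypothetical outputs of three separately trained models, each predicting
# firing rates with shape (trials, time_bins, neurons). The file names and
# model count are placeholders, not our actual submission.
preds = [np.load(f"model_{i}_rates.npy") for i in range(3)]

# The simplest form of ensembling: a weighted average of per-model predictions.
# Uniform weights are shown; in practice they could be set from validation scores.
weights = np.ones(len(preds)) / len(preds)
ensembled_rates = sum(w * p for w, p in zip(weights, preds))
```

In practice, choosing those weights, along with every other knob in the pipeline, is where footnote 6’s tuning comes in.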

AE applied excellent data science tools and techniques, chose an approach that suited the timeline, and posted the top scores.  This is encouraging for us, our clients, and the broader community.  However, this also leaves tremendous opportunity for continued fundamental research in neuroscience.  There are novel methods of extracting latent patterns we still wish to explore.  Stay tuned.

As is often the case, the bar is raised when those outside a given field lend their perspective.  The AE researchers who helped post the winning scores completed doctorates in quantum physics and computational data science.  These are brilliant minds and gifted data scientists, but not neuroscientists.  They were able to assimilate the best of that field, while offering something new of their own.  This is why we hope this competition is the first baby step along a rich intellectual and technological journey.

Together with collaborators, we aim to explore new methods and new models.  We will implement them better, push their limits, and assess where models succeed and fail.

This is the tip of a scientific iceberg, and we’re exploring academic and private partnerships to help us take this work to the next level.  We are futurists.  We are optimists. We are longtermists.  In the marathon that is BCI research, we aren’t at the halfway point, or even the first mile marker.  We lead after the first hundred yards.  Please join us along the twisting, winding road to increasing human agency.  And if you have a cup of water7, that’d be great.

AE thanks Joel Ye and Chethan Pandarinath for their work on Neural Data Transformers.  Their open-source, easy-to-use codebase was foundational for the work described above (in our repo).

1. The academics would refer to “posthumanism.”

2. We can hack up implementations of the latest and greatest papers with the best of ‘em!

3. The technical term is “dimensionality reduction.”

4. LFADS (Latent Factor Analysis via Dynamical Systems; see publication).  For general interest, read about Latent Factor Analysis.  Latent variables are those we cannot observe but rather infer (with a model) from other variables we can measure.  Factor analysis consists of finding observed variables whose shared variance is explained by a small number of unobserved (latent) factors.
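
As a toy illustration of that idea (plain factor analysis, not LFADS itself), the sketch below simulates activity of 100 “neurons” secretly driven by a few hidden factors and then recovers a low-dimensional description of it.  All dimensions and noise levels are arbitrary.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 100 "neurons" whose activity is secretly driven by 3 latent factors.
n_samples, n_neurons, n_latents = 2000, 100, 3
factors = rng.normal(size=(n_samples, n_latents))    # unobserved latent variables
loadings = rng.normal(size=(n_latents, n_neurons))   # how latents drive each neuron
observed = factors @ loadings + 0.5 * rng.normal(size=(n_samples, n_neurons))

# Factor analysis infers a low-dimensional latent description of the observations.
fa = FactorAnalysis(n_components=n_latents)
inferred = fa.fit_transform(observed)                # shape: (2000, 3)
```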

5. Ensembling: combining the predictions of several models.

6. The math folks in the crowd will recognize this as hyperparameter optimization.
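
In the same spirit, a minimal form of hyperparameter optimization is a random search over a small space of settings.  The train_and_score function below is a stand-in that fakes a validation score so the loop runs end to end; it is not our actual tuning pipeline.

```python
import random

def train_and_score(lr, dropout):
    """Placeholder for training a model with these settings and returning a
    validation score; here it fakes a score so the sketch runs end to end."""
    return -abs(lr - 1e-3) - abs(dropout - 0.3)

# Candidate values for each hyperparameter (illustrative only).
search_space = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "dropout": [0.1, 0.3, 0.5],
}

# Plain random search: sample configurations, keep the best-scoring one.
best_score, best_cfg = float("-inf"), None
for _ in range(20):
    cfg = {name: random.choice(values) for name, values in search_space.items()}
    score = train_and_score(**cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, best_score)
```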

7. Orange slices are also welcome.
