Actually, much sooner than we originally expected.
First, we have identified artificial general intelligence (AGI) as one of the greatest threats to the future of human agency, and the expected timeline to AGI is shorter than we initially anticipated. Second, our original plan worked better and faster than we expected: the consulting business has scaled to 200 people, and we have completed a successful startup exit. Third, our progress with BCI has already surpassed the initial long-term goals outlined in our original theory of change.
We had already achieved the main goals of our original plan: building a scaling business and surpassing our initial long-term BCI goals. Now, with AGI imminent, AI alignment has become a vital part of that mission; after all, we want to make sure the human part of human agency still exists.
Our theory of change now centers on promoting greater agency for all intelligent beings by aligning AGI systems with human interests.
At AE Studio, we have the expertise and the right incentives to solve the problem. Unlike many organizations, we are not driven by a financial incentive to expedite AGI development. Our profitable consulting business lets us pursue areas that don't necessarily attract significant research funding or other sources of revenue.
Our business model allows us to focus on neglected approaches that may have a low probability of success but nevertheless could actually solve the problem.
One such neglected approach is creating prosocial AI that is more performant because of its prosociality, which we believe could drive superintelligent systems to be aligned with humans. We want to tackle this problem from multiple angles, since that increases our chances of solving it, so we're also actively looking for other similarly neglected approaches.
We also want to steer neurotechnology, in particular brain-computer interfaces, to enhance human capacity and enable us to develop aligned superintelligent systems. We're exploring other neurotechnology-related areas as well, such as cyborgism and whole-brain emulation.
At AE Studio, we strive to empower innovators and scientists to create the next generation of neurotechnology and responsible AI that enhances human agency. We are here to provide support, resources, and open-source software to the brightest and most creative minds in the community. One recipient of AE Grants is Professor Michael Graziano of Princeton, who is collaborating with AE data scientists to investigate whether RL agents that implement his attention schema theory exhibit superior learning capabilities and a tendency toward more prosocial behavior in multi-agent scenarios. Other recipients of AE Grants include Joel Saarinen, Jaeson Booker, and Steve Petersen, who have also used these resources for alignment projects.
AE has also been highly involved in contributing to and open-sourcing high-impact software for the development of neurotechnology, such as:
We have also recently started accepting grant money ourselves. One such grant supports the work of Marc Carauleanu, a member of our alignment team, who is developing fine-tuning techniques to reliably induce self-other overlap in AI agents as a means of fostering prosocial behavior.
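As a heavily simplified sketch of the general intuition behind this line of work (our own illustration, not Marc's actual technique), one can imagine an auxiliary loss that penalizes the distance between a model's internal activations when it processes self-referential versus other-referential versions of an input, so that "self" and "other" come to be represented similarly. The function names and the mean-squared-error penalty below are illustrative assumptions:

```python
# Toy sketch of a "self-other overlap" auxiliary loss. All names and the
# choice of mean-squared-error penalty are illustrative assumptions, not
# the actual fine-tuning method used by the alignment team.

def mean_squared_distance(self_activations, other_activations):
    """Distance between activations on self- vs. other-referential inputs."""
    return sum(
        (s - o) ** 2 for s, o in zip(self_activations, other_activations)
    ) / len(self_activations)

def total_loss(task_loss, self_activations, other_activations, overlap_weight=0.1):
    """Ordinary task loss plus a penalty for self/other representational divergence.

    Minimizing the penalty pushes the model to represent "self" and "other"
    similarly; the hypothesis is that this overlap fosters prosocial behavior.
    """
    penalty = mean_squared_distance(self_activations, other_activations)
    return task_loss + overlap_weight * penalty

# Identical activations incur no penalty; divergent activations raise the loss.
print(total_loss(0.5, [1.0, 2.0], [1.0, 2.0]))  # -> 0.5 (no penalty)
print(total_loss(0.5, [0.0, 0.0], [2.0, 2.0]))  # task loss plus weighted penalty
```

In an actual fine-tuning setup, this combined loss would be minimized with gradient descent, trading off task performance against self-other overlap via the weight term.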
We're proud of our past accomplishments: a flourishing consulting business with partnerships with world-leading organizations, successful internally developed startups, and pioneering work in neurotechnology. Our teams excel at tackling complex technical challenges across industries, delivering valuable solutions for our partners.
Our journey is fueled by a team of passionate and skilled individuals with a collective love for cutting-edge science and technology. Guided by a growth mindset and a commitment to effective communication, we're optimistic about our ability to improve humanity's chances.