At AE Studio, we specialize in tackling the most ambitious and important challenges we can get our hands on by focusing on neglected approaches with high potential impact.
Our journey didn't begin with AI alignment. We first applied a similar neglected-approaches strategy to Brain-Computer Interfaces (BCI): we started by bootstrapping a sustainable software consulting business, developing our own startups, and reinvesting the proceeds into BCI research. This approach quickly led us to collaborations with leading scientists and companies (like Forest Neurotech and Blackrock Neurotech), pushing the boundaries of brain-computer interaction faster than we'd imagined possible.
Today, we're a team of about 160 talented individuals (programmers, product designers, and data scientists) united by our mission to increase human agency. We're profitable, growing, and guided solely by our own standards of excellence.
Our success stems from treating our clients' businesses as if they were our own startups. This mindset has propelled us further than we initially envisioned when we first developed this ambitious plan.
As the world evolves, so do we. With shortening AGI timelines and so much still to be done in AI alignment, we realized that our unique process could be applied to reducing existential risk from AI. Accordingly, we're now applying our expertise and business model in neglected approaches to one of the most critical challenges of our era: AI alignment. Though we're still in the early phases of our work, we're excited about what we've already accomplished, including our first AI alignment client, Goodfire AI, which builds interpretability tooling for safe and reliable generative AI models.
You can contact us here.
At AE Studio, we have the expertise and the right incentives to tackle the alignment problem. Unlike many other organizations, we aren't driven by a financial incentive to expedite AGI development: our profitable consulting business allows us to explore areas that don't necessarily attract significant research funding or other sources of revenue.
At AE, we believe the space of plausible alignment research directions is vast and largely unexplored. Our 'Neglected Approaches' strategy focuses on pursuing a diverse set of promising but overlooked approaches to AI alignment.
Key aspects of our approach span both technical research and policy. Our technical work includes reverse-engineering prosociality, BCI-enhanced alignment research, and other innovative directions. Complementing this, we're also engaged in neglected AI policy initiatives: advocating for increased alignment funding, exploring ways to empower whistleblowers in the AI industry, and working to bridge political divides in AI safety discussions. These policy efforts aim to create a more favorable environment for responsible AI development and effective alignment research.
Our goal is to ensure superintelligent AI systems don't pose existential risks while increasing human agency and flourishing. We aim to use our proven project-success structures to bring more experts into the field, reducing the talent gap, and to rapidly develop impactful ideas into fully testable implementations.
By pursuing these neglected approaches, we aim to contribute unique insights to AI alignment, tackling this critical challenge from multiple, often overlooked angles in both technical and policy domains.
Despite being relatively new to the field of AI alignment, we've already made several contributions that we're excited about.
Our original theory of change involved enhancing human cognitive capabilities to address challenges like AI alignment. While we're now exploring multiple approaches to AI safety, we continue to see potential in BCI technology. If AI-driven scientific automation progresses safely, we anticipate increased investment in BCI research. We're also advocating for government funding to be directed towards this approach, as it represents an opportunity to augment human intelligence alongside AI development.
While our emphasis has shifted towards AI alignment, our BCI work remains an important part of our mission to enhance human agency.
As we face increasingly urgent AGI timelines, we are intensifying our efforts to identify the most impactful paths forward. Our goal remains to leverage BCI technology to enhance human intelligence, ultimately contributing to solving the alignment problem. While our precise alignment x BCI strategy is still being internally debated and refined, we are committed to ambitious initiatives that push the boundaries of what BCI can achieve. We believe that with substantial funding directed toward AI alignment and BCI research, we can make significant strides. Moreover, in a future where scientific automation progresses safely, we aim to rapidly advance BCI technology to empower humans in tackling alignment challenges more effectively.
You can contact us here.