Organizations and individuals generate an immense amount of data. How we decide to utilize and protect that data shapes the future of technology and society. Techniques like machine learning and AI have revolutionized how we think about healthcare, medicine, transportation, financial services, and which television show we plan to binge this weekend. We believe in making data maximally useful to benefit society and increase human agency. We believe that privacy should be central to that consideration. Technology designed to empower users, corrupted by the wrong financial incentives, can ultimately diminish their agency. At AE, we are focused on researching and developing best privacy practices for unlocking the potential of your private data on your terms.
We are researching, building, and testing privacy-enhancing techniques to protect sensitive user data. We are creating and implementing the tools needed to use that data responsibly.
1. Researching state-of-the-art, privacy-preserving machine learning techniques. All of the utility, none of the nefarious profit-seeking with other people’s data.
2. Applying privacy-enhancing technologies such as federated learning, differential privacy, and homomorphic encryption to novel applications such as brain-computer interfaces (BCIs). Ensuring that emerging technologies are built responsibly from the start…or not at all.
3. Investigating and testing machine learning techniques that are interpretable. We don’t like trusting black boxes any more than you do.
4. Contributing to open-source software in this field. By supporting each other, we can bring these technologies into the mainstream.
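To make one of the techniques above concrete, here is a minimal sketch of the Laplace mechanism, the classic building block of differential privacy. This is an illustrative toy, not AE's production code; the `dp_count` helper and the heart-rate readings are hypothetical.

```python
import random

def dp_count(values, threshold, epsilon):
    """Return a differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    scale = 1.0 / epsilon
    # Laplace(0, scale) sampled as the difference of two exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Hypothetical example: a noisy count over mock heart-rate readings.
readings = [62, 71, 88, 95, 103, 110]
noisy = dp_count(readings, threshold=100, epsilon=1.0)
```

The released value is close to the true count, but the calibrated noise means no single person's reading can be confidently inferred from it; a smaller epsilon adds more noise and gives stronger privacy at the cost of accuracy.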
At AE, we believe that when it comes to data, you should be able to have your cake and eat it too. We believe that sensitive and private data should remain private – full stop. We also believe that, even with that constraint, users should be able to benefit from technologies and tools that increase human agency. For example, medical devices can record signals from the body that provide unique insight into disease and dysfunction, or unlock new functionality. But building tools with machine learning should not require closing your eyes and clicking “Accept all” on 100 pages of inscrutable terms and conditions.
1. We want to protect your data. In an age where storing and processing mountains of data is the norm, we ensure that your data remain private.
2. We want to increase opportunities to collaborate. Collaboration among medical, research, and financial institutions comes with red tape to protect data. At AE, we build the tools to cut that tape responsibly.
3. We want state-of-the-art tools in machine learning to benefit individuals. Personalized medicine? Yes, please. BCI? Absolutely. Building those tools cannot require forsaking data privacy.
4. We want to think long term. Maximizing long-term good requires recognizing where technology often leads, and considering how incentives and deadlines will challenge our best intentions if we fail to address these issues now.
We maintain independence. AE exists without funding from outside shareholders, venture capital, or private equity (Google’s founders never wanted ads, but their investors did). AE is a fully bootstrapped business that has grown from 0 to ~150, with agency-increasing BCI as its ambitious Big, Hairy, Audacious Goal. We are a group of experts in machine learning, neuroscience, software development, and user-focused design, and we develop technology that increases human agency rather than harvesting user data as a product.

We are growing a profitable, longtermist software company; training the best developers, designers, and engineers on the planet (or any other); building agency-increasing products for our clients and ourselves; and investing in altruistic, agency-increasing BCI initiatives. We fund internal skunkworks projects that increase human agency and can become viable businesses in their own right. We follow our core values of increasing human agency rather than pursuing short-term financial incentives, and we take acceleratingly large A/B-tested baby steps, recognizing that we are only 1% of what we could be.
Schedule a call or fill out this form: