Oxford-based startup Aligned AI claims to have made a breakthrough in AI safety that could improve the reliability of self-driving cars, robots, and other AI products. The company has developed an algorithm called ACE (Algorithm for Concept Extraction) that allows AI systems to form more sophisticated associations, closer to human concepts. Whereas current AI systems often draw spurious correlations from their training data, ACE aims to avoid such misgeneralizations. This advance could help prevent disasters like the fatal 2018 accident involving an Uber self-driving car, in which the system failed to recognize a pedestrian crossing outside of a crosswalk.
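The failure mode described above can be sketched in a toy example. The code below is purely illustrative (it is not Aligned AI's code, and the feature names are hypothetical): a rule that fits the training data perfectly can still absorb a spurious correlation, here that pedestrians only ever appear inside crosswalks.

```python
# Toy illustration of a spurious correlation learned from training data.
# Each example: (person_shape_detected, in_crosswalk) -> is_pedestrian
training_data = [
    ((True,  True),  True),   # person in a crosswalk -> pedestrian
    ((False, True),  False),  # empty crosswalk
    ((False, False), False),  # empty road
]

def learned_rule(features):
    # Fits the training set perfectly, but has absorbed the spurious
    # requirement that pedestrians only appear inside crosswalks.
    person, crosswalk = features
    return person and crosswalk

def intended_rule(features):
    # The concept a human would form: a person is a pedestrian,
    # crosswalk or not.
    person, _ = features
    return person

# Both rules agree on every training example...
assert all(learned_rule(x) == y for x, y in training_data)
assert all(intended_rule(x) == y for x, y in training_data)

# ...but diverge on a person crossing outside a crosswalk.
jaywalker = (True, False)
print(learned_rule(jaywalker))   # False: the misgeneralization
print(intended_rule(jaywalker))  # True
```

Nothing in the training data distinguishes the two rules, which is why the misgeneralization only surfaces on out-of-distribution inputs.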
Aligned AI sees potential applications for ACE in areas such as robotics and content moderation on social media platforms. The algorithm's ability to generalize knowledge learned in simulated environments to new real-world scenarios could let robots perform tasks without constant retraining. The startup demonstrated ACE in the video game CoinRun, where it outperformed previous AI agents at identifying the correct objective. Ultimately, the company aims for "zero-shot" learning, in which an AI system identifies the correct objective the first time it encounters new data.
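The CoinRun result refers to a well-known goal-misgeneralization setup: in training levels the coin always sits at the far end, so an agent can score perfectly by simply moving right while ignoring the coin. The sketch below is a minimal, hypothetical gridworld version of that failure, not Aligned AI's experiment or the ACE algorithm itself.

```python
# Minimal gridworld sketch of goal misgeneralization (CoinRun-style).
def run_episode(policy, coin, width=10, height=3, max_steps=20):
    """Run one episode; return True if the agent reaches the coin."""
    pos = [0, 0]
    for _ in range(max_steps):
        dx, dy = policy(tuple(pos), coin)
        pos[0] = max(0, min(width - 1, pos[0] + dx))
        pos[1] = max(0, min(height - 1, pos[1] + dy))
        if tuple(pos) == coin:
            return True
    return False

def go_right_policy(pos, coin):
    # What an agent can learn when every training level puts the coin
    # at the far right of the ground row: ignore the coin, move right.
    return (1, 0)

def coin_seeking_policy(pos, coin):
    # The intended objective: move toward the coin wherever it is.
    dx = (coin[0] > pos[0]) - (coin[0] < pos[0])
    dy = (coin[1] > pos[1]) - (coin[1] < pos[1])
    return (dx, dy)

# Training distribution: coin at the right edge -- both policies succeed,
# so reward alone cannot tell them apart.
assert run_episode(go_right_policy, coin=(9, 0))
assert run_episode(coin_seeking_policy, coin=(9, 0))

# Shifted test distribution: coin moved off the ground row -- the
# "go right" policy never reaches it, exposing the wrong objective.
print(run_episode(go_right_policy, coin=(4, 2)))     # False
print(run_episode(coin_seeking_policy, coin=(4, 2))) # True
```

"Zero-shot" objective identification, in this framing, means behaving like the coin-seeking policy on the shifted levels despite having only ever trained on the original ones.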
Aligned AI is currently seeking funding and has a patent pending for the ACE algorithm. ACE could also improve the interpretability of AI systems by letting developers see what objective the software is pursuing. The long-term goal is to combine ACE with language models so that objectives can be expressed in natural language.