OpenAI recently ran into trouble training its chatbot, rolling back an update to ChatGPT because of its overly sycophantic responses. In trying to make the chatbot more intuitive, the company had given it a default personality that was excessively supportive and insincere. OpenAI acknowledged the issue, stating that such interactions could be unsettling, and says it is working on improvements.
Beyond this, OpenAI faces a larger task: establishing a trustworthy company persona. Having reversed its plan to become a conventional for-profit entity, OpenAI is now converting its commercial arm into a public benefit corporation while keeping it under the control of the non-profit board.
This decision, however, neither alleviates internal tensions nor satisfies co-founder Elon Musk, who has sued over concerns that the company is deviating from its original mission. OpenAI is caught between hastening AI deployment to please investors and adhering to a scientific approach aligned with its humanitarian goals.
The company, originally founded in 2015 as a non-profit dedicated to advancing artificial general intelligence (AGI) for humanity’s benefit, has seen its mission evolve. CEO Sam Altman recognized the need for substantial capital to maintain AI research leadership. This led to the creation of a for-profit subsidiary in 2019. The popularity of ChatGPT attracted significant investment, with OpenAI recently valued at $260 billion and boasting 500 million weekly users.
Altman, who experienced a brief ousting by the non-profit board in 2023, now envisions creating a “brain for the world,” requiring potentially enormous investment. Despite such ambitions, critics point to the company’s lack of a viable business model, noting a $9 billion expenditure and a $5 billion loss last year.
The definition of AGI is also shifting. Traditionally, it referred to machines surpassing human abilities across a wide range of cognitive tasks. Altman, however, has recently accepted a narrower definition: an autonomous coding agent capable of writing software on par with humans. Major AI companies believe they are nearing AGI, a conviction reflected in their hiring: according to Zeki Data, AI firms have slowed recruitment as AI agents can now perform many tasks previously done by humans.
A research paper from Google DeepMind has highlighted the risks of advanced AI, including misuse, misalignment, mistakes, and unpredictable interactions between AI systems. As models grow more powerful, it urges developers to exercise caution in deploying them.
The governance of AI companies extends beyond corporate boards and investors, involving broader societal considerations. OpenAI’s current governance remains a concern amidst its pursuit of AGI, suggesting that internal challenges will persist as this goal approaches.