The CBC (Canadian Broadcasting Corporation) reported that relaxed restrictions on ChatGPT’s image generation could facilitate the creation of political deepfakes. According to the report, circumventing ChatGPT’s guidelines on depicting public figures is straightforward, and the tool itself suggests ways to bypass its own image generation rules. Mashable demonstrated this by uploading images of Elon Musk and convicted sex offender Jeffrey Epstein, labeling them as fictional characters, and placing them in settings like “at a dark smoky club” or “on a beach drinking piña coladas.”
Political deepfakes are not a new phenomenon, but the widespread availability of AI models that can generate realistic images, video, audio, and text has significant implications. The prospect of such tools fueling political disinformation campaigns raises questions about OpenAI’s responsibility in this space, a responsibility that may be compromised as AI companies race to grow their user bases.
Hany Farid, a digital forensics expert and professor of computer science at UC Berkeley, commented on the competitive landscape in AI, saying that OpenAI started out with strong safeguards but that its competitors did not follow suit, so OpenAI scaled those safeguards back to stay competitive. When OpenAI announced GPT-4o native image generation for ChatGPT and Sora, it also signaled a more lenient approach to safety. OpenAI CEO Sam Altman described the aim as giving users intellectual freedom, with the company monitoring how that approach plays out.
The CBC’s Nora Young tested the tool’s updated safety measures by requesting an image of politician Mark Carney with Epstein; direct prompts were refused. But by uploading separate images of Carney and Epstein and labeling them as fictional characters, she got the system to generate the requested image. Similarly, ChatGPT let Young create a fake selfie of Indian Prime Minister Narendra Modi and Conservative Party leader Pierre Poilievre by framing them as fictional characters inspired by the real men.
Although the AI-generated images initially looked unrealistic, Mashable noted that tweaking the image description, for instance specifying filming conditions such as “captured by CCTV footage,” yielded more lifelike results. With small adjustments like these, it would be easy to produce photorealistic images capable of deceiving viewers.
An OpenAI spokesperson confirmed that built-in guardrails block extremist propaganda and other harmful content, with additional restrictions on images of political figures. OpenAI’s policies also prohibit using ChatGPT for political campaigning, and public figures can request not to appear in generated images by submitting an online form.
The development of AI technologies has outpaced regulation, leaving governments scrambling to pass laws that protect against AI-fueled disinformation, while companies argue that excessive regulation would stifle innovation. Farid stressed that safeguards need to be mandatory and regulated rather than voluntary.