In the lead-up to the 2024 U.S. presidential election, OpenAI’s ChatGPT rejected more than 250,000 requests to generate images of candidates, an effort to curb misuse of generative AI that could spread misinformation. The blocked requests included images of prominent figures such as President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, and Vice President-elect JD Vance, OpenAI revealed in a recent blog post.
As generative AI technology rapidly evolves, its potential for creating deepfakes has prompted mounting concerns. In 2024 alone, deepfake content surged by 900% year-over-year, according to data from Clarity, a machine learning firm specializing in threat detection. Many deepfake videos have been flagged by U.S. intelligence as foreign attempts to interfere with American elections, with some content reportedly linked to Russian operations aimed at undermining trust in the electoral process.
To counter these threats, OpenAI has deployed proactive monitoring and enforcement measures. In a detailed October report, the company disclosed that it had disrupted more than 20 operations attempting to use AI-generated media to influence political outcomes around the world. These activities ranged from AI-generated articles to social media posts crafted by fake accounts, each intended to sway public opinion or mislead voters. OpenAI reported that none of the identified operations managed to reach viral status or build significant online followings.
Although OpenAI has worked to block certain content, lawmakers and experts remain concerned about the reliability of AI-generated information. Since the launch of ChatGPT in 2022, public adoption of large language models has soared, yet they are still prone to producing inaccurate or misleading content. “Voters categorically should not look to AI chatbots for information about voting or the election,” warned Alexandra Reeve Givens, CEO of the Center for Democracy & Technology. Givens stressed that accuracy and transparency remain significant issues for generative AI in political contexts, where even subtle inaccuracies can spread rapidly and influence public opinion.
Legislators have also begun scrutinizing the role of generative AI in democratic processes, with some considering policy changes to address AI-driven misinformation. With foreign and domestic actors alike exploring how AI could be weaponized to misinform, the need for accountability and transparency in AI technology has taken on new urgency. In response, tech companies such as OpenAI, Meta, and Google are rolling out policy updates aimed at safeguarding election integrity, including clearer labeling of AI-generated content and stricter user guidelines.
Beyond U.S. elections, generative AI’s influence is a global concern as elections take place around the world. The spread of misleading AI-driven content in politically volatile regions has highlighted the need for stronger global standards and collaboration. OpenAI’s ongoing efforts to limit the misuse of its technology have made strides, but the exponential growth of deepfakes suggests that the battle is far from over.
In the future, AI companies, policymakers, and social media platforms may need to work in concert to develop comprehensive frameworks that curb the misuse of generative AI. Without such initiatives, the increasing sophistication of AI tools could pose growing risks to electoral integrity and to the reliability of information worldwide.
(Adapted from Reuters.com)