The Future. Following in the footsteps of rivals like Google and Meta, OpenAI announced several new guidelines for how its tools can be used in the run-up to this year’s global elections. But with the changes still rolling out, it’s possible that AI-generated misinformation could go viral before a red flag is ever raised.
OpenAI is getting serious about how its tools could negatively impact upcoming elections in the US, Mexico, India, Indonesia, and South Africa.
- Users are no longer allowed to create deepfakes of “real people” (e.g., candidates or elected officials) or “institutions” (e.g., local governments or state agencies).
- The firm’s tools can’t be used for campaigns or lobbying, including chatbots or other apps — meaning no one can pull a deepfake stunt like the one South Korea’s Yoon Suk-yeol used to win the presidency.
- And making apps that aim to disrupt the democratic process or even discourage voting is now a big no-no.
- OpenAI is also embedding AI-generated images with digital credentials developed by the Coalition for Content Provenance and Authenticity (C2PA), which makes it easier for people to check if an image was manipulated.
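C2PA credentials are typically embedded in a JPEG as a JUMBF box inside an APP11 segment. As a rough illustration only (not an official verifier — real validation means checking the manifest's cryptographic signatures with a tool like the C2PA project's `c2patool` or SDK), here is a minimal sketch that walks a JPEG's segment markers and reports whether an APP11 container is present at all:

```python
def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP11 (0xFFEB) segment,
    the marker segment where C2PA manifests (JUMBF boxes) are
    typically stored. Presence alone proves nothing about validity."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            return False  # malformed marker stream; bail out
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        if marker == 0xEB:  # APP11: candidate C2PA/JUMBF container
            return True
        # segment length is big-endian and includes its own two bytes
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + seg_len
    return False
```

A missing APP11 segment doesn't mean an image is authentic, and a present one doesn't mean it's AI-generated — metadata is easily stripped or forged, which is why the credential's signed manifest is what actually matters.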
And to avoid the potential for ChatGPT hallucinations, OpenAI will re-route voting questions in the US to CanIVote.org (which is run by the National Association of Secretaries of State) and attribute other election-adjacent information to news outlets or provide a menu of links to visit.
It’s clear that OpenAI really doesn’t want to end up in the post-election hot seat the way Facebook did.