ChatGPT rejected more than 250,000 image generations of presidential candidates prior to Election Day
Didem Mente | Anadolu | Getty Images
OpenAI estimates that ChatGPT rejected more than 250,000 requests to generate images of the 2024 U.S. presidential candidates in the lead-up to Election Day, the company said in a blog post on Friday.
The rejections included image-generation requests involving President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz and Vice President-elect JD Vance, OpenAI said.
The rise of generative artificial intelligence has led to concerns about how misinformation created using the technology could affect the many elections taking place around the world in 2024.
The number of deepfakes has increased 900% year over year, according to data from Clarity, a machine learning firm. Some included videos that were created or paid for by Russians seeking to disrupt the U.S. elections, U.S. intelligence officials say.
In a 54-page October report, OpenAI said it had disrupted “more than 20 operations and deceptive networks from around the world that attempted to use our models.” The threats ranged from AI-generated website articles to social media posts by fake accounts, the company wrote. None of the election-related operations were able to attract “viral engagement,” the report noted.
In its Friday blog post, OpenAI said it hadn’t seen any evidence that covert operations aiming to influence the outcome of the U.S. election using the company’s products were able to successfully go viral or build “sustained audiences.”
Lawmakers have been particularly concerned about misinformation in the age of generative AI, which took off in late 2022 with the release of ChatGPT. Large language models are still new and routinely spit out inaccurate and unreliable information.
“Voters categorically should not look to AI chatbots for information about voting or the election — there are far too many concerns about accuracy and completeness,” Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, told CNBC last week.