Consumer advocacy group urges OpenAI to pull video app Sora over privacy, misinformation concerns
Non-profit consumer advocacy group Public Citizen demanded in a Tuesday letter that OpenAI withdraw its video-generation software Sora 2 after the application sparked fears about the spread of misinformation and privacy violations.
The letter, addressed to the company and CEO Sam Altman, accused OpenAI of hastily releasing the app so that it could launch ahead of competitors.
The release reflects a “consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails,” the watchdog group said.
Sora 2, the letter says, shows a “reckless disregard” for product safety and people’s rights to their own likeness. It also contributes to the broader undermining of the public’s trust in the authenticity of online content, it argued.
The group also sent the letter to the U.S. Congress.
OpenAI didn’t immediately respond to a request for comment Tuesday.
More responsive to complaints about celebrity content
The typical Sora video is designed to be amusing enough for you to click and share on platforms such as TikTok, Instagram, X and Facebook.
It could be the late Queen Elizabeth II rapping or something more ordinary and believable. One popular Sora genre depicts fake doorbell camera footage capturing something slightly uncanny — say, a boa constrictor on the porch or an alligator approaching an unfazed child — and ends with a mildly shocking image, such as a grandma shouting as she beats the animal with a broom.
Listen | The Current (24:17): The new AI video app Sora is here: Can you tell what’s real?
Whether it’s your best friend riding a unicorn, Michael Jackson teaching math, or Martin Luther King Jr. dreaming about selling vacation packages, it’s now easier and faster to turn those ideas into realistic videos using the new AI app, Sora. The company behind it, OpenAI, promises guardrails to prevent violence and fraud, but many critics worry that the app could push misinformation into overdrive… and pollute society with even more “AI slop.”
Public Citizen joins a growing chorus of advocacy groups, academics and experts raising alarms about the dangers of letting people create AI videos based on just about anything they can type into a prompt, leading to the proliferation of non-consensual images and realistic deepfakes in a sea of less harmful “AI slop.”
OpenAI has cracked down on AI creations of public figures doing outlandish things — among them, Michael Jackson, Martin Luther King Jr. and Mister Rogers — but only after an outcry from family estates and an actors’ union.
“Our biggest concern is the potential threat to democracy,” said Public Citizen tech policy advocate J.B. Branch in an interview.
“I think we’re entering a world in which people can’t really trust…