OpenAI’s ‘Sora’ releases with a slew of ethical concerns

Deep-fake content generated using Sora looks practically identical to its real-world subjects. | Photo via Jacob Smith

Kennedy Owens
Staff Writer

On Sept. 30, OpenAI unleashed the latest terrifying advancement in generative AI technology. Sora is a new social media app meant solely to host AI-generated videos.

Users can post cameos of themselves or others up to 60 seconds long, then engage with other users’ creations, much like on most other social media apps. For the initial roll-out, the platform is invite-only and available on iOS and PC, with an Android release currently in the works.

Through the app’s cameo system, users can create deep-fakes of their own faces or of OpenAI CEO Sam Altman. While the service claims to prohibit sexual and violent content, as well as the use of others’ photos without explicit permission, some users have been able to make content using the images of dead celebrities and historical figures.

An example of this is the slew of AI-generated videos depicting ex-Viner and independent boxer Jake Paul in effeminate clothing as he raves about popular makeup brands. In a video released Oct. 8, Paul jokingly threatened legal action against fans, claiming the videos cost him relationships, his sanity and numerous business endeavors. 

“I’m gonna be suing everybody that is continuing to spread these false narratives of me doing things that I would literally never ever do,” Paul said while pretending to apply makeup.

While Paul might be on board with his likeness being used in this way, others may not be as willing. In an article by technology magazine PCMag, OpenAI shed some light on how they handle their users’ data. 

“We’ll retain your personal data for only as long as we need in order to provide our services to you, or for other legitimate business purposes such as resolving disputes, safety and security reasons, or complying with our legal obligations,” OpenAI said.

In the same article, OpenAI claimed that if you delete your account, your information will be deleted from the system after 30 days. However, your Sora account is linked to your ChatGPT account, meaning both profiles would get deleted. If you wanted to make another account, you’d have to sign up with an entirely new email and phone number.

OpenAI maintains this practice is in the best interest of users; the company updated its policies in September 2023 to prevent abuse and fraud schemes. Fortunately, it says it is working on these account management issues.

“We’re committed to giving people clear ways to manage their OpenAI accounts, and you can delete yours today,” the company said to PCMag. “We’re also working on a way to delete your Sora account separately, and we’ll let people know as soon as that’s available.”

Generative AI is still relatively new, and I would need to see more practical, responsible use before I completely embrace it. The industry has a lot of restructuring to do before it is properly regulated and the damage done to users and non-users alike is lessened.

Others seem to agree. According to a Pew Research study, “most Americans (76%) say it’s extremely or very important to be able to tell if pictures, videos and text were made by AI or people. But 53% of Americans are not too or not at all confident they can detect if something is made by AI versus a person.”

Their concerns aren’t unfounded. According to Forbes, “71% of social media images now are AI-generated,” and “deep-fake fraud attempts surge to 6.5% worldwide.” This is a huge increase from 0.01% in 2022 and puts deep-fakes among the three most popular methods of fraud.

With the growing accessibility of generative AI and AI-integrated technology, it’s important that methods exist to distinguish AI-generated visual media from human-made content. Around 2023, AI companies like Google and OpenAI began working with both the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council to make embedded metadata, essentially data that describes an image and its origin, an industry standard, so users can determine whether an image is AI-generated.

The problem is that screenshotting an image strips its metadata. When no metadata is available, a suggested fallback is to check when the image was first created or published: if it predates modern image generators, chances are it is human-made.
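To make the metadata approach concrete: C2PA provenance manifests are commonly embedded in a JPEG file’s APP11 marker segments. The sketch below is a simplified illustration, not a real validator; the function name and the synthetic test bytes are invented for this example, and a genuine check would parse the full C2PA manifest with dedicated tooling.

```python
import struct

def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP11 (0xFFEB) segment,
    where C2PA provenance manifests are typically embedded.
    A crude heuristic, not a full C2PA validator."""
    if jpeg_bytes[:2] != b"\xff\xd8":      # file must start with the SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:          # lost sync with the marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):         # EOI or start-of-scan: stop scanning
            break
        if marker == 0xEB:                 # APP11: likely provenance data
            return True
        # segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        i += 2 + length
    return False

# Tiny synthetic marker streams for demonstration (not real image data):
soi, eoi = b"\xff\xd8", b"\xff\xd9"
plain = soi + b"\xff\xe0" + struct.pack(">H", 4) + b"\x00\x00" + eoi
tagged = soi + b"\xff\xeb" + struct.pack(">H", 4) + b"\x00\x00" + eoi
print(has_app11_segment(plain))   # False
print(has_app11_segment(tagged))  # True
```

Note that a screenshot re-encodes the pixels into a fresh file, so even this crude check would come back empty, which is exactly the weakness described above.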

It’s important to research where the content you consume comes from. While Sora is open about its content being 100% AI-generated, other platforms aren’t as transparent. At best, the content you’re consuming is just memes or AI slop; at worst, you could be exposing yourself and others to misinformation.

With so few resources on how AI is affecting day-to-day life, it is important to remain cautious and not embrace the new technology wholesale until we have solid evidence of its long-term impacts.
