Tech Giants Aid Harmful AI “Nudify” Websites

Major technology companies, including Google, Apple, and Discord, have unknowingly helped users access harmful websites that generate fake nude images using AI. These “nudify” websites allow users to upload real photos, which are then altered to make the person appear nude without their consent. This activity is a form of deepfake abuse that mostly targets women and girls.

Image: sign-in block on an AI undress website

According to WIRED, at least 16 of these sites have been using single sign-on systems from Google, Apple, Discord, and other tech platforms. This login infrastructure makes it easy for users to sign up on the harmful websites and lends the process a veneer of legitimacy. With an account, users can purchase credits and generate fake nude images of real people.
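
To illustrate what that "single sign-on" integration involves, here is a minimal sketch of the standard OAuth 2.0 flow a site embeds behind a "Sign in with Google" button. The client ID and redirect URI are hypothetical placeholders, not values from any real site; the authorization endpoint shown is Google's publicly documented OAuth 2.0 endpoint, and Discord's and Apple's sign-in APIs follow the same general pattern.

```typescript
// Hypothetical placeholders for illustration only.
const CLIENT_ID = "EXAMPLE_CLIENT_ID.apps.googleusercontent.com";
const REDIRECT_URI = "https://example.com/auth/callback";

// Build the standard Google OAuth 2.0 authorization URL.
// Clicking a "Sign in with Google" button sends the visitor to this URL;
// Google authenticates them and returns an authorization code to REDIRECT_URI,
// which the site then exchanges for the user's identity.
function buildGoogleSignInUrl(): string {
  const params = new URLSearchParams({
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    response_type: "code",
    scope: "openid email profile",
    state: crypto.randomUUID(), // anti-forgery token the site verifies on return
  });
  return `https://accounts.google.com/o/oauth2/v2/auth?${params.toString()}`;
}

console.log(buildGoogleSignInUrl());
```

Because the identity provider handles the actual authentication, a site that embeds this flow needs only a registered developer account and a few lines of code, which is why revoking those developer accounts is the lever Apple and Discord ultimately pulled.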

The abuse has grown significantly with the rise of generative AI. Teens are allegedly using these tools to target classmates, creating fake images to bully and harass them. Criticism has been directed at these tech companies for not acting quickly enough to address the issue, despite their policies against harm and harassment. The websites are often promoted through search engines and social media ads, making them widely accessible.

After WIRED’s investigation, both Discord and Apple began terminating developer accounts associated with these sites. Google stated it would take action if its terms of service were violated, though the harm was already extensive.

Legal Action and Website Networks

Some of these deepfake websites are part of a larger network, often operated by the same individuals or companies. They run like businesses, even charging users for image creation and offering multiple languages, highlighting the global nature of the problem. Some sites also promote affiliate schemes to spread their reach further.

San Francisco’s city attorney, David Chiu, recently filed a lawsuit against some of these websites, claiming they had about 200 million visits in just six months. He described their actions as sexual exploitation and abuse, with the content being used to threaten and humiliate women worldwide.

The people behind these websites have responded by claiming that they are aware of the potential harm and are trying to prevent the creation of images involving minors. However, these claims have done little to curb the widespread abuse.

Technology’s Role

While the websites primarily use Google’s login systems, Discord and Apple were also heavily involved: Discord’s sign-in API was found on 13 sites, and Apple’s on six. The companies acted quickly once informed by WIRED, with Discord disabling access to its sign-in system for these websites and Apple terminating multiple developer licenses. Google, Discord, and Apple all have strict policies against using their platforms for abusive purposes, but enforcement was lacking until media exposure prompted action.

This situation raises concerns about how easily technology can be misused when companies don’t proactively monitor for violations. While it’s unclear how many people have used these sign-in systems, critics say Big Tech is enabling the growth of such harmful content by not taking stronger, earlier steps to prevent its spread.

Rody
AI-lover