Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis

by Anna Avery


Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to have been generated without the consent of the women in the photos. Some of these same users are also advising others on how to use generative AI tools to strip the clothes off women in photos and make them appear to be wearing bikinis.

Under a now-deleted Reddit post titled “gemini nsfw image generation is so easy,” users traded tips for getting Gemini, Google’s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI-generated, but one request stood out.

A user posted a photo of a woman wearing an Indian sari, asking for someone to “remove” her clothes and “put a bikini” on instead. Someone else replied with a deepfake image to fulfill the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit’s safety team removed the request and the AI deepfake.

“Reddit’s sitewide rules prohibit nonconsensual intimate media, including the behavior in question,” said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform’s “don’t break the site” rule.

As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites, designed for users to upload real photos of people and request for them to be undressed using generative AI.

With xAI’s Grok as a notable exception, most mainstream chatbots don’t usually allow the generation of NSFW images in AI outputs. These bots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful generations.

In November, Google released Nano Banana Pro, a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated imaging model, ChatGPT Images.

As these tools improve, the likenesses produced when users manage to subvert guardrails may become even more realistic.

In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someone’s outfit to make the subject’s skirt appear tighter. In WIRED’s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English.
