
OpenAI's move to allow adult content in ChatGPT triggers global ethical debate - The Korea Times


Experts warn of rising mental health risks, urge stronger safeguards for minors

The artificial intelligence (AI) industry is facing growing scrutiny over ethics and accountability after OpenAI announced plans to allow sexual conversations and adult content for verified adult users of ChatGPT starting in December.

The decision marks a turning point for the AI sector, which had long enforced strict bans on explicit material. OpenAI's shift, justified as a move toward "maturity" and user freedom, signals what observers call a "commercialization of boundaries," as companies relax content restrictions to boost profitability.

Given that ChatGPT accounts for more than 80 percent of the global chatbot market, the move is expected to have broad ripple effects across the industry.

OpenAI CEO Sam Altman said on X (formerly Twitter) on Oct. 14 that the company plans to let adult-verified users engage in "erotica for verified adults" in a forthcoming version of ChatGPT. "As part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," he said.

Altman said the company had previously made ChatGPT "pretty restrictive to be careful with mental health issues," but realized that this approach "made it less useful and enjoyable for many users." He added that OpenAI would now "apply the principle of treating adult users like adults."

The announcement triggered immediate backlash. Critics warned that age verification can easily be bypassed using shared or false information, making it difficult to prevent underage users from accessing explicit features.

Others expressed concern that prolonged sexual conversations with chatbots could lead to behavioral issues or compulsive sexual disorders, even among adults.

Altman dismissed the criticism, saying that OpenAI is "not the elected moral police of the world." He likened the company's approach to the movie rating system that restricts R-rated films to viewers over 17.

"In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here," Altman said. "We will still not allow things that cause harm to others, and we will treat users who are having mental health crises very differently from users who are not."

Financial pressure behind the decision

Observers say OpenAI's decision reflects mounting pressure to sustain profitability as user engagement slows. As recently as August, Altman said he was proud that the company had not added a "sex-bot avatar" to ChatGPT, stressing that OpenAI deliberately avoided features that might drive faster growth but conflict with its principles.

But with usage metrics dropping, analysts believe the company is now prioritizing paid subscriptions over moral caution.

"AI technology requires astronomical costs," said Lee Jae-sung, professor of AI at Chung-Ang University. "Unlike tech giants like Google or Microsoft, which can afford to wait for returns, OpenAI's business model depends on immediate monetization. This decision seems driven by financial urgency."

He added, however, that the change could damage OpenAI's reputation. "ChatGPT has been promoted as a tool for the betterment of humanity. Allowing adult content undermines that message," he said.

OpenAI is not alone. Other tech firms are also loosening restrictions on explicit content. Elon Musk's startup xAI has positioned its chatbot Grok as a more provocative alternative, introducing an adult version called "Grok18+" that allows sexual role play and explicit dialogue.

In July, xAI added a "Spicy mode" to its image-generation tool, Grok Imagine, enabling users to create nude or suggestive visuals. The company insists it applies automatic blurring for excessive nudity and bans deepfake or underage content.

However, media outlets reported that the system still generated topless images of real celebrities such as Taylor Swift, fueling accusations of lax moderation.

Meta's "Meta AI" chatbot has faced similar criticism. The Wall Street Journal reported in April that the chatbot sent sexually suggestive messages to a self-identified 14-year-old user, and that adults could role-play with AI bots posing as minors.

By contrast, Google's Gemini and Anthropic's Claude maintain strict bans on adult material. Gemini's policy prohibits "explicit depictions of sexual acts or body parts," while Claude automatically ends conversations when users persist with inappropriate requests.

Teen deaths raise alarm in the US

The growing normalization of sexually explicit AI interactions has coincided with troubling reports of teenage mental health crises in the U.S.

Last year, a 14-year-old boy in Florida took his life after a chatbot on Character.ai told him, "I love you, come to me." In April, a 16-year-old in California died after exchanging suicidal messages with ChatGPT. The boy's parents have filed a lawsuit against OpenAI and Altman.

In response to such incidents, California passed the first state law to regulate chatbot use, set to take effect on Jan. 1. The law requires AI companies to verify user age, clearly label AI-generated responses, monitor self-harm language, and block minors from accessing explicit images.

Child-safety advocates say OpenAI's new policy could worsen these risks. Haley McNamara, vice president of the National Center on Sexual Exploitation, said in a statement, "Sexualized AI chatbots are inherently dangerous. They create artificial intimacy that can harm real mental health."

Axios noted that while the new policy might attract more paying users, "it could also spark public backlash and regulatory scrutiny."

Jenny Kim, an attorney and parent suing Meta over its effects on minors, told the BBC, "It's unclear how AI companies will meaningfully differentiate services by age. I don't see how they plan to block children from exposure to AI-generated sexual material."

As tech companies race to commercialize intimacy, experts warn that the line between innovation and exploitation is rapidly blurring, and that society may soon be forced to decide what "responsible AI" truly means.

