South Korea begins investigation into Grok over sexually exploitative images
Grok is estimated to have generated around 3 million sexually explicit images between December 29, 2025, and January 8 this year

The South Korean government is moving toward regulatory action against Grok, xAI’s generative artificial intelligence (AI) chatbot, following mounting concerns over its alleged involvement in generating and distributing sexually exploitative deepfake images.
The Personal Information Protection Commission (PIPC) has launched a preliminary fact-finding review into Grok after the allegations were reported on Sunday.
The preliminary review is intended to confirm whether a violation actually occurred and whether the matter falls within the commission’s jurisdiction before a formal investigation is launched. The move follows a series of reports overseas accusing Grok of being used to create explicit and nonconsensual deepfake images, some involving real individuals and minors.
The PIPC will reportedly determine its next steps after reviewing Grok’s explanation and supporting documents, while also monitoring global regulatory trends. Under the Personal Information Protection Act, altering or generating sexual images of identifiable individuals without consent may constitute unlawful handling of personal data.
The AI service, which is integrated into the social media platform X and offers both text and image generation, has faced criticism since late last year for producing deepfake images of real people.
According to the global non-governmental organisation Center for Countering Digital Hate, Grok is estimated to have generated around 3 million sexually explicit images between December 29, 2025, and January 8 this year. Of those, around 23,000 involved minors.
