Users generating explicit non-consensual images through Grok AI, developed by Elon Musk's xAI, have triggered a major backlash. Users were able to bypass the safety measures put in place to prevent manipulative deepfake images.
These incidents shine a light on the lack of safety around AI and have prompted government investigations into AI safety.
What Is Grok AI and Who Owns It?
Grok AI is a chatbot created by xAI, launched in 2023. Elon Musk built Grok to compete with rival AI products such as OpenAI's ChatGPT and Google's Gemini.
Grok is built on large language models trained on vast amounts of data, allowing it to generate written text and respond to user prompts. It is integrated into Musk's social media platform, X.
After xAI merged with X, Grok's image-generation capabilities, such as Grok Imagine, were integrated directly into the application, allowing anyone to create and modify images from text prompts.
How Did the Explicit Content Issue Emerge?
Grok users created explicit non-consensual images using Grok’s tools.
Around a month ago, Grok Imagine introduced a "spicy mode" that allows users to create adult-oriented content. Users exploited these tools by requesting changes to images of others (such as "replace her bikini with a transparent bikini") to create deepfake nude images.
Because of lax policies, the safeguards meant to prevent Grok from being used to create pornographic images were circumvented.
Why Is This Controversy Serious?
Creating non-consensual sexual images of real people is prohibited. The victims are primarily women, depicted in sexualised images and displayed without their consent. Deepfakes raise serious privacy and ethical concerns: consent is absent, and the images expose victims to further harassment.
xAI and Grok’s Response So Far
After facing criticism for failing to restrict its image features, xAI has placed restrictions on non-paying customers' ability to produce or modify images (AP, Jan. 16, 2026). Limits have been placed on paying customers as well.
In official statements responding to the deepfakes, xAI acknowledged its responsibility to help prevent the sexual abuse of individuals. The company has also tried to mitigate abuse by introducing and amending anti-hate speech policies, including removing antisemitic posts praising Hitler, which xAI acknowledged as an egregious error.
Government and Regulatory Reactions
Several countries have investigated or restricted the use of Grok. Turkey, for instance, banned Grok over its insults towards President Erdoğan and Atatürk. Other countries, including Malaysia and Indonesia, have begun considering restrictions on Grok because of the explicit content it produces.
In the United States, the concerns centre on national security. Some Jewish lawmakers have asked the Pentagon to review its ties with xAI, citing the risk posed by Musk's influence over Grok's responses.
What This Means for AI Platforms Going Forward
These incidents will put more pressure on AI companies to regulate themselves. The premise of xAI's "free speech" model has proven to be at odds with society's safety needs, and that tension is likely to persist, much as in other instances where Grok has echoed Musk's opinions on Middle Eastern conflicts.
Governments and companies will likely enforce stricter regulations on deepfakes in the future, and users will increasingly demand and expect greater protection and assurance from AI platforms.
