Elon Musk's AI model Grok will no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it is illegal, after widespread concern over sexualised AI deepfakes.
"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers," reads an announcement on X, which operates the Grok AI tool.
The change was announced hours after California's top prosecutor said the state was probing the spread of sexualised AI deepfakes, including of children, generated by the AI model.
"We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it's illegal," X said in a statement. It also reiterated that only paid users will be able to edit images using Grok on its platform.
This will "add an extra layer of protection by helping to ensure that those who try and abuse Grok to violate the law or X's policies are held accountable," according to the statement.
"With NSFW [not safe for work] settings enabled, Grok is supposed to allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated films," Musk wrote online.
Musk had earlier defended X, saying that critics simply wanted to suppress free speech. In recent days, international leaders, including those in Malaysia and Indonesia, have criticised Grok's image-editing feature for allowing the creation of explicit images without consent.
Britain's media regulator, Ofcom, announced it would investigate whether X had failed to comply with UK law over the distribution of sexual images.
The restrictions follow a broader debate over the ethics of AI-generated content, which has raised questions about user privacy, consent, and the responsibilities of tech platforms.