
    Sexualised deepfakes on X are a sign of things to come. NZ law is already way behind

    While the UK reins in deepfakes on Elon Musk’s X, NZ’s outdated laws leave women exposed and platforms unaccountable.

    Cassandra Mudgway, Senior Lecturer in Law, University of Canterbury, Andrew Lensen, Senior Lecturer in Artificial Intelligence, Te Herenga Waka — Victoria University of Wellington
    The Conversation


    Elon Musk finally responded last week to widespread outrage about his social media platform X letting users create sexualised deepfakes with Grok, the platform’s artificial intelligence (AI) chatbot.

    Musk has now assured the United Kingdom government he will block Grok from making deepfakes in order to comply with the law. But the change will likely only apply to users in the UK.

    These latest complaints were hardly new, however. Last year, Grok users were able to “undress” posted pictures to produce images of women in underwear, swimwear or sexually suggestive positions. X’s “spicy” option let them create topless images without any detailed prompting at all.

    And such cases may be signs of things to come if governments aren’t more assertive about regulating AI.

    Despite public outcry and growing scrutiny from regulatory bodies, X initially made little effort to address the issue and simply limited access to Grok on X to paying subscribers.

    Various governments took action, with the UK announcing plans to legislate against deepfake tools, joining Denmark and Australia in seeking to criminalise such sexual material. UK regulator Ofcom launched an investigation of X, seemingly prompting Musk’s about-turn.

    So far, the New Zealand government has been silent on the issue, even though domestic law is doing a poor job of preventing or criminalising non-consensual sexualised deepfakes.

    Holding platforms accountable

    The Harmful Digital Communications Act 2015 does offer some pathways to justice, but is far from perfect. Victims are required to show they’ve suffered “serious emotional distress”, which shifts focus to their response rather than the inherent wrong of non-consensual sexualisation.

    Where images are entirely synthetic rather than “real” (generated without a reference photo, for example), legal protection becomes even less certain.

    A members’ bill is expected to be introduced later this year that would criminalise the creation, possession and distribution of sexualised deepfakes without consent.

    This reform is both necessary and welcome. But it only tackles part of the problem.

    Criminalisation holds individuals accountable after harm has already occurred. It does not hold companies accountable for designing and deploying the AI tools that produce these images in the first place.

    We expect social media providers to take down child sexual abuse material, so why not deepfakes of women? While users are responsible for their actions, platforms such as X provide an ease of access that removes the technical barrier to deepfake creation.

    The Grok case has been in the news for many months, so the resulting harm is easily foreseeable. Treating such incidents as isolated misuse distracts from the platform’s responsibility.

    Light-touch regulation is not working

    Social media companies (including X) have signed the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms, but this is already out of date.

    The code does not set standards for generative AI, require risk assessments before an AI tool is deployed, or impose meaningful consequences for failing to prevent predictable forms of abuse.

    This means X can get away with allowing Grok to produce deepfakes while still technically complying with the code.

    Victims could also hold X responsible by complaining to the Privacy Commissioner under the Privacy Act 2020.

    The commissioner’s guidance on AI suggests that both the use of someone’s image as a prompt and the generated deepfake could count as personal information.

    However, these investigations can take years, and any compensation is usually small. Responsibility is often split among the user, the platform and the AI developer. This does little to make platforms or AI tools such as Grok safer in the first place.

    New Zealand’s approach reflects a broader political preference for light-touch AI regulation that assumes technological development will be accompanied by adequate self-restraint and good-faith governance.

    Clearly, this isn’t working. Competitive pressures to release new features quickly prioritise novelty and engagement over safety, with gendered harm often treated as an acceptable byproduct.

    A sign of things to come

    Technologies are shaped by the social conditions in which they are developed and deployed. Generative AI systems trained on masses of human data inevitably absorb misogynistic norms.

    Integrating these systems into platforms without robust safeguards allows sexualised deepfakes that reinforce existing patterns of gender-based violence.

    These harms extend beyond individual humiliation. The knowledge that a convincing sexualised image can be generated at any time – by anyone – creates an ongoing threat that alters how women engage online.

    For politicians and other public figures, that threat can deter participation in public debate altogether. The cumulative effect is a narrowing of digital public space.

    Criminalising deepfakes alone won’t fix this. New Zealand deserves a regulatory framework that recognises AI-enabled, gendered harm as foreseeable and systemic.

    That means imposing clear obligations on companies that deploy these AI tools, including duties to assess risk, implement effective guardrails, and prevent predictable misuse before it occurs.

    Grok offers an early signal of the challenges ahead. As AI becomes embedded across digital platforms, the gap between technological capabilities and legislation will continue to widen unless those in power take action.

    At the same time, Elon Musk’s response to legislative action in the UK demonstrates how effective political will and robust regulation can be.


    The authors acknowledge the contribution of Chris McGavin to the preparation of this article.


    Andrew Lensen receives funding from the Ministry of Business, Innovation and Employment and the Royal Society of New Zealand through contestable academic research funds. He is the co-director of LensenMcGavin AI, a consultancy specialising in the responsible uptake of AI in Aotearoa.

    Cassandra Mudgway does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2026 TheConversation, NZCity
