News | Technology
22 May 2025 18:35
NZCity News

    Evidence shows AI systems are already too much like humans. Will that be a problem?

    On the internet, nobody knows you’re a chatbot.

    Sandra Peter, Director of Sydney Executive Plus, University of Sydney, Jevin West, Professor, University of Washington, Kai Riemer, Professor of Information Technology and Organisation, University of Sydney
    The Conversation


    What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses — and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realise it’s artificial. What if we already have?

    In a comprehensive meta-analysis, published in the Proceedings of the National Academy of Sciences, we show that the latest generation of large language model-powered chatbots match, and in some cases exceed, most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test, fooling humans into thinking they are interacting with another human.

    None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence (AI) would be highly rational and all-knowing, but lack humanity.

    Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively and also empathetically. Another study found that large language models (LLMs) excel at assessing nuanced sentiment in human-written messages.

    LLMs are also masters at roleplay, assuming a wide range of personas and mimicking nuanced linguistic character styles. This is amplified by their ability to infer human beliefs and intentions from text. Of course, LLMs do not possess true empathy or social understanding – but they are highly effective mimicking machines.

    We call these systems “anthropomorphic agents”. Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. But LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphising them will fall flat.

    This is a landmark moment: when you cannot tell the difference between talking to a human or an AI chatbot online.

    On the internet, nobody knows you’re an AI

    What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels. This has applications across many domains, such as legal services or public health. In education, the roleplay abilities can be used to create Socratic tutors that ask personalised questions and help students learn.

    At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps, but anthropomorphic seduction comes with far wider implications.

    Users are ready to trust AI chatbots so much that they disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge.

    Recent research by AI company Anthropic further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

    This opens the door to manipulation at scale, to spread disinformation, or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations – without you ever asking.

    What can be done?

    It is easy to call for regulation, but harder to work out the details.

    The first step is to raise awareness of these abilities. Regulation should prescribe disclosure – users must always know when they are interacting with an AI, as the EU AI Act mandates. But this will not be enough, given these systems’ seductive qualities.

    The second step must be to better understand anthropomorphic qualities. So far, LLM tests measure “intelligence” and knowledge recall, but none measures the degree of “human likeness”. With such a test, AI companies could be required to disclose anthropomorphic abilities with a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

    The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems, from the spread of mis- and disinformation to the loneliness epidemic. In fact, Meta chief executive Mark Zuckerberg has already signalled that he would like to fill the void of real human contact with “AI friends”.

    Relying on AI companies to refrain from further humanising their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making its systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality”. ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its voice mode adds even more seductive appeal.

    Much good can be done with anthropomorphic agents. Their persuasive abilities can serve good causes as well as ill ones, from fighting conspiracy theories to enticing users into donating and other prosocial behaviours.

    Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn’t let it change our systems.

    The Conversation

    Jevin West receives funding from the National Science Foundation, the Knight Foundation, and others. The full list of funders and affiliated organizations can be found here: https://jevinwest.org/cv.html

    Kai Riemer and Sandra Peter do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

    This article is republished from The Conversation under a Creative Commons license.
    © 2025 TheConversation, NZCity
