ChatGPT falsely accused a mayor of bribery when he was actually the whistleblower

OpenAI’s revolutionary chatbot ChatGPT is nearly as famous for its breathtaking speed (and seeming intelligence) as for its propensity for mistakes. Now those mistakes are starting to have real-world ramifications. Take the case of Brian Hood, mayor of Hepburn Shire, north of Melbourne in Australia: He is considering suing OpenAI for defamation after his constituents started telling him that ChatGPT claimed he had served prison time for bribery, Reuters reported Wednesday. In fact, Hood says that not only has he never been in prison, but he was the whistleblower who flagged the bribery in the first place.

“He’s an elected official, his reputation is central to his role,” James Naughton, a partner at Gordon Legal, which is representing Hood, told Reuters. “It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space.”

Members of the public alerted the mayor to ChatGPT’s false accusations after the chatbot claimed that Hood was among those found guilty in a bribery case that took place between 1999 and 2004 involving Note Printing Australia, a subsidiary of the Reserve Bank of Australia. The truth was quite the reverse: Yes, Hood worked at Note Printing Australia, but his lawyers say he was the one who alerted authorities to the payment of bribes to foreign officials, and he was never charged with a crime himself. Now Hood says he’s worried his name will be tarnished if inaccurate claims spread via ChatGPT.

In late March, Hood’s legal team sent a letter of concern to OpenAI, giving the company 28 days to correct the errors or face a defamation lawsuit. OpenAI has reportedly not yet responded to Hood.

OpenAI did not immediately return Fortune’s request for comment.

Chatbots and accuracy

If Hood sues OpenAI, it would be the first known defamation case over responses generated by ChatGPT, which has been a viral sensation since its launch last November. The bot rapidly amassed users, hitting an estimated 100 million monthly active users within two months of launch and becoming the fastest-growing consumer platform in internet history.

But this wouldn’t be the first time OpenAI has faced claims of factual errors. In February, the company said it was working to address bias in ChatGPT after receiving a barrage of complaints about inappropriate and inaccurate responses. Other chatbot platforms have likewise produced made-up facts. A study of Google’s Bard chatbot released Wednesday found that when prompted to produce widely known false narratives, the platform does so easily and frequently, in almost eight out of 10 controversial topics, without giving users a disclaimer. In fact, Bard made a mistake on its very first day post-launch, an error that wiped roughly $100 billion off the market value of parent company Alphabet.

In more extreme cases, chatbot interactions have even been linked to deaths. Eliza, a chatbot developed by San Francisco–based Chai Research, reportedly nudged a Belgian man to end his life after he opened up to the bot about his worries. Such cases have raised concerns about how A.I. development will be overseen as the technology comes into common use.

For its part, OpenAI CEO Sam Altman has said that ChatGPT, even with its new and upgraded GPT-4 technology, is “still flawed, still limited.”

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” OpenAI said in a February blog post. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging—taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.”

The A.I. industry has also been calling for regulation of such tools, which are starting to be used for all sorts of things, from homework to assisting financial advisors. The U.S. Copyright Office recently ruled that A.I.-generated art is not eligible for copyright protection, but no similar guidelines or laws are in place for text-based content produced by chatbots.
