Round-Up: ChatGPT, Bard, etc. - We Did Ask What Could Go Wrong, Didn't We?
Freeman writes:
A spokesperson for Gordon Legal provided a statement to Ars confirming that responses to text prompts generated by ChatGPT 3.5 and 4 vary, with defamatory comments still currently being generated in ChatGPT 3.5. Among "several false statements" generated by ChatGPT were falsehoods stating that Brian Hood "was accused of bribing officials in Malaysia, Indonesia, and Vietnam between 1999 and 2005, that he was sentenced to 30 months in prison after pleading guilty to two counts of false accounting under the Corporations Act in 2012, and that he authorised payments to a Malaysian arms dealer acting as a middleman to secure a contract with the Malaysian Government." Because "all of these statements are false," Gordon Legal "filed a Concerns Notice to OpenAI" that detailed the inaccuracy and demanded a rectification. "As artificial intelligence becomes increasingly integrated into our society, the accuracy of the information provided by these services will come under close legal scrutiny," James Naughton, Hood's lawyer, said, noting that if a defamation claim is raised, it "will aim to remedy the harm caused" to Hood and "ensure the accuracy of this software in his case."
It was only a matter of time before ChatGPT, an artificial intelligence tool that generates responses based on user text prompts, was threatened with its first defamation lawsuit. That happened last month, Reuters reported today, when an Australian regional mayor, Brian Hood, sent a letter on March 21 to the tool's developer, OpenAI, announcing his plan to sue the company for ChatGPT's alleged role in spreading false claims that he had gone to prison for bribery.
To head off the landmark lawsuit, Hood gave OpenAI 28 days to modify ChatGPT's responses and stop the tool from spreading the false claims.
Read more of this story at SoylentNews.