Lawsuits Against AI Creators Needed For Accountability, Expert Says

An Australian mayor is considering legal action against OpenAI for defamation after its ChatGPT tool falsely claimed he had been imprisoned for bribery. In fact, Hepburn Shire Council Mayor Brian Hood was a whistleblower employed by a subsidiary of Australia's national bank, and was never charged with a crime.

Lisa Palmer, Chief AI Strategist at aiLeaders, says lawsuits against AI creators are needed, and Hood's case is "compelling."

"Just like a GPS navigation system needs to provide accurate directions to avoid sending drivers off course, AI chatbots need to provide accurate and reliable information to users," Palmer said.

Disclaimers that warn users of potential inaccuracies may not be enough on their own, Palmer said.

"Disclaimers and warnings are like a warning sign on a hazardous road ahead. They are helpful in alerting drivers to potential dangers, but ultimately it's up to the driver to make the right decisions," Palmer said. "While disclaimers and warnings can be helpful in mitigating the risks associated with AI-generated content, it's important to strike a balance between providing users with access to information and ensuring that the information they receive is accurate and reliable."

Transparency and accountability in AI are "like the ingredients on a food label," and just as necessary for users of AI products, Palmer said.

"Just as consumers have the right to know what ingredients are in their food, users of AI chatbots have the right to know how the chatbot is generating its responses and whether the information is accurate and reliable," Palmer said.

Despite the growing pains, Palmer remains "bullish" on AI's potential to transform industries and drive innovation.

"AI can provide businesses with new insights, efficiencies, and totally new business models that are impossible through traditional methods," she said.

TMX contributed to this story.