Will ChatGPT’s hallucinations be allowed to ruin your life?


This Ars Technica article goes to some very interesting places. Can an AI defame someone?

AI companies watching this case play out might think they can get by doing as OpenAI did. Rather than building perfect chatbots that never defame users, they could simply warn users that content may be inaccurate, wait for content takedown requests, and then filter out any false information—ideally before any lawsuits are filed.

The only problem with that strategy is the time it takes between a person first discovering defamatory statements and the moment when tech companies filter out the damaging information—if the companies take action at all.
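The gap the article worries about is easy to picture. Here’s a minimal sketch, assuming the “wait for takedowns, then filter” strategy boils down to a blocklist checked after generation; the blocklist entry, the matching logic, and the function name are all hypothetical, not anything OpenAI has described:

```python
# Hypothetical sketch of a "warn, wait for takedowns, then filter" strategy:
# a blocklist-style filter, not how any real chatbot vendor says it works.

TAKEDOWN_BLOCKLIST: set[str] = {
    # An entry only lands here after someone finds the lie and complains.
    "jane doe was convicted of embezzlement",
}

def filter_response(generated_text: str) -> str:
    """Suppress output matching a claim already under a takedown request."""
    normalized = generated_text.lower()
    for claim in TAKEDOWN_BLOCKLIST:
        if claim in normalized:
            return "[removed following a takedown request]"
    return generated_text

print(filter_response("Jane Doe was convicted of embezzlement."))      # filtered
print(filter_response("Jane Doe embezzled funds from her employer."))  # a paraphrase slips through
```

Until someone discovers the lie and files a request, nothing is on the list, and even then a naive substring match misses every paraphrase. That lag is exactly the window the article is worried about.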

Is the AI just writing ‘draft content’ that can’t be trusted? And what if you can’t afford a lawyer? What if, without your knowledge, every potential employer asks the AI about you and comes away believing you eat babies?

If I had an intern writing draft content that included blatant lies, I don’t imagine that intern would land a full-time job offer. At least a search engine will send you to the source of the error so you can correct it there.

There’s also a neat discussion of a company called CaliberAI that’s trying to work out how to detect defamatory statements.
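The article doesn’t detail CaliberAI’s approach, but the general shape of the problem is a text-classification task. Here’s a minimal sketch, assuming an off-the-shelf zero-shot classifier from Hugging Face’s transformers library; the labels, threshold, and function name are my own illustrative choices, not CaliberAI’s:

```python
# Hypothetical sketch of flagging potentially defamatory text with a
# zero-shot classifier. This is an illustration, not CaliberAI's method.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Illustrative labels; a real system would need far more nuance.
CANDIDATE_LABELS = ["accuses a named person of wrongdoing", "neutral statement"]

def flag_possible_defamation(text: str, threshold: float = 0.8) -> bool:
    """Return True if the accusatory label wins with a high score."""
    result = classifier(text, candidate_labels=CANDIDATE_LABELS)
    # result["labels"] and result["scores"] are sorted by score, highest first
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label == CANDIDATE_LABELS[0] and top_score >= threshold

print(flag_possible_defamation("John Smith embezzled money from his clients."))
print(flag_possible_defamation("John Smith works at a bank."))
```

Even then, flagging an accusation is not the same as knowing it is false: accurately reporting a real conviction is accusatory but not defamatory, which is presumably why this is hard enough to build a company around.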