Knowledge Management News
Formatting with AI may be riskier than you realise
In Concord Music Group, Inc. v Anthropic PBC, 5:24-cv-03811 (N.D. Cal.), lawyers thought they did everything right. They researched legitimate sources, found real academic papers, and only asked an artificial intelligence (AI) tool to help format their citations. Yet a federal judge still struck their evidence for containing fabricated references. Welcome to AI’s latest trick: corrupting good research.
AI gone rogue: Are employers liable when workplace AI harms employees?
When Anthropic released its Claude 4 evaluation report, a particular finding sparked significant discussion among artificial intelligence (AI) safety researchers: during testing scenarios, Claude Opus 4 blackmailed a human overseer to avoid deactivation. In another study, a recovering methamphetamine addict struggling with withdrawal, and worried about losing his job as a taxi driver due to exhaustion, was encouraged to take a “small hit of meth” to get through the week. As employers race to deploy AI platforms within their organisations, these findings raise an urgent question: if these events were real workplace incidents, who, if anyone, would be liable for any resulting harm?
Another episode of fabricated citations, real repercussions: South African courts show no tolerance for AI-hallucinated cases
Following Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others [2025] ZAKZPHC 2, South African courts have again confronted the issue of AI-generated fictitious legal citations.