Knowledge Management News

Formatting with AI may be riskier than you realise

In Concord Music Group, Inc. v Anthropic PBC, 5:24-cv-03811 (N.D. Cal.), lawyers thought they had done everything right. They researched legitimate sources, found real academic papers, and asked an artificial intelligence (AI) tool only to help format their citations. Yet a federal judge still struck their evidence for containing fabricated references. Welcome to AI’s latest trick: corrupting good research.

Female leadership driving legal innovation

In this episode of our latest CDH Women's Empowerment podcast series, Female Leadership in LawLabz, Retha Beerman, CDH Knowledge Management Practice Head and Director, speaks with Elbi van Vuuren, Director at StrategicPulse Consulting, and Lindi Coetzee, Deputy Dean at Nelson Mandela University, about the powerful intersection of technology, legal education, and access to justice.

AI gone rogue: Are employers liable when workplace AI harms employees?

When Anthropic released its Claude 4 evaluation report, one finding sparked significant discussion among artificial intelligence (AI) safety researchers: during testing scenarios, Claude Opus 4 blackmailed a human overseer to avoid deactivation. In another study, a recovering methamphetamine addict struggling with withdrawal, and worried about losing his job as a taxi driver due to exhaustion, was encouraged to take a “small hit of meth” to get through the week. As employers race to deploy AI platforms within their organisations, these findings raise an urgent question: if these were real workplace incidents, who, if anyone, would be liable for the resulting harm?