Hallucination Complications

As artificial intelligence (AI) tools become increasingly embedded in legal practice, South African legal practitioners (and potentially their insurers) face a new risk that challenges traditional notions of professional liability and insurance coverage.

These developments raise the question of whether professional indemnity insurance will cover attorneys and advocates when AI-generated errors lead to client losses.

15 Jul 2025 | Dispute Resolution Alert

At a glance

  • Attorneys who incorporate artificial intelligence (AI) tools into their practice must ensure that all outputs are thoroughly reviewed and verified. Failure to do so may expose them to regulatory scrutiny and leave them without professional indemnity cover.
  • To mitigate these risks, law firms should consider implementing internal protocols for AI use, including mandatory verification of all AI-generated content. Training programmes should educate legal professionals on the limitations of AI tools and the importance of human oversight.
  • Insurers, for their part, may need to revisit policy wording to address the evolving landscape of legal technology. This could include specific endorsements for AI-related risks or exclusions that clarify the boundaries of coverage.

The hallucination-induced complications

A spate of recent judgments (both locally and internationally) has dealt with case law references submitted in argument that simply do not exist – the dreaded AI “hallucination”.

One question arising from this legal “evolution” and the move towards actively utilising AI in legal practice is whether a client will have a valid claim against an attorney for professional negligence where the attorney’s reliance on AI-generated content is a contributing (or the sole) factor in an adverse judgment or order being delivered against the client.

Duty of care

The duty of care owed by legal practitioners to their clients includes the obligation to verify the accuracy and reliability of all legal submissions, regardless of their source. If an attorney incorporates, unchecked, fictitious case law or misleading arguments manufactured or hallucinated by an AI tool, and this materially affects the outcome of the client’s matter, the client could demonstrate that the attorney failed to exercise the standard of skill and diligence expected of a reasonable legal practitioner. Depending on the facts of the matter, this could also give rise to a delictual or contractual claim for damages, particularly where the client suffers financial or reputational harm as a result of the attorney’s negligence.

The presumed comfort of insurance

When faced with a claim for negligence, an attorney will turn to their Professional Indemnity (PI) insurers. PI insurance is designed to protect legal practitioners against claims arising from negligence in the course of their professional duties.

Having addressed whether the client has a claim against the attorney, the next question is whether the attorney’s PI cover extends to negligence arising out of the use of AI tools.

While PI policies generally cover negligence by legal practitioners, they often contain specific exclusions or limitations and require the insured to act diligently and to take reasonable steps to prevent or mitigate losses.

The discomfort of rejection

Where an attorney relies on AI-generated or hallucinated content, such as case law, without verifying its accuracy, insurers may argue (and would be justified in doing so) that this constitutes a failure to satisfy the reasonable-steps obligation.

The situation becomes even murkier when considering the evolving nature of AI tools. Unlike junior attorneys or paralegals, AI systems are not regulated, licensed, or subject to professional codes of conduct. AI does not write board exams, nor does it apply for admission to the profession as a fit and proper person.

This lack of accountability may (and should) prompt insurers to treat AI-related errors as outside the scope of traditional coverage, unless the policy explicitly includes technology-related risks. In such cases, the error may be deemed a result of negligence on the part of the practitioner, rather than a fault of the AI tool itself.

The use of unverified AI tools, particularly those prone to hallucinations, will likely be construed as a breach of the practitioner’s duty to act with reasonable care and skill – meaning that insurers would likely reject PI claims of this nature.

A cautionary tale

Attorneys who incorporate AI tools into their practice must do so with caution, ensuring that all outputs are thoroughly reviewed and verified. Failure to do so may not only expose them to regulatory scrutiny but also leave them without the safety net of their PI insurance if things go wrong.

To mitigate these risks, law firms should consider implementing internal protocols for AI use, including mandatory verification of all AI-generated content. Training programmes should educate legal professionals on the limitations of AI tools and the importance of human oversight.

Insurers, for their part, may need to revisit policy wording to address the evolving landscape of legal technology. This could include specific endorsements for AI-related risks or exclusions that clarify the boundaries of coverage.

The South African Law Reform Commission may also play a role in shaping future regulation. As AI becomes more prevalent in legal practice, statutory guidance is necessary to delineate responsibilities and liabilities.

The integration of AI into legal practice offers immense potential; however, it also introduces complex risks. The consequences of AI hallucinations can be severe – professionally, ethically and financially.

Whether insurers will cover attorneys in such scenarios depends largely on policy wording, the nature of the error, and the steps taken to mitigate risk.

Until clearer guidance emerges, attorneys and advocates would be wise to treat AI as an aid to – not a substitute for – professional judgment. Insurers, on the other hand, must grapple with the reality that the future of legal risk is no longer purely a matter of human error.
