An overview of the Artificial Intelligence in the South African Financial Sector report

On 24 November 2025, the South African Reserve Bank Prudential Authority (PA) and the Financial Sector Conduct Authority (FSCA) released a joint report on Artificial Intelligence (AI) in the South African Financial Sector (Report). The Report provides valuable insight into the deployment and adoption of AI in the financial sector, the anticipated investments by financial institutions in AI systems, and the benefits and risks associated with AI systems.

3 Dec 2025 | 4 min read | Combined Corporate & Commercial and Technology & Communications Alert

At a glance

  • On 24 November 2025, the South African Reserve Bank Prudential Authority and the Financial Sector Conduct Authority released a joint report on Artificial Intelligence (AI) in the South African Financial Sector (Report).
  • The Report provides valuable insight into the deployment and adoption of AI in the financial sector, the anticipated investments by financial institutions in AI systems, and the benefits and risks associated with AI systems.

Approaches around the world

The Report considers the legislative and regulatory approaches that have been adopted for the deployment of AI around the world. It identifies two approaches to AI regulation: reliance on existing regulatory frameworks that impose general requirements for the use of technology, and the development of technology-specific regulatory requirements. The Report considers the regulation of AI in other jurisdictions and identifies several commonalities: a risk-based approach; general and sector-specific requirements; policy alignment to incorporate AI frameworks into other policies; private sector collaboration; and workshops to gather information that assists in identifying good practices for AI regulation. While the approaches adopted internationally will serve as a good starting point for the FSCA and PA, these regulatory bodies will need to consider the nuances of the South African financial sector when drafting sector-specific AI regulation, to ensure that innovation and start-ups are not stifled.

AI deployment in the financial sector

The Report identifies where AI is currently being deployed by various categories of financial institutions. The main use cases for AI in financial institutions are:

  1. Operational efficiency: Financial institutions can use AI to automate repetitive tasks, resulting in decreased costs and faster processing times for transactions and applications.
  2. Personalised customer experience: Financial institutions can leverage AI to analyse customer behaviour and provide customised financial products and service offerings. The use of AI chatbots also improves customer satisfaction through quicker response times.
  3. Risk management and compliance: AI can detect patterns that may indicate fraud more quickly and allows for real-time, continuous monitoring of customer accounts. Credit risk assessments may be improved, and made more accurate, with the use of AI. AI may also be deployed to manage operational risk by identifying inefficiencies and risks in the current financial system.
  4. Data-driven decision-making: AI can process data at a higher rate than humans and can provide deeper insight into market trends and credit modelling, allowing financial institutions to be more competitive and to limit their potential risk.

Opportunities and risks of AI deployment

The deployment and use of AI in the financial sector does not come without significant risks and challenges.

  • The Report indicates that AI adoption across financial subsectors ranged from 8% (insurers and lenders) to 50–52% (payment providers and banks).
  • Of the banks surveyed, 45% indicated that they were planning to invest more than R30 million in AI in 2024. This varied significantly from the majority of insurers and investment providers which indicated that they did not plan to spend more than R1 million on AI investments.
  • Data security and data privacy and protection risks were identified as critical risks by 37% of surveyed financial institutions.
  • Cybersecurity risks were identified as critical risks by 37% of financial institutions and reputational risks were identified as critical for 26% of financial institutions.
  • The Report found that 53% of staff in financial institutions are not sufficiently trained to use AI.
  • 41% of organisations identified that a constraint on the use of AI is a lack of appropriate transparency and explainability.
  • 37–39% of organisations identified a lack of access to sufficient data, and accountability and governance concerns, as constraints when introducing AI.
  • 21% of financial institutions do not use an explainability method when using AI.

Ethical considerations

There are ethical considerations that need to be taken into account when AI is used in the financial sector. AI has been shown to exhibit bias in decision-making, which can result in discriminatory practices, for example in credit-lending applications. Financial service providers must ensure that the AI systems they deploy do not erode the principles of fairness, transparency and accountability. The Report also notes that AI needs to be deployed with human autonomy and oversight.

Advice for financial institutions

With the high rate of AI implementation in the financial sector, financial institutions should consider where AI solutions may be adopted to increase efficiency and decrease expenditure. This will ensure that financial institutions do not fall behind their counterparts as AI continues to develop. However, financial institutions should put appropriate governance frameworks in place to ensure compliance with data protection requirements and the Joint Standard – Cyber Security and Cyber Resilience. Financial institutions deploying AI also need to ensure that there is sufficient transparency in the use of AI to prevent infringement of consumer protection rights.

Conclusion

The Report demonstrates that financial institutions have approached the adoption of AI with cautious optimism. Financial institutions need to mitigate the risks arising from AI adoption so that its potential can be fully harnessed. A human-centric approach should be adopted to maintain trust, fairness, accountability and transparency with customers when deploying AI systems.

Copyright © 2025 Cliffe Dekker Hofmeyr. All rights reserved. For permission to reproduce an article or publication, please contact us at cliffedekkerhofmeyr@cdhlegal.com.