Regulating Emerging Tech in the Silicon Savanna: Kenya’s Artificial Intelligence Bill, 2026

18 Mar 2026 | 4 min read | Corporate & Commercial Law Alert

Background

Kenya has established itself as a regional leader in the development and adoption of innovative technologies, driven by sustained public and private sector support. Its regulatory approach has generally sought to balance fostering innovation with protecting the rights and safety of consumers.

Building on this foundation, Kenya’s ICT Policy Guidelines, 2020 recognised the centrality of artificial intelligence (AI), and the Ministry of ICT & the Digital Economy published a National AI Strategy in March 2025 (the AI Strategy). The AI Strategy highlighted “a recognised need for comprehensive AI-specific regulations to address ethical implications and potential harms.”

The government has now published the Artificial Intelligence Bill, 2026 (the AI Bill), which aims to provide a comprehensive framework for regulating and governing AI in Kenya while fostering innovation and safeguarding human rights, data protection and public welfare. This alert provides a synopsis of the AI Bill’s salient features.

Salient Features of the AI Bill

Key Definitions

The AI Bill adopts a definition of AI aligned with the European Union’s AI Act, describing it as a machine-based system leveraging machine learning, data processing and algorithmic methods to operate with varying levels of autonomy and inferring outputs such as predictions, recommendations or decisions from inputs.

Two key stakeholders are assigned specific obligations: a deployer, being a person who puts an AI system into service under their authority (excluding personal end-users), and a provider, being a person who develops an AI system and places it on the market under their own name or trademark. The AI Bill also defines a high-risk AI system as one that poses significant risks to health, safety, fundamental rights or societal welfare.

Proposed Regulatory Oversight

The AI Bill proposes establishing the Office of the AI Commissioner as a corporate state office with broad oversight functions, including:

  • conducting risk assessments and conformity audits of AI systems, including high-risk systems;
  • developing policies, guidelines and standards on AI governance, ethics and responsible deployment;
  • establishing and managing regulatory sandboxes to facilitate safe innovation and testing;
  • maintaining a public register of high-risk AI systems; and
  • receiving and investigating complaints relating to AI systems, including harms such as bias or discrimination.

The AI Commissioner would also have powers to enter and inspect premises, require production of records, issue compliance orders and impose administrative fines. An Advisory Committee on AI, chaired by the AI Commissioner and comprising government, non-governmental and private sector stakeholders, would support the AI Commissioner’s mandate. The AI Commissioner would further be required to develop ethical guidelines addressing privacy, data protection, bias, human oversight, environmental sustainability and the prevention of misinformation—including prohibitions on non-consensual use of personal images in AI-generated content.

AI Governance

Risk Classification Framework — The AI Bill proposes a risk-based governance approach classifying AI systems by the level of risk they pose. Risk classes include: unacceptable risk (prohibited systems posing severe threats); high risk (systems in critical sectors such as healthcare, education, finance, security and public administration); limited risk; and minimal risk.

Risk and Impact Assessments — Providers and deployers of high-risk AI systems would be required to conduct human rights impact assessments and general risk assessments before deployment, and to implement mitigation measures including human oversight. Data protection impact assessments under the Data Protection Act would also be required, as would workforce impact assessments where AI systems are likely to affect employment, with reskilling programmes implemented where necessary.

Transparency and Notification — Decision-making processes of high-risk AI systems must be transparent, traceable and explainable. Providers would be required to submit annual compliance reports to the AI Commissioner, with non-confidential information in those reports made publicly available. Users and affected persons must be notified of the nature, purpose and limitations of AI systems, the extent of automated decision-making, and the measures taken to ensure fairness and mitigate bias.

Human Centricity — The AI Bill would require AI systems to be designed to support human involvement and enhance human capabilities rather than replace them, and to provide for human oversight in critical decisions.

Enforcement

Beyond administrative fines, the AI Bill proposes criminal penalties of fines of up to KES 1 million or KES 5 million (approximately USD 7,800 and USD 39,000 respectively) and imprisonment of up to six months or two years, depending on the offence. Offences principally relate to non-compliance with requirements such as disclosure, transparency and impact assessment obligations for high-risk AI systems.

Conclusion

In summary, the AI Bill introduces a regulatory framework that, consistent with Kenya’s AI Strategy, seeks to protect users of AI tools without stifling the culture of innovation that defines Kenya’s tech ecosystem. Its alignment with international trends—particularly the risk-based approach to AI governance—positions Kenya among jurisdictions actively shaping AI regulation.

As the AI Bill progresses through the legislative process, it will be interesting to see how the text evolves with public and parliamentary input. We will continue to monitor the Bill and publish further alerts as new developments emerge.

The information and material published on this website is provided for general purposes only and does not constitute legal advice. Please consult one of our lawyers on any specific legal problem or matter. Copyright © 2026 Cliffe Dekker Hofmeyr. All rights reserved.