
AI Regulation is Here

Since the release of ChatGPT in late 2022, the discussion around artificial intelligence (AI) has exploded. While this initially caught many regulators off guard, they have since been rapidly introducing regulation in an effort to avoid what many perceive as the failure to regulate digital advertising.

Where We Are Now

For most organizations dealing with the United States and Europe, a few notable laws have already passed and begun enforcement. These include Utah’s Artificial Intelligence Policy Act (effective May 1, 2024), Colorado’s Artificial Intelligence Act (which enters enforcement on February 1, 2026), and the European Union’s colossal Artificial Intelligence Act (which has multiple phase-in dates over the next several years).

We have also recently seen amendments pass, such as an amendment in California that extends the protections of the California Consumer Privacy Act to AI systems when those systems handle the personal information of California residents.

It can thus be reasoned that adding personal information from jurisdictions with privacy laws may bring those laws into scope, in addition to any AI-specific laws that also apply. Organizations are advised to thoroughly understand their use case and the data involved in order to determine their regulatory obligations and avoid being blindsided by unexpected enforcement actions.

What Is Coming

It could be said that these laws are precursors of what is to come. As of earlier this month, state legislatures have already introduced more AI bills than in all of 2024.

It is exceptionally unlikely that, out of these hundreds of bills, nothing will pass and become law. Still, there are common themes between what has already been enacted and what is proposed. Forward-thinking organizations would be well served by designing and implementing an AI Governance Program – something that many of the enacted regulations and proposed bills require.

AI Governance Programs

A common misconception is that all the risk sits in the AI model itself. An AI Risk Management framework helps to illustrate the required or suggested processes for managing the entire AI system, inclusive of its hardware, relevant software, model, and security and privacy concerns.

To this end, several organizations have released AI risk management frameworks that may be worth considering, such as the NIST AI Risk Management Framework and ISO/IEC 42001.

These frameworks seek to introduce controls that enable or support items such as:

  • Explainability
  • Accountability
  • Transparency
  • Fairness
  • Robustness
  • Safety
  • Security
  • Data Governance; and
  • Privacy

AI laws often require extensive documentation of the AI risk analysis (which regulators can often request), and several impose specific retention requirements on that documentation. Organizations would be well served by factoring the time investment for these activities into any AI effort.

AI Security

In addition to the above governance program, I want to drill down a bit into the security aspect. Many of these regulations mandate security of the AI system. AI security builds on, rather than replaces, a robust existing security program. Several laws and bills mandate external validation for specific use cases, to be conducted by an experienced penetration testing team. This means security is not something that can be quickly bolted on; it needs to be considered early in the design process, as it can have fundamental impacts on the overall system architecture. Organizations are advised to have trained cybersecurity personnel involved in the design and deployment of the AI system to assist in these matters.

Organizations such as OWASP have published a Top 10 list of AI-related concerns that is worth reviewing. It should be realized that while the top 10 are easily listed, mitigating them may require sizable effort depending on the use case.

Summary

AI regulation will continue to take center stage for the next several years as states domestically and countries internationally seek to put laws on the books that clarify liability and reduce risk to the general populace. Any organization undertaking AI work is thus strongly advised to consider proactively implementing an AI Risk Management program and ensuring that properly trained professionals are involved in the design and implementation of any AI system. These are actions that can be taken now to avoid pain later, once enforcement picks up in the coming months and years.

Published in AI, Governance, Legal, Security
