
AI as Infrastructure

I recently attended the IAPP Global Privacy Summit and noticed something I have also heard repeated on various podcasts: the conversation around Artificial Intelligence is changing. We are no longer asking whether we should use AI, or discussing how to use it; instead, we are talking about building on top of AI-powered systems. In this context, AI and the systems it runs on are infrastructure supporting other business functions.

This shift in the discussion is notable, and it means how we think about AI has to change. If AI truly is infrastructure, then I see a disconnect between how people approach using AI to support business processes and actually treating it as infrastructure backed by sound engineering. So in this post I want to highlight some of the things I think deserve serious consideration when deciding to build a business or feature on top of an AI system.

Contracts and Compliance

Many companies are subject to legal and industry obligations that they have to weigh when making business decisions. Contracts are often used to handle aspects such as who owns what and what controls need to be in place. This is particularly important when dealing with standards such as SOC 2 or laws such as Europe's General Data Protection Regulation (GDPR).

Typically a business would not actively deploy systems that jeopardize these obligations. However, which AI service employees use, and at what license tier, can put compliance with those standards and regulations at risk.

Alex Isskova created a handy chart that illustrates this issue.

Generally speaking, a free account means the service can train on the data you feed it. Depending on the data and the jurisdiction, that ranges from a business risk to outright illegal. When treating AI as infrastructure, it's important to consider what it truly means to build a service offering on top of an AI platform. Different contracts may be required (such as Enterprise agreements), additional documentation may have to be written, and various risk assessments may have to be performed; chances are those items have to happen before the system launches for public use.

Business Continuity 

The next thing I have noticed relates to business continuity. Everyone wants to talk about what happens when the AI system works as expected. Very few people I have spoken with have had serious discussions about what happens if the AI system fails or becomes otherwise inaccessible. In technology, downtime isn't a matter of if, it's a matter of when. When you have critical business systems that need to keep running, consideration needs to be given to failure cases.

An example may be helpful. Let's say you're using Google Cloud Platform and leveraging Gemini in your project, and your project is set up exclusively in a single East Coast region (say, us-east1). What happens when that region goes offline due to a storm or some other event?

The wrong time to find out is when it actually happens; the right time is now. If the AI is infrastructure supporting other business processes and you're subject to uptime requirements, then work is typically required to ensure the AI and related systems can actually meet those service levels.
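
To make that concrete, below is a minimal sketch of one mitigation: failing over Gemini requests to a second region when the primary is unreachable. It assumes the Vertex AI Python SDK, and the project ID, region list, and model name are illustrative placeholders to verify against Google's current documentation.

# Minimal sketch: fail over Gemini requests across regions.
# Assumes the Vertex AI Python SDK; the project ID, regions,
# and model name are illustrative placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT_ID = "my-project"               # placeholder
REGIONS = ["us-east1", "us-central1"]   # primary first, then fallback

def generate_with_failover(prompt: str) -> str:
    last_error = None
    for region in REGIONS:
        try:
            # Re-point the SDK at the next region and retry the call.
            vertexai.init(project=PROJECT_ID, location=region)
            model = GenerativeModel("gemini-1.5-pro")
            return model.generate_content(prompt).text
        except Exception as exc:  # network errors, regional outages, etc.
            last_error = exc
    # Every region failed: surface the error so callers can degrade
    # gracefully (queue the work, fall back to a manual process, etc.).
    raise RuntimeError("all regions unavailable") from last_error

Even a simple wrapper like this forces the question of what the business process does when every region is down, which is exactly the failure case that tends to go undiscussed.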

High availability of critical systems comes from good engineering practices and from considering the failure cases in detail. The wrong time to do this work is when the system is offline and already impacting business processes. If AI truly is infrastructure, then you need to be proactive in ensuring the system can meet business objectives.

Security

The last thing I want to touch on is that the conversations I see often mention agentic automation of specific tasks. In this process the user takes some activity they regularly perform and hands it off to an AI agent to conduct. In many cases the user leverages their own accounts for this automation. That carries security considerations; two of the major ones relate to access levels and a concept called non-repudiation.

When it comes to access levels, most users are administrators on their personal computers. If the AI is granted the same access you have, it can effectively do everything you can do. It can open a file, run spell check, and send an email, but it can also reformat your hard drive. The danger here can be extensive, considering that some AI agents have been shown to execute commands embedded in a webpage they merely viewed, at the same permission level the agent was running at.

AI systems need to be properly scoped with permissions that limit the damage they can do. This typically means provisioning the AI system with a low-privilege service account, because you'd generally rather have the agent fail to complete a task than risk it destroying a system.
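
One practical pattern is to put a narrow gateway between the agent and the operating system, so only an explicit allowlist of actions can ever run and anything else fails closed. Here is a minimal sketch; the tool names and commands in the allowlist are hypothetical examples, not a definitive implementation.

# Minimal sketch: an allowlist gateway between an AI agent and the OS.
# Tool names and commands below are hypothetical examples.
import subprocess

# Only these commands may ever run; anything else is refused.
ALLOWED_TOOLS = {
    "count_lines": ["wc", "-l"],
    "list_reports": ["ls", "-l"],
}

def run_tool(tool_name: str, target: str) -> str:
    if tool_name not in ALLOWED_TOOLS:
        # Fail closed: refuse unknown requests instead of improvising.
        raise PermissionError(f"tool not permitted: {tool_name}")
    command = ALLOWED_TOOLS[tool_name] + [target]
    # A real system would also validate 'target' (e.g., restrict it to
    # one directory) and run under a low-privilege service account.
    result = subprocess.run(command, capture_output=True, text=True, timeout=30)
    return result.stdout

# The agent requests work through the gateway, never the shell directly:
# run_tool("count_lines", "report.txt")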

Allowing the AI to use the same user account as the person behind the keyboard poses another risk, one related to the security concept of non-repudiation. Non-repudiation ensures that a party in a transaction cannot deny the authenticity of their message or actions. This breaks down when multiple people or systems share an account. This isn't just about ensuring people can't deny their actions; it's critical for good governance of administrator accounts. If something goes wrong you want to be able to determine whether it was a user mistake or an AI error. If everything looks like the user in the logs, it can be difficult to identify the source and address it in a timely manner.
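
On the logging side, the straightforward fix is to give the agent its own identity and record the acting principal on every event. A minimal sketch using Python's standard logging module follows; the field names and identities are illustrative.

# Minimal sketch: audit events that attribute every action to a specific
# principal, so human mistakes and AI errors are distinguishable in logs.
# Field names and identities below are illustrative.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

def audit(actor: str, actor_type: str, action: str) -> None:
    audit_log.info(json.dumps(
        {"actor": actor, "actor_type": actor_type, "action": action}
    ))

# The human and the agent act under distinct identities:
audit("jsmith", "human", "approved configuration change")
audit("svc-ai-agent", "service_account", "drafted configuration change")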

Conclusion

It can be exciting to build things with AI. However, serious discussions have to happen when converting an AI-powered system into infrastructure for your business. As soon as you start building on top of an AI system, several risks surface that can imperil business operations if not addressed. I think it's neat that people are thinking about AI as infrastructure, but if it truly is infrastructure, then we need to treat it like it is, and that means taking the time to ensure it's fit for the business purpose we're matching it to.

Published in Engineering, Security
