How to get budget for MLOps?
March 15, 2021
Jean Frerot


MLOps
ML in Production

Artificial intelligence has become omnipresent in recent years, yet in reality only a small percentage of models makes it to production and stays there. In a series of blog posts on MLOps, we explain why and how companies can adopt MLOps practices to unlock the business value of AI.


Have you experienced this as a Data Science team?


You present an exciting new proof of concept, backed by a brand new, state-of-the-art, in-house developed ML model, to your management. They are beyond excited! Instead of that expensive team that talks in weird mathematics, you are now the rockstars of the company. They start talking about putting the POC into production as soon as possible. Management talk for: “It should have been integrated yesterday”. On top of that, you need to expand your team and develop even more proofs of concept for other business domains.


Be careful… if you say “Yes” and don’t yet have an MLOps strategy backed by the right team and budget, you might quickly have a tough row to hoe.


Scaling without MLOps

Let’s look at a few pitfalls we’ve encountered in our MLOps advisory services and ML-in-production projects when AI is scaled without the right team, budget and MLOps practices.

  • The data set you used for the proof of concept was a one-time dump hacked together by an analyst in the business, and it can’t easily be refreshed because the data pipeline, owned by the data engineering team, is not yet fully available.
  • The front-end team needs to integrate your ML model. It takes two months to convince their architecture team and get the change request into a sprint, and then it turns out you underestimated the number of requests per second. Your ML serving is too slow and can’t scale up without expensive hardware that needs to be approved and installed.
  • Your ML model runs in production for a few months, but you start getting more and more complaints that it’s generating unexpected results. You start firefighting and notice that the data has drifted over time due to new types of customers and other business process changes.
  • You manage to integrate your ML model into an internal application, including labelling and feedback functionality so the ML model can be continuously improved. You get less feedback than expected, incorrect labels and a call from HR: people are concerned their job is going to be fully automated.


You need a multidisciplinary team

First of all, make sure you work with a multidisciplinary team. Involve all the stakeholders that will be affected by AI, including technical stakeholders such as security, IT architecture, data engineering and the integration/application teams. Their input will help you avoid data and integration issues, and you will get enough priority in the change request planning.


Additionally, you need to involve the actual users, customers and the teams responsible for the business process and compliance. Users trust AI, and often become its best ambassadors, if they are genuinely involved and understand it’s not a robot or black box that will replace all their work.


Some of our customers work with a flexible multidisciplinary team for every proof of concept by default. Others start involving more stakeholders as soon as the proof of concept seems feasible from an ML perspective.

Reduce risks

From a legal perspective, AI must be implemented in a safe, consistent and ethical way.

If you are having trouble securing budget, mention to your manager that MLOps will ensure legal compliance by: 

  • Versioning of data sets, configuration/training parameters and ML models
  • Data monitoring, including schema, data quality and data drift monitoring, so the risk of unexpected effects is minimized
  • ML model monitoring: is the accuracy still acceptable?
  • ML serving monitoring: secure request and response logging and latency monitoring

MLOps frameworks and the existing platform monitoring solutions include tools to do this at scale.
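
As an illustration, here is a minimal sketch of what a data drift check could look like, assuming tabular features in pandas DataFrames and using a per-feature Kolmogorov-Smirnov test; MLOps platforms typically offer this kind of monitoring out of the box.

```python
# Minimal sketch of per-feature data drift monitoring (illustrative only).
# Assumes the training reference and a production batch are pandas DataFrames
# with the same numeric columns; real MLOps tooling usually provides this.
import pandas as pd
from scipy.stats import ks_2samp

def detect_drift(reference: pd.DataFrame, production: pd.DataFrame,
                 p_threshold: float = 0.01) -> dict:
    """Flag features whose production distribution deviates from the training reference."""
    drifted = {}
    for column in reference.columns:
        # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
        # production data no longer follows the training distribution.
        statistic, p_value = ks_2samp(reference[column], production[column])
        if p_value < p_threshold:
            drifted[column] = {"ks_statistic": statistic, "p_value": p_value}
    return drifted

# Example usage: alert (or trigger retraining) when any feature has drifted.
# drifted = detect_drift(training_df, last_week_df)
# if drifted:
#     print("Data drift detected:", drifted)
```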


Legal requirements are often easier to get approved than technical requirements that are harder for the business to understand. A good example is GDPR: you need to respond to any data request by a user within one month, including explaining why an ML model returned a specific result.
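
To make that concrete, the sketch below shows one way secure request and response logging could support such a request. The record fields and JSON-lines storage are hypothetical choices for this example; ML serving frameworks usually capture this automatically.

```python
# Illustrative sketch: log every prediction with enough context to answer a
# data request later (which model and data version produced which result).
# The record fields and the JSON-lines file are hypothetical example choices.
import json
from datetime import datetime, timezone

def log_prediction(user_id: str, features: dict, prediction, model_version: str,
                   dataset_version: str, log_path: str = "predictions.jsonl") -> None:
    """Append one prediction record; prediction should be a plain JSON-serialisable value."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "features": features,            # the exact inputs the model saw
        "prediction": prediction,        # the result that was returned
        "model_version": model_version,  # to reproduce and explain the result later
        "dataset_version": dataset_version,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

def predictions_for_user(user_id: str, log_path: str = "predictions.jsonl") -> list:
    """Collect every logged prediction for one user, e.g. to answer a GDPR request."""
    with open(log_path) as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["user_id"] == user_id]
```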

Efficiency improvements for your team

On top of reducing risk, you can also explain that introducing MLOps principles will increase the efficiency of your team. MLOps tools offer plenty of functionality to split an ML pipeline into reusable components, and these components can be shared across the data science team.
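
As a small illustration, the sketch below composes a shared preprocessing component with a model step using scikit-learn; the shared_preprocessing helper is a hypothetical example of a component a team could version and reuse across projects.

```python
# Illustrative sketch of a reusable pipeline component shared across a team.
# The shared_preprocessing helper is hypothetical; in practice such components
# live in an internal package or an MLOps framework's component registry.
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def shared_preprocessing(numeric_cols, categorical_cols) -> ColumnTransformer:
    """Team-wide preprocessing component: scale numerics, one-hot encode categoricals."""
    return ColumnTransformer([
        ("numeric", StandardScaler(), numeric_cols),
        ("categorical", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
    ])

def build_pipeline(numeric_cols, categorical_cols) -> Pipeline:
    """Compose the shared component with a model; swap the estimator per use case."""
    return Pipeline([
        ("preprocess", shared_preprocessing(numeric_cols, categorical_cols)),
        ("model", LogisticRegression(max_iter=1000)),
    ])

# Example usage:
# pipeline = build_pipeline(["age", "income"], ["segment"])
# pipeline.fit(X_train, y_train)
```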


Working with known components and adding the right validation steps to the pipeline before it’s released to production also contributes to psychological safety: people won’t be afraid to experiment and release code into the wild (aka production). See “Resilience Engineering and DevOps – A Deeper Dive” for more information about this critical benefit.


Summary

We hope we’ve provided you with some conversation starters to “sell” MLOps in your organisation. Do remember that the goal of AI, as with any other IT project, is to generate value for your organisation, so be careful not to over-engineer MLOps and lose track of this goal.


More questions on MLOps? Join our free Q&A Session - Register Here

This post is part of a series of blog posts on the topic of MLOps. In this series, we explain why and how companies can adopt MLOps practices to unlock the business value of AI. Find the other content here.



Want to learn more?

Let’s have a chat.
Contact us