A picture is worth a thousand words, they say. In this article, I will show you three key MLOps architecture frameworks that you can implement on the cloud. Let’s begin!
MLOps is an emerging field that has gained a lot of popularity with data scientists, ML engineers, and AI enthusiasts.
It is also known as ModelOps, ML DevOps, and ML CI/CD.
MLOps is the application of DevOps approaches to machine learning.
Essentially, it enables the efficient movement of ML models from development to production and management throughout their life cycles.
Let’s think for a moment about what a common MLOps system architecture would look like.
Well, as you might guess, it will certainly include data science platforms and analytical engines.
You would use data science platforms to build your models. An analytical engine, on the other hand, would help you perform computations.
In addition, you will need an orchestration tool for end-to-end model life cycle management.
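To make this concrete, here is a minimal sketch of the stages an orchestration tool stitches together: data preparation, training, and evaluation. Every function name here is illustrative and not tied to any specific platform, and the "model" is deliberately trivial (just a mean) to keep the focus on the pipeline shape.

```python
# A toy end-to-end pipeline: the orchestrator's job is to chain these steps.
# All names are hypothetical; real platforms wrap each step in a managed task.

def prepare_data(raw):
    """Data science platform step: drop missing values and split 80/20."""
    cleaned = [x for x in raw if x is not None]
    split = int(len(cleaned) * 0.8)
    return cleaned[:split], cleaned[split:]

def train_model(train_set):
    """Analytical engine step: 'training' here just computes the mean."""
    return sum(train_set) / len(train_set)

def evaluate_model(model, test_set):
    """Return a toy metric: mean absolute deviation from the model value."""
    return sum(abs(x - model) for x in test_set) / len(test_set)

def run_pipeline(raw):
    """The orchestrator: run the steps end to end and return the results."""
    train_set, test_set = prepare_data(raw)
    model = train_model(train_set)
    error = evaluate_model(model, test_set)
    return model, error

model, error = run_pipeline([1.0, 2.0, None, 3.0, 4.0, 5.0])
```

In a real system, each of these functions would be a separately deployed, monitored step, and the orchestrator would handle retries, scheduling, and artifact passing between them.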
Let’s now see what a cloud-based implementation of MLOps would look like.
Cloud-based MLOps architecture as a guide for implementation
Major cloud providers like Google, Microsoft, and Amazon have already come up with reference MLOps architectures to help organizations with their implementations.
Notice that these architectures look conceptually the same; it is mostly the technologies they use that differ.
In fact, they have a lot in common:
- They automate the execution of the ML pipeline to retrain new models. This usually happens when new data arrives, so the models can capture any emerging patterns.
- They build a continuous delivery pipeline to deploy new implementations of the entire ML pipeline.
- You can trigger your pipeline on demand and/or on a schedule: for example, when new data becomes available, when model performance degrades, or when the statistical properties of the data have changed significantly. We can extend the list.
- The availability of a new implementation of the ML pipeline (including new model architecture, feature engineering, and hyperparameters) is another important trigger to re-execute the ML pipeline.
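The trigger conditions above can be sketched as a simple decision function. The thresholds and parameter names below are made up for illustration; real systems tune them per model and typically compute drift with a dedicated monitoring service.

```python
def should_retrain(new_data_available, accuracy, drift_score,
                   new_pipeline_version=False,
                   accuracy_floor=0.9, drift_threshold=0.2):
    """Decide whether to re-execute the ML pipeline.

    Thresholds are illustrative placeholders, not recommended values.
    """
    if new_pipeline_version:           # new implementation of the pipeline itself
        return True
    if new_data_available:             # fresh data may hold emerging patterns
        return True
    if accuracy < accuracy_floor:      # model performance has degraded
        return True
    if drift_score > drift_threshold:  # data statistics changed significantly
        return True
    return False
```

For example, `should_retrain(False, 0.95, 0.05)` returns `False` (nothing has changed), while `should_retrain(False, 0.85, 0.05)` returns `True` because accuracy has dropped below the floor.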
Remember, MLOps is a framework that sets out principles and guidelines for achieving scalable AI/ML development across an organization.
As you know by now, you could implement your MLOps architecture using one of the cloud-based frameworks above.
Or you can come up with your own custom implementation, as long as you stick to the same underlying principles you’ve seen.
Last but not least…
Feel free to check out MLflow, a popular open-source platform that is widely used for MLOps.
It supports integrations with many popular technologies and tools, such as TensorFlow, Spark, Databricks, and Kubernetes.
Also, many renowned companies, including Facebook, Microsoft, and Databricks, use and contribute to MLflow.