MLOps, short for machine learning operations, aims to design machine learning models suited to deployment in production, and then to maintain them throughout their life cycle.
What is MLOps?
MLOps aims to design and maintain machine learning models that can be used in the field. Like DevOps for applications, it involves mastering their entire life cycle. The goal? To take deployment constraints into account right from the model design and training stage.
Following the logic of agile methods, MLOps takes shape through the implementation of learning pipelines combined with model monitoring tools.
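To make this concrete, here is a minimal sketch of a learning pipeline with a rudimentary monitoring hook, using scikit-learn; the toy dataset and the accuracy threshold are assumptions chosen for the example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for production training data.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A learning pipeline: preprocessing and model are trained and shipped together.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipeline.fit(X_train, y_train)

# A crude monitoring hook: alert when held-out accuracy drops below a threshold.
ACCURACY_FLOOR = 0.85  # assumption: threshold chosen for the example
accuracy = pipeline.score(X_test, y_test)
if accuracy < ACCURACY_FLOOR:
    print(f"ALERT: accuracy {accuracy:.2f} below floor, trigger retraining")
```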
The MLOps engineer is the central figure of this practice. This emerging profession is a cross between the data scientist (a specialist in data science) and the data engineer.
What are the building blocks of MLOps?
MLOps involves putting in place several building blocks that together cover the entire machine learning cycle:
- A store of reusable models (model store),
- A store of reusable features (feature store),
- A continuous integration and delivery tool (CI/CD),
- A model monitoring and traceability tool,
- A collaborative environment.
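As a deliberately simplified illustration of the first two blocks, here is a hypothetical in-memory feature store and model store; real systems such as Feast or the MLflow model registry are far richer, and the class and method names below are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class FeatureStore:
    """Hypothetical in-memory feature store: named, reusable feature sets."""
    _features: Dict[str, Any] = field(default_factory=dict)

    def register(self, name: str, values: Any) -> None:
        self._features[name] = values

    def get(self, name: str) -> Any:
        return self._features[name]

@dataclass
class ModelStore:
    """Hypothetical in-memory model store: versioned, reusable models."""
    _models: Dict[str, Dict[int, Any]] = field(default_factory=dict)

    def publish(self, name: str, model: Any) -> int:
        versions = self._models.setdefault(name, {})
        version = max(versions, default=0) + 1
        versions[version] = model
        return version

    def load(self, name: str, version: int) -> Any:
        return self._models[name][version]

# Usage: features and models become named artefacts any pipeline can reuse.
features = FeatureStore()
features.register("customer_age", [34, 51, 27])
models = ModelStore()
v = models.publish("churn-classifier", object())  # a trained model in practice
model = models.load("churn-classifier", version=v)
```

The design point of both stores is the same: features and models become named, versioned artefacts that any pipeline can look up instead of recomputing or retraining them.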
What are the MLOps tools?
Among the major tools of MLOps, we can mention:
- Dataiku (proprietary application),
- DataRobot (proprietary application),
- Domino Data (proprietary application),
- Kubeflow (open source application created by Google),
- Metaflow (open source application),
- MLflow (open source application).
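As a taste of how these tools are used in practice, here is a minimal experiment-tracking sketch with MLflow; the model, parameter value and metric are arbitrary choices for the example, and by default the run is written to a local mlruns/ directory.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="baseline"):
    n_estimators = 100  # arbitrary value for the example
    model = RandomForestClassifier(n_estimators=n_estimators).fit(X, y)
    mlflow.log_param("n_estimators", n_estimators)          # experiment traceability
    mlflow.log_metric("train_accuracy", model.score(X, y))  # monitoring input
    mlflow.sklearn.log_model(model, "model")                # stores the model artefact
```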

On the cloud provider side, AWS, Google Cloud and Microsoft Azure all integrate the MLOps dimension into their respective machine learning platforms: Amazon SageMaker for the first, Vertex AI for the second and Azure Machine Learning for the third.
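By way of illustration on the cloud side, here is a sketch of launching a managed training job with the SageMaker Python SDK; the IAM role ARN, the train.py script and the S3 paths are placeholders you would replace with your own.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

# Managed training job: SageMaker runs train.py on the requested instance.
estimator = SKLearn(
    entry_point="train.py",              # placeholder training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/train"})  # placeholder S3 training data

# Expose the trained model as a managed prediction endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```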
How to train in MLOps?
Several MLOps training modules are offered online and in science faculties or engineering schools. The MLOps engineer is above all a data specialist, and training as a data scientist is the main route into the profession. They must also master the rules of programming and software engineering.
MLOps vs. DevOps
DevOps, a contraction of Development (Dev) and Operations (Ops), combines two essential functions: application development and systems engineering. The challenge is to take deployment constraints into account right from the programming phase and thereby improve the quality of the finished product. MLOps stems from DevOps, but responds more specifically to the needs of machine learning-oriented applications.
For further training on MLOps, we recommend the online course MLOps Level 1 by TowardsDataScience:
Characteristics of MLOps level 1
- Quick experimentation: The steps that make up the experimentation pipeline are fully automated and orchestrated. Automating and orchestrating this pipeline lets data scientists experiment with different models and data quickly. It also eases the transition of the machine learning pipeline from development/testing to production.
- Continuous training of the model in production: The model used in production is automatically retrained on new data, based on triggers.
- Machine learning and system operation symmetry: The machine learning pipeline used in development/testing and the one used in production are symmetrical. This is a key component of MLOps practice for integrating the DevOps philosophy.
- Modular code for components and pipelines: Components inside machine learning pipelines should be reusable, composable and shareable across machine learning pipelines (see the sketch after this list). Some components, such as exploratory data analysis (EDA), can still live inside notebooks, but the others must be modular. Ideally, components should also be containerised.
- Continuous delivery of models: In production, the machine learning pipeline should continuously deploy prediction services built from new models trained on fresh data. The pipeline that deploys the trained and validated model as a prediction service must be automated.
- Deployment of the pipeline: In MLOps level 0, only the trained model artefact was deployed to production as a prediction service. In MLOps level 1, the whole training pipeline is deployed, so the model serving predictions in production is always trained on the freshest data.
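To tie these characteristics together, the sketch below chains modular components into a single automated pipeline with a time-based retraining trigger and a validation gate before deployment; the trigger interval, the quality threshold and the deploy step are simplified assumptions for the example.

```python
from datetime import datetime, timedelta
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Modular, reusable components, as the "Modular code" point recommends.
def extract_data():
    # Stand-in for pulling fresh production data.
    return make_classification(n_samples=500, random_state=0)

def train(X, y):
    return LogisticRegression().fit(X, y)

def validate(X, y):
    return cross_val_score(LogisticRegression(), X, y, cv=5).mean()

def deploy(model):
    # Stand-in for the automated deployment of the prediction service.
    print("deployed new model as prediction service")

def run_pipeline(last_run, min_interval=timedelta(days=1), quality_floor=0.8):
    # Trigger: retrain only when enough time has passed (assumed condition).
    if datetime.now() - last_run < min_interval:
        return
    X, y = extract_data()
    model = train(X, y)
    # Quality gate before the model reaches production (threshold is arbitrary).
    if validate(X, y) >= quality_floor:
        deploy(model)

run_pipeline(last_run=datetime.now() - timedelta(days=2))
```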
ABOUT LONDON DATA CONSULTING (LDC)
We, at London Data Consulting (LDC), provide all sorts of Data Solutions. This includes Data Science (AI/ML/NLP), Data Engineering, Data Architecture, Data Analysis, CRM & Leads Generation, Business Intelligence and Cloud solutions (AWS/GCP/Azure).
For more information about our range of services, please visit: https://london-data-consulting.com/services
Interested in working for London Data Consulting? Please visit our careers page at https://london-data-consulting.com/careers