
Guide To MLflow – A Platform To Manage Machine Learning Lifecycle

MLflow is an open-source platform that enables the smooth organization of a machine learning project. It manages the machine learning lifecycle end to end: even if a project is built on a framework MLflow does not support out of the box, it provides an open interface through which that framework can be integrated easily. MLflow is said to be ‘library-agnostic’: since all of its functionality can be accessed through REST APIs and the CLI, any ML library and any programming language can be used while handling a project with MLflow.

Installation

pip install mlflow

Major components of MLflow

  1. MLflow Tracking

It allows an ML practitioner to capture the metadata required for an ML project regardless of the environment in which the project is trained and deployed. It is based on a concept called ‘runs’, where each run records the following pieces of information:

  • Code version
  • Start and end time of that run
  • Source that marks the starting point of the run
  • Parameters (basically key-value pairs)
  • Metrics used to trace the run
  • Artifacts (output files recorded for later visualization of previous runs)

To record the runs, MLflow Tracking provides REST, Python, R and Java APIs. Multiple runs performing a particular task can be organized into an ‘experiment’, created in one of two ways: with the mlflow experiments CLI or with the mlflow.create_experiment() Python API.

The recorded runs can then be queried using the MLflow API or the Tracking UI.
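
As a minimal sketch of querying runs programmatically (the experiment ID "0" denotes the default experiment and is used here only for illustration):

import mlflow

# returns a pandas DataFrame with one row per recorded run
runs = mlflow.search_runs(experiment_ids=["0"])
print(runs[["run_id", "status"]])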

For logging metrics, parameters and/or artifacts using the Tracking API, the associated library should first be imported in Python as:

import mlflow

Example of logging a parameter (a key-value pair; both key and value are stored as strings)

mlflow.log_param("parameter1", 8.26)

Example of logging a metric, say the accuracy of the model (a metric can be updated throughout the run)

mlflow.log_metric("acc", 0.9)

Example of logging an artifact i.e. an output file, say abc.txt

mlflow.log_artifact("abc.txt")

Wherever we run our program, the Tracking API by default records the corresponding data into a local directory ./mlruns. The Tracking UI can then be run using the command:

mlflow ui 

It can then be viewed at http://localhost:5000
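
Putting the above calls together, a minimal sketch of one complete tracked run might look as follows (the experiment name and the file contents are illustrative):

import mlflow

# group runs under a named experiment (created automatically if it does not exist)
mlflow.set_experiment("demo_experiment")

# everything logged inside this block belongs to a single run
with mlflow.start_run():
    mlflow.log_param("parameter1", 8.26)
    mlflow.log_metric("acc", 0.9)
    with open("abc.txt", "w") as f:
        f.write("sample artifact content")
    mlflow.log_artifact("abc.txt")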

  2. MLflow Projects

It provides a standardized format for packaging ML project code. A project is simply a directory or a Git repository; projects can be chained together into pipelines, creating a systematic ML workflow. A project may include an optional configuration file which specifies how to run the code and the library dependencies of the code (a sketch of such a file follows the list below).

Each project deployed using MLflow can have the following properties:

  • Name of the project
  • Entry point(s) which specify the code that can be executed within the project along with its associated parameters, if any.
  • Software environment that needs to be used for execution of the entry points.
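
As a sketch of such a configuration, the MLproject file (in YAML format) for the wine-quality example used below might look like this; the entry point and parameter shown are illustrative:

name: sklearn_elasticnet_wine

conda_env: conda.yaml

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
    command: "python train.py {alpha}"

The name, the software environment (here, a conda environment described in conda.yaml) and the entry points with their parameters correspond directly to the three properties listed above.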

A project can be run in one of two ways: with the mlflow run command-line tool, or programmatically with the mlflow.projects.run() Python API (a sketch of the latter follows the note below). For instance, using the CLI:

mlflow run sklearn_elasticnet_wine -P alpha=0.5
mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0

Note: when using the mlflow run command, all the required library dependencies are installed in a conda environment by default. To avoid that, execute the command with an additional --no-conda option as follows:

mlflow run sklearn_elasticnet_wine -P alpha=0.5 --no-conda
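
Besides the CLI, the same project can be launched programmatically; a minimal sketch using the Python API:

import mlflow

# equivalent of the first CLI call above, run against the GitHub example project
mlflow.projects.run(
    "https://github.com/mlflow/mlflow-example.git",
    parameters={"alpha": 0.5},
)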

  3. MLflow Models

It is a feature used for packaging ML models in several standard formats, each of which is known as a ‘flavour’. An MLflow Model is saved in the form of a directory, and each such directory comprises the following two things:

  • Arbitrary files containing the model’s data (for example, a pickled model)
  • One MLmodel descriptor file which lists the flavours in which that particular model can be used.

For example,

    time_created: 2020-01-19T13:41:44.02
    flavors:
      sklearn:
        sklearn_version: 0.19.1
        pickled_model: model.pkl
      python_function:
        loader_module: mlflow.sklearn
        pickled_model: model.pkl

If a model has the MLmodel descriptor file shown above, it can be used with any tool that supports either the sklearn or the python_function flavour. Such a model can be served locally using the mlflow models serve command:

 mlflow models serve -m my_first_model

(‘my_first_model’ in the above line of code is the name by which the model was saved)
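
Once serving, the model can be queried over REST at the /invocations endpoint. A sketch of such a request (the exact payload format depends on the MLflow version; MLflow 1.x accepted a pandas-split JSON payload, and the column name and value below are illustrative):

curl -X POST http://localhost:5000/invocations \
     -H 'Content-Type: application/json; format=pandas-split' \
     -d '{"columns": ["alcohol"], "data": [[12.8]]}'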

To deploy such models using AWS SageMaker, the mlflow sagemaker command-line tool can be used as follows:

mlflow sagemaker deploy -m my_first_model

Each MLflow Model can be saved and loaded in several ways. For instance, the save_model, log_model and load_model functions are available in mlflow.sklearn if we want to use sklearn models in our project.
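
As a minimal sketch of saving and re-loading a scikit-learn model (the directory name my_first_model matches the serving example above; the model here is unfitted purely for brevity):

import mlflow.sklearn
from sklearn.linear_model import ElasticNet

model = ElasticNet(alpha=0.5, l1_ratio=0.5)
# ... fit the model on training data here ...

# save the model as an MLflow Model directory on disk
mlflow.sklearn.save_model(model, "my_first_model")

# later, load it back for inference
loaded_model = mlflow.sklearn.load_model("my_first_model")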

MLflow Model Registry

It is the controlling component that manages the complete lifecycle of an MLflow Model. It is a collection of APIs and a UI that enable tracking a model’s history, versioning, annotations and stage transitions.

If we have our own MLflow server, the MLflow Model Registry can be accessed through the API or the UI, each with its own set of steps to follow.
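
As a sketch, a model logged during a run can be registered under a name with the mlflow.register_model API (the run ID placeholder and the registry name below are illustrative):

import mlflow

# register the model logged at runs:/<run_id>/model under a registry name
result = mlflow.register_model(
    "runs:/<run_id>/model",
    "wine_quality_model",
)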

Practical implementation

Following is example code for predicting wine quality from the UCI wine-quality dataset using sklearn’s ElasticNet. It is the training function behind the sklearn_elasticnet_wine example project used above, and it records its parameters, metrics and model with MLflow Tracking.

def train(input_alpha, input_l1_ratio):

    # import the required libraries
    import warnings
    import logging
    import numpy as np
    import pandas as pd
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import ElasticNet
    import mlflow
    import mlflow.sklearn

    logging.basicConfig(level=logging.WARN)
    logger = logging.getLogger(__name__)

    # define a function for computing the evaluation metrics:
    # root mean squared error, mean absolute error and R2 score
    def eval_metrics(actual, pred):
        rmse = np.sqrt(mean_squared_error(actual, pred))
        mae = mean_absolute_error(actual, pred)
        r2 = r2_score(actual, pred)
        return rmse, mae, r2

    # ignore warnings in the output, if any
    warnings.filterwarnings("ignore")
    np.random.seed(40)

    # Read the wine-quality csv file from the URL;
    # log an exception if the download fails
    csv_url = (
        'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv'
    )
    try:
        data = pd.read_csv(csv_url, sep=';')
    except Exception as e:
        logger.exception(
            "Unable to download training & test CSV, check your internet connection. Error: %s", e)

    # Split the data into training and test sets (train:test ratio of 3:1)
    train_set, test_set = train_test_split(data)

    # The label column is "quality". Form the features (x) and label (y)
    # columns for training and testing accordingly
    train_x = train_set.drop(["quality"], axis=1)
    test_x = test_set.drop(["quality"], axis=1)
    train_y = train_set[["quality"]]
    test_y = test_set[["quality"]]

    # Set default values if no alpha or l1_ratio is provided
    alpha = 0.5 if input_alpha is None else float(input_alpha)
    l1_ratio = 0.5 if input_l1_ratio is None else float(input_l1_ratio)

    # Start an MLflow run; useful for multiple runs
    # (only doing one run in this sample)
    with mlflow.start_run():

        # Fit the ElasticNet model
        lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
        lr.fit(train_x, train_y)

        # Evaluate the metrics on the test set
        pred = lr.predict(test_x)
        (rmse, mae, r2) = eval_metrics(test_y, pred)

        # Print out the metrics
        print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
        print("  RMSE: %s" % rmse)
        print("  MAE: %s" % mae)
        print("  R2: %s" % r2)

        # Record the parameters
        mlflow.log_param("alpha", alpha)
        mlflow.log_param("l1_ratio", l1_ratio)

        # Record the evaluation metrics
        mlflow.log_metric("rmse", rmse)
        mlflow.log_metric("r2", r2)
        mlflow.log_metric("mae", mae)

        # Record the model for later use and visualization
        mlflow.sklearn.log_model(lr, "model")

To execute the above lines of code, call the train() function with appropriate arguments.

For example,

train(0.5,0.5)

gives the output as follows:

 Elasticnet model (alpha=0.500000, l1_ratio=0.500000):
   RMSE: 0.82224284975954
   MAE: 0.6278761410160691
   R2: 0.12678721972772689 

Source: https://github.com/mlflow/mlflow/blob/master/examples/sklearn_elasticnet_wine/train.ipynb

For detailed information on the MLflow platform, refer to the links given below:

MLflow’s official documentation

GitHub repository
