Anshuman is a Senior MLOps Engineer with 5+ years of experience in deploying and monitoring ML models in production environments across Azure and AWS. He has a proven track record of implementing CI/CD pipelines and optimizing ML pipeline performance.
Reduced ML model deployment time by 40% using Azure DevOps and Jenkins.
Cut infrastructure costs by 30% by developing serverless architectures on AWS.
Improved data preparation time by 50% and model training speed by 30% using Databricks and PySpark.
Increased deployment efficiency by 60% through automation of ML pipelines with Jenkins, Docker, and Terraform.
Reduced model serving latency by 30% by deploying models as REST APIs on Azure Kubernetes Service (AKS).
Overview: This project involved designing and implementing a scalable MLOps pipeline on Azure, enabling seamless integration and deployment of ML models into production.
Responsibilities:
Automated model training and deployment processes using Azure DevOps, reducing deployment time by 40%.
Leveraged Databricks and PySpark for efficient data processing and model training on large datasets.
Implemented CI/CD pipelines with Jenkins and Azure DevOps, ensuring continuous integration and robust deployment.
Managed Azure Kubernetes Service (AKS) clusters for containerized applications, optimizing resource allocation and ensuring high availability.
Key outcomes:
Reduced deployment time by 40% through automation.
Ensured continuous integration and robust deployment processes.
Optimized resource allocation and ensured high availability through Kubernetes management.
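Pipelines like this typically gate promotion of a newly trained model on its evaluation metrics before the release stage runs. A minimal sketch of such a promotion gate in Python; the function name, metric, and thresholds are illustrative assumptions, not details from the original project:

```python
def should_promote(new_metrics: dict, prod_metrics: dict,
                   min_auc: float = 0.75, max_regression: float = 0.02) -> bool:
    """Decide whether a candidate model may replace the production model.

    Promote only if the candidate clears an absolute AUC floor and does
    not regress more than `max_regression` against the current model.
    """
    candidate_auc = new_metrics["auc"]
    if candidate_auc < min_auc:
        return False
    return candidate_auc >= prod_metrics["auc"] - max_regression


# Candidate slightly below production but within tolerance: promote.
print(should_promote({"auc": 0.83}, {"auc": 0.84}))  # True
```

In an Azure DevOps pipeline, a check like this would run as a script step whose non-zero exit blocks the deployment stage.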
Overview: This project focused on developing a serverless architecture using AWS Lambda, S3, and Glue to support real-time data processing and ML model inference.
Responsibilities:
Automated the deployment of ML models using AWS Lambda functions, reducing infrastructure costs by 30%.
Integrated CI/CD pipelines with Jenkins for automated testing and deployment of ML models in a serverless environment.
Key outcomes:
Reduced infrastructure costs by 30% through serverless deployment.
Ensured consistency across development, testing, and production environments using Docker.
Enhanced agility and reliability of ML deployments by implementing DevOps practices.
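A serverless inference endpoint of this kind centers on a Lambda handler that parses the API Gateway event body and returns a JSON response. A hedged sketch, where the scoring function is a stub standing in for the real model artifact:

```python
import json


def _score(features: list) -> float:
    # Stub model: a fixed linear scoring rule standing in for the real artifact,
    # which in production would be loaded once outside the handler.
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))


def lambda_handler(event: dict, context=None) -> dict:
    """Entry point AWS Lambda invokes for each API Gateway proxy request."""
    try:
        payload = json.loads(event.get("body") or "{}")
        features = payload["features"]
    except (json.JSONDecodeError, KeyError):
        return {"statusCode": 400, "body": json.dumps({"error": "bad request"})}
    return {"statusCode": 200, "body": json.dumps({"score": _score(features)})}


# Local invocation with a synthetic API Gateway event.
resp = lambda_handler({"body": json.dumps({"features": [1.0, 2.0]})})
print(resp["statusCode"])  # 200
```

Keeping model loading outside the handler is what amortizes cold-start cost across warm invocations.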
Overview: This project involved designing and developing an end-to-end machine learning pipeline using Databricks and Apache Spark for a financial services client.
Responsibilities:
Automated ETL processes to ingest and preprocess data from multiple sources, reducing data preparation time by 50%.
Implemented a model training and deployment pipeline that integrates seamlessly with Azure services for monitoring and scaling.
Set up GitLab CI/CD for continuous integration and deployment, enabling rapid iteration and minimizing downtime.
Optimized PySpark jobs to run on Databricks clusters, improving model training speed by 30%.
Key outcomes:
Reduced data preparation time by 50% through automated ETL processes.
Improved model training speed by 30% by optimizing PySpark jobs.
Enabled rapid iteration and minimized downtime using GitLab CI/CD.
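One common lever when tuning PySpark jobs of this kind is sizing `spark.sql.shuffle.partitions` to the data instead of leaving Spark's default of 200. A simplified, pure-Python sketch of that sizing heuristic; the 128 MiB target is a widely used rule of thumb, not a figure from this project:

```python
def suggest_shuffle_partitions(input_bytes: int,
                               target_partition_bytes: int = 128 * 1024 * 1024,
                               min_partitions: int = 8) -> int:
    """Suggest a shuffle partition count so each partition lands near the target size."""
    needed = -(-input_bytes // target_partition_bytes)  # ceiling division
    return max(min_partitions, needed)


# A ~100 GiB shuffle maps to 800 partitions of ~128 MiB each.
print(suggest_shuffle_partitions(100 * 1024**3))  # 800
```

The result would then be applied on the cluster with `spark.conf.set("spark.sql.shuffle.partitions", n)` before the wide transformations run.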
Overview: This project focused on leading the deployment of AI models using Kubernetes on Amazon EKS, ensuring scalable and reliable deployments for predictive analytics solutions.
Responsibilities:
Automated the entire ML pipeline using Jenkins, Docker, and Terraform, reducing manual intervention and increasing deployment efficiency by 60%.
Configured Kubernetes clusters to manage containerized workloads and ensure high availability and fault tolerance.
Key outcomes:
Increased deployment efficiency by 60% and reduced manual intervention.
Ensured high availability and fault tolerance of containerized workloads.
Ensured compatibility and consistency of TensorFlow models across environments.
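On Kubernetes, high availability and fault tolerance usually come down to a few Deployment settings: multiple replicas, a rolling-update budget, and liveness/readiness probes. A sketch that builds such a manifest as a Python dict (image, ports, and probe paths are placeholder assumptions, not the project's actual Terraform or Jenkins configuration):

```python
def model_deployment(name: str, image: str, replicas: int = 3) -> dict:
    """Build a Kubernetes Deployment manifest with basic HA settings."""
    container = {
        "name": name,
        "image": image,
        "ports": [{"containerPort": 8080}],
        # Probes let Kubernetes restart dead pods and route traffic only to ready ones.
        "livenessProbe": {"httpGet": {"path": "/healthz", "port": 8080}},
        "readinessProbe": {"httpGet": {"path": "/ready", "port": 8080}},
    }
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            # Keep at most one pod down during a rolling update.
            "strategy": {"type": "RollingUpdate",
                         "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1}},
            "template": {"metadata": {"labels": {"app": name}},
                         "spec": {"containers": [container]}},
        },
    }


manifest = model_deployment("scoring-api", "registry.example.com/scoring-api:1.0")
print(manifest["spec"]["replicas"])  # 3
```

Serialized to YAML or JSON, the same structure is what a Terraform `kubernetes_deployment` resource or a `kubectl apply` step in Jenkins would submit.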
Overview: This project involved developing a hybrid cloud data pipeline integrating Azure and AWS to support large-scale predictive analytics models.
Responsibilities:
Automated ETL processes using Apache Airflow and Terraform to manage infrastructure as code, improving deployment speed by 50%.
Deployed machine learning models in a multi-cloud environment to leverage the strengths of both Azure and AWS.
Key outcomes:
Improved deployment speed by 50% using Apache Airflow and Terraform.
Streamlined deployment process and reduced release times with CI/CD pipelines.
Optimized model performance and data processing workflows through collaboration.
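The orchestration described here hinges on Airflow resolving task dependencies into an execution order before running the ETL. A minimal pure-Python stand-in for that scheduling step, using the standard library's topological sorter; the task names are illustrative, and a real Airflow DAG would declare the same edges with the `>>` operator:

```python
from graphlib import TopologicalSorter

# Task dependency graph: each task maps to the tasks it depends on.
deps = {
    "extract_azure": [],
    "extract_aws": [],
    "transform": ["extract_azure", "extract_aws"],
    "load_warehouse": ["transform"],
    "refresh_model_features": ["load_warehouse"],
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order[-1])  # refresh_model_features
```

Fanning in two extract tasks before a single transform is what lets the Azure and AWS sides of the pipeline run in parallel.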