Abhishek is a Senior MLOps Engineer with 6+ years of experience in multi-cloud environments, specializing in MLOps and data engineering. He has a proven track record of optimizing machine learning model performance and building automated CI/CD pipelines.
Improved fraud detection accuracy by 35% through automated model training, deployment, and monitoring pipelines.
Reduced time to production by 50% with automated CI/CD pipelines.
Enhanced data ingestion efficiency by 40% for a serverless ML platform.
Increased deployment frequency by 60%.
Overview: Developed a real-time fraud detection system leveraging Azure MLOps tooling and Databricks.
Responsibilities:
Implemented an automated MLOps pipeline on Azure for model training, deployment, and monitoring using Azure DevOps, reducing time to production by 50%.
Integrated MLflow for model tracking, versioning, and lifecycle management, ensuring traceability and compliance (see the sketch after the key outcomes below).
Deployed containerized machine learning models on Azure Kubernetes Service (AKS) for scalability and high availability.
Key outcomes:
Improved fraud detection accuracy by 35%.
Reduced time to production by 50%.
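As a concrete illustration of the MLflow integration described above, here is a minimal tracking-and-registration sketch; the experiment name, registered model name, and toy training data are assumptions for demonstration, not the production setup.

```python
# Minimal MLflow tracking/registration sketch (illustrative only; the
# experiment and model names below are hypothetical placeholders).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("fraud-detection")  # assumed experiment name

# Synthetic stand-in for the real fraud dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Track parameters and metrics for every training run.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)

    # Log and register the model so deployments reference a tracked version.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="fraud-detection-model",  # assumed registry name
    )
```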
Overview: Built a serverless machine learning platform on AWS for real-time model inference and data processing.
Responsibilities:
Automated data preprocessing and ETL pipelines using AWS Glue, enhancing data ingestion efficiency by 40%.
Deployed models using Amazon SageMaker and AWS Lambda, reducing infrastructure costs by 45% through a serverless architecture (an illustrative inference handler follows the key outcomes below).
Key outcomes:
Enhanced data ingestion efficiency by 40%.
Reduced infrastructure costs by 45%.
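As a sketch of the serverless inference path described above, the following Lambda handler forwards a JSON request to a SageMaker real-time endpoint via boto3; the endpoint name and payload schema are hypothetical, not the platform's actual configuration.

```python
# Illustrative AWS Lambda handler that proxies a JSON payload to a
# SageMaker real-time endpoint (endpoint name and schema are assumed).
import json

import boto3

ENDPOINT_NAME = "ml-platform-endpoint"  # hypothetical endpoint name
runtime = boto3.client("sagemaker-runtime")


def lambda_handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a string.
    payload = event.get("body", "{}")

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )
    # The endpoint response body is a stream; read and decode it.
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```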
Overview: Developed a multi-cloud data engineering and MLOps pipeline using Databricks, Apache Airflow, and Kubernetes.
Responsibilities:
Integrated data from Azure and AWS using Apache Airflow for ETL orchestration, reducing data pipeline latency by 55% (an illustrative DAG follows the key outcomes below).
Containerized machine learning models and deployed them on Kubernetes (Amazon EKS) for scalable, high-availability serving.
Key outcomes:
Reduced data pipeline latency by 55%.
Reduced manual interventions and deployment errors by 70%.
Optimized cloud resource usage by 40%.
Improved inference speed by 25% by optimizing containerized AI models.
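To make the orchestration concrete, here is a minimal Airflow 2.x DAG shape for the cross-cloud ETL described above; the DAG id, schedule, and task bodies are placeholders rather than the actual pipeline.

```python
# Illustrative Airflow DAG for cross-cloud ETL; task logic, DAG id, and
# schedule are placeholders, not the production pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_azure(**context):
    # Placeholder: pull source data from Azure (e.g., ADLS via a hook).
    print("extracting from Azure")


def extract_aws(**context):
    # Placeholder: pull source data from AWS (e.g., S3 via a hook).
    print("extracting from AWS")


def transform_and_load(**context):
    # Placeholder: join, clean, and load into the downstream store.
    print("transforming and loading")


with DAG(
    dag_id="multi_cloud_etl",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    azure = PythonOperator(task_id="extract_azure", python_callable=extract_azure)
    aws = PythonOperator(task_id="extract_aws", python_callable=extract_aws)
    load = PythonOperator(
        task_id="transform_and_load", python_callable=transform_and_load
    )

    # Both extracts must finish before the transform-and-load step runs.
    [azure, aws] >> load
```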