Krishna is a Senior Data Engineer with 4+ years of experience in data and software engineering, specializing in Gen AI tool development and LLM integration. He has led a team of 12 Data Engineers and executed large-scale platform migrations.
Led a team of 12 Data Engineers to build Gen AI tools.
Successfully migrated 240 projects from a commercial DataOps platform to an in-house GitLab + DBT platform.
Developed ETL pipelines and a data lake for Nestle, supporting their carbon-neutral goal by 2035.
Implemented fault-tolerant data solutions for startups and MNCs using Python and cloud platforms.
Built an AI/LLM-powered tool for Anti-Money Laundering (AML) policy checks, utilizing GPT-3.5 and GPT-4.
Developed a central data catalog with data quality metrics and Row Level Security.
Overview: Developed a tool to automate AML policy checks for clients, simplifying manual processes.
Responsibilities:
Built the core tool, processing policy documents and generating compliance reports.
Designed and implemented a RAG pipeline to handle large volumes of documents.
Utilized GPT-3.5 and GPT-4 Azure deployments hosted in the EU, ensuring GDPR compliance.
Key outcomes:
Simplified manual AML policy checks for clients.
Implemented a RAG pipeline for processing extensive policy documents.
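The retrieval side of such a RAG pipeline can be sketched roughly as follows. This is a minimal, illustrative stand-in: the chunking sizes, function names, and word-overlap scoring are assumptions for demonstration, and a production pipeline would use embeddings for retrieval and an Azure-hosted GPT-3.5/GPT-4 deployment for the generation step.

```python
# Minimal sketch of the document side of a RAG pipeline for policy checks.
# All names and parameters here are illustrative, not the production design.

def chunk_text(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split a policy document into overlapping word-window chunks."""
    words = text.split()
    chunks = []
    step = size - overlap
    for start in range(0, len(words), step):
        chunk = words[start:start + size]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + size >= len(words):
            break
    return chunks

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by word overlap with the query (stand-in for embeddings)."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

policy = ("Customers must complete identity verification before account opening. "
          "Transactions above the reporting threshold require enhanced due diligence.")
chunks = chunk_text(policy, size=12, overlap=4)
context = retrieve("What is required before account opening?", chunks, k=2)
# `context` would then be concatenated into the LLM prompt
# that generates the compliance report.
```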
Overview: Built finance data products in Snowflake, integrating data from SAP cubes and static files.
Responsibilities:
Developed data products as a Snowflake Developer.
Utilized DataOps.live (DBT) for building and managing data products.
Integrated data from SAP cubes via Talend jobs and various seed files.
Key outcomes:
Built robust finance data products in Snowflake.
Supported business users with data queries and ad-hoc changes.
Overview: Migrated approximately 240 projects from the commercial DataOps.live platform to an in-house GitLab + DBT platform.
Responsibilities:
Led the migration as a Snowflake Developer.
Collaborated with project-specific tech leads to understand migration scope and requirements.
Key outcomes:
Successfully migrated ~240 projects from DataOps.live to an in-house platform.
Standardized code structure and configured data governance tools.
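A standardized project layout of the kind such a migration typically enforces could look like the `dbt_project.yml` fragment below. The project name, paths, and schema names are hypothetical, for illustration only, and are not taken from the actual in-house platform.

```yaml
# Hypothetical dbt_project.yml illustrating a standardized layout.
name: finance_data_products
version: "1.0.0"
profile: in_house_snowflake

model-paths: ["models"]
seed-paths: ["seeds"]
test-paths: ["tests"]

models:
  finance_data_products:
    staging:
      +materialized: view
    marts:
      +materialized: table
```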
Overview: Developed ETL pipelines and a data lake to support a PowerBI dashboard for business users, contributing to Nestle's carbon-neutral goal by 2035.
Responsibilities:
Built ETL pipelines as a Databricks Developer.
Implemented pipelines in Azure Data Factory and Databricks Notebooks.
Key outcomes:
Developed ETL pipelines and a data lake to support a PowerBI dashboard.
Contributed to Nestle's carbon-neutral goal by 2035.
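The kind of transformation step such an ETL notebook might perform, before the data lands in a dashboard-facing table, can be sketched in plain Python. The field names (`site`, `co2_tonnes`) and the aggregation are hypothetical stand-ins, not the actual pipeline logic, which ran in Azure Data Factory and Databricks.

```python
# Pure-Python stand-in for a Databricks notebook transformation:
# aggregate raw emissions rows per site for a dashboard table.
# Field names are hypothetical.

def aggregate_emissions(rows: list[dict]) -> dict[str, float]:
    """Sum CO2 tonnage per site from raw report rows."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["site"]] = totals.get(row["site"], 0.0) + row["co2_tonnes"]
    return totals

raw = [
    {"site": "plant_a", "co2_tonnes": 12.5},
    {"site": "plant_b", "co2_tonnes": 7.0},
    {"site": "plant_a", "co2_tonnes": 4.5},
]
summary = aggregate_emissions(raw)
# summary == {"plant_a": 17.0, "plant_b": 7.0}
```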
Overview: Built a central data catalog for the entire organization's data, enhancing discoverability and governance.
Responsibilities:
Worked as an Azure Data Engineer to build the data catalog.
Developed Python/SQL scripts to fetch metadata from various data sources.
Key outcomes:
Built a central data catalog for organization-wide data.
Implemented data quality metrics and Row Level Security.
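Metadata harvesting for a catalog like this can be sketched as below, using SQLite as a stand-in source so the example is self-contained. The actual scripts targeted Azure data sources; the table and field names here are illustrative assumptions.

```python
# Minimal sketch of metadata harvesting for a central data catalog.
# SQLite stands in for the real Azure sources; names are illustrative.

import sqlite3

def fetch_metadata(conn: sqlite3.Connection) -> list[dict]:
    """Collect table and column metadata into flat catalog entries."""
    entries = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
        for cid, col, ctype, notnull, default, pk in conn.execute(
            f"PRAGMA table_info({table})"
        ):
            entries.append({
                "table": table,
                "column": col,
                "type": ctype,
                "nullable": not notnull,
                "primary_key": bool(pk),
            })
    return entries

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount REAL NOT NULL)"
)
catalog = fetch_metadata(conn)
```

Entries in this shape can then be enriched with data quality metrics and access-control tags (e.g. for Row Level Security) before being loaded into the catalog store.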