Krishna  ·  Python / GenAI Data Engineer  ·  4+ yrs

Mid-Level
4+ years experience · Remote
Available within 48 hrs

Proof of scale

240 projects migrated
carbon-neutral goal by 2035
Built for
Nestle

About Krishna

Krishna is a Senior Data Engineer with 4+ years of experience in data and software engineering, specializing in Gen AI tool development and LLM integration. He has successfully led a team of 12 Data Engineers and executed large-scale migrations.

Skills (23)

4+ years of commercial experience in:

Azure · Flask · GPT-3.5 · Node.js · Python · SQL · Snowflake · dbt · GitLab CI · GPT-4 · Tableau · Talend · Monte Carlo · Collibra · Databricks · Azure Data Factory · SparkSQL · Logic Apps · Azure SQL · SAP BW · SAP HANA · ADLS · PySpark

Why hire Krishna?

Production deploy authority · Mentored 12 juniors · Led large-scale migrations

Led a team of 12 Data Engineers to build Gen AI tools.

Successfully migrated ~240 projects from the commercial DataOps.live platform to an in-house GitLab + dbt platform.

Developed ETL pipelines and a data lake for Nestle, supporting their carbon-neutral goal by 2035.

Implemented fault-tolerant solutions for startups and MNCs using data, Python, and cloud technologies.

Built an AI/LLM-powered tool for Anti-Money Laundering (AML) policy checks, utilizing GPT-3.5 and GPT-4.

Developed a central data catalog with data quality metrics and Row Level Security.

Project highlights (5)

Anti-Money Laundering Policy Check · LLM Developer

Overview: Developed a tool to automate AML policy checks for clients, simplifying manual processes.
Responsibilities: Built the core tool, processing policy documents and generating compliance reports. Designed and implemented a RAG pipeline to handle large volumes of documents. Utilized GPT-3.5 and GPT-4 Azure deployments hosted in the EU, ensuring GDPR compliance. An illustrative sketch of this pattern appears after the key outcomes below.

Python · Flask · GPT-3.5 · GPT-4 · Azure

Key outcomes:

  • Simplified manual AML policy checks for clients.

  • Implemented a RAG pipeline for processing extensive policy documents.
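
The RAG implementation itself is not part of this profile; for readers unfamiliar with the pattern, here is a minimal, hypothetical sketch of a RAG-style policy check against EU-hosted Azure OpenAI deployments, using the openai SDK and numpy. Deployment names, the in-memory index, and the prompt are illustrative assumptions, not Krishna's actual code.

```python
# Hypothetical sketch only: retrieve the most relevant policy chunks, then ask a
# GPT-4 Azure deployment to assess compliance. Names and prompts are assumptions.
import os
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # EU-hosted endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def embed(texts: list[str]) -> np.ndarray:
    """Embed text chunks with an Azure-deployed embedding model (name assumed)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def check_policy(question: str, chunks: list[str], top_k: int = 3) -> str:
    """Retrieve the top_k most similar policy chunks and ask GPT-4 about compliance."""
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every policy chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4",  # Azure deployment name is an assumption
        messages=[
            {"role": "system", "content": "You check client documents against AML policy."},
            {"role": "user", "content": f"Policy excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```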

Finance Data Products · Snowflake Developer

Overview: Built finance data products in Snowflake, integrating data from SAP cubes and static files.
Responsibilities: Developed data products as a Snowflake Developer. Utilized DataOps.live (dbt) for building and managing data products. Integrated data from SAP cubes via Talend jobs and various seed files. A rough illustration of the transformation shape appears after the key outcomes below.

Snowflake · dbt · SQL · Python · Tableau · Talend

Key outcomes:

  • Built robust finance data products in Snowflake.

  • Supported business users with data queries and ad-hoc changes.
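
In the actual project this logic lived in dbt models managed through DataOps.live; as a rough illustration of the transformation shape only, here is a hypothetical sketch using the Snowflake Python connector, with warehouse, schema, table, and column names invented for the example.

```python
# Hypothetical sketch: publish a finance data product by joining a SAP-sourced
# staging table with a static seed file. All object names are assumptions.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="FINANCE_WH",
    database="FINANCE",
)
try:
    # Rebuild the data product table that reporting tools such as Tableau query.
    conn.cursor().execute(
        """
        CREATE OR REPLACE TABLE MARTS.FCT_GL_BALANCES AS
        SELECT s.company_code,
               s.fiscal_period,
               m.cost_center_name,
               SUM(s.amount_local) AS amount_local
        FROM STAGING.SAP_GL_CUBE s
        LEFT JOIN SEEDS.COST_CENTER_MAP m
               ON s.cost_center = m.cost_center
        GROUP BY 1, 2, 3
        """
    )
finally:
    conn.close()
```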

DataOps.live to In-house Data Platform Migration · Snowflake Developer

Overview: Migrated approximately 240 projects from the commercial DataOps.live platform to an in-house GitLab + dbt platform.
Responsibilities: Led the migration as a Snowflake Developer. Collaborated with project-specific tech leads to understand migration scope and requirements. A hypothetical standardization sketch appears after the key outcomes below.

dbt · Snowflake · SQL · GitLab CI · Monte Carlo · Collibra

Key outcomes:

  • Successfully migrated ~240 projects from DataOps.live to an in-house platform.

  • Standardized code structure and configured data governance tools.
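
The profile does not include the migration tooling itself; the snippet below is a hypothetical example of the kind of standardization script such a migration might use, rewriting each repo's dbt_project.yml to a shared in-house layout. Paths, keys, and the profile name are assumptions.

```python
# Hypothetical helper for a DataOps.live -> GitLab + dbt migration: normalise
# dbt_project.yml across migrated repos. All names and paths are assumptions.
from pathlib import Path
import yaml

STANDARD_PROFILE = "inhouse_snowflake"

def standardise_project(repo_dir: Path) -> None:
    """Rewrite one repo's dbt_project.yml to the agreed in-house conventions."""
    project_file = repo_dir / "dbt_project.yml"
    config = yaml.safe_load(project_file.read_text())
    config["profile"] = STANDARD_PROFILE
    config["model-paths"] = ["models"]
    # Drop keys that only made sense on the old platform (illustrative).
    config.pop("dataops", None)
    project_file.write_text(yaml.safe_dump(config, sort_keys=False))

if __name__ == "__main__":
    # Assumed layout: every migrated repo checked out under ./migrated_repos/.
    for repo in Path("migrated_repos").iterdir():
        if (repo / "dbt_project.yml").exists():
            standardise_project(repo)
```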

Nestle Packaging Analytics Project · Databricks Developer

Overview: Developed ETL pipelines and a data lake to support a PowerBI dashboard for business users, contributing to Nestle's carbon-neutral goal by 2035.
Responsibilities: Worked as a Databricks Developer, building ETL pipelines. Implemented pipelines in Azure Data Factory and Databricks Notebooks. An illustrative pipeline-step sketch appears after the key outcomes below.

Azure · Databricks · Azure Data Factory · SparkSQL · Logic Apps · Azure SQL · SAP BW · SAP HANA · ADLS · Python · PySpark

Key outcomes:

  • Developed ETL pipelines and a data lake to support a PowerBI dashboard.

  • Contributed to Nestle's carbon-neutral goal by 2035.
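
As a rough illustration of one pipeline step described above (not the actual Nestle code), here is a hypothetical PySpark sketch that reads SAP BW extracts landed in ADLS, aggregates packaging material weights, and writes a Delta table for the dashboard layer; storage paths and column names are assumptions.

```python
# Hypothetical ETL step: raw SAP BW extracts in ADLS -> curated Delta table.
# Storage paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("packaging_analytics").getOrCreate()

# Raw extracts landed in the data lake (path assumed).
raw = spark.read.parquet(
    "abfss://raw@datalake.dfs.core.windows.net/sap_bw/packaging/"
)

# Aggregate packaging material weight per plant, material type, and year.
curated = (
    raw.withColumn("material_kg", F.col("material_grams") / 1000)
    .groupBy("plant", "material_type", "fiscal_year")
    .agg(F.sum("material_kg").alias("total_material_kg"))
)

# Write the curated Delta table the dashboard reads.
(
    curated.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@datalake.dfs.core.windows.net/packaging_summary/")
)
```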

Central Data Catalogue Project · Azure Data Engineer

Overview: Built a central data catalog for the entire organization's data, enhancing discoverability and governance.
Responsibilities: Worked as an Azure Data Engineer to build the data catalog. Developed Python/SQL scripts to fetch metadata from various data sources. A hypothetical metadata-harvesting sketch appears after the key outcomes below.

Azure Data Factory · Databricks · SQL · Node.js · Python

Key outcomes:

  • Built a central data catalog for organization-wide data.

  • Implemented data quality metrics and Row Level Security.
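
The catalogue implementation itself is not shown in the profile; below is a hypothetical sketch of the metadata-harvesting step it describes, pulling table and column metadata from an Azure SQL source via INFORMATION_SCHEMA with pyodbc. The connection string and output shape are assumptions.

```python
# Hypothetical metadata harvester: read column metadata from INFORMATION_SCHEMA
# so it can be loaded into a central catalogue store. Connection is assumed.
import os
import pyodbc

conn = pyodbc.connect(os.environ["AZURE_SQL_CONN_STR"])

def fetch_column_metadata() -> list[dict]:
    """Return one record per column of every table in the source database."""
    cursor = conn.cursor()
    cursor.execute(
        """
        SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION
        """
    )
    return [
        {
            "schema": row.TABLE_SCHEMA,
            "table": row.TABLE_NAME,
            "column": row.COLUMN_NAME,
            "type": row.DATA_TYPE,
            "nullable": row.IS_NULLABLE == "YES",
        }
        for row in cursor.fetchall()
    ]
```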

Industry experience

FinTech

Reported in resume

AI / ML Platform

1 project
  • Anti-Money Laundering Policy Check · LLM Developer · Python · Flask · GPT-3.5 · GPT-4 +1

Legal Tech

1 project
  • Anti-Money Laundering Policy Check · LLM Developer · Python · Flask · GPT-3.5 · GPT-4 +1

Manufacturing & Industrial

3 projects
  • Anti-Money Laundering Policy Check · LLM Developer · Python · Flask · GPT-3.5 · GPT-4 +1
  • Nestle Packaging Analytics Project · Databricks Developer · Azure · Databricks · Azure Data Factory · SparkSQL +7
  • Central Data Catalogue Project · Azure Data Engineer · Azure Data Factory · Databricks · SQL · Node.js +1

Ready to work with Krishna?

Schedule an interview and onboard within 48 hours. No long hiring cycles.

At a Glance

Experience: 4+ years
Work mode: Remote
Starting from: ₹1.8 L/mo
Direct hire: Possible
Start within: 48 hours

Single contract. No agency markup confusion.

Typically responds within 4 business hours.

5-day replacement guarantee
48-hour onboarding, single invoice
Direct chat — no recruiter middleman
Seniority signals
Owns production deploys · Greenfield architect · System owner · Code reviewer · Mentor / leads juniors
Verified · Vetted by Witarist
Technical skills assessed & verified
Background & identity checked
English communication verified
Ready to onboard in 48 hours

Not sure if this is the right fit?

Tell us your requirements and we'll match you with the best candidates.
