Pallavi  ·  Senior Snowflake Data Engineer  ·  5+ yrs

Mid-Level
5+ years experience · Remote
Available within 48 hrs

Built for

Levi's · Albertsons · JOANN

About Pallavi

Pallavi is a Data Engineer with 5+ years of hands-on experience in data engineering across AWS, GCP, and Azure platforms. She specializes in building robust data pipelines and optimizing ETL processes using Snowflake, Python, and DBT.


Skills (17)

Python · SQL · Snowflake · PySpark · Airflow · DBT · Redshift · BigQuery · AWS Glue · Azure Data Factory · Azure Databricks · GitHub Actions · Hadoop · GCP · SQL Server · DataProc API · Git

Why hire Pallavi?

Production deploy authority · Led multiple data migration projects · Strong focus on data quality assurance

Led the adoption of DBT to streamline end-to-end data engineering workflows.

Successfully transitioned data processing from SQL queries on Redshift to AWS Glue jobs, improving ETL efficiency.

Implemented CI/CD pipelines for data warehousing and automated data processing using GitHub Actions.

Developed robust integration pipelines on Azure Data Factory and implemented PySpark jobs on Azure Databricks.

Migrated merchandising data from Hadoop to Google Cloud Platform (GCP), optimizing processes with Jenkins and Cloud Composer (GCP's managed Apache Airflow), reducing processing time by 25%.

Implemented Business Intelligence solutions involving data migration from SQL Server to GCP BigQuery, ensuring data integrity through rigorous quality checks.
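As an illustration of the CI/CD work described above, a minimal GitHub Actions workflow for a dbt-on-Snowflake project might look like this. The file name, job name, and secret names are hypothetical, not taken from the resume:

```yaml
# .github/workflows/dbt_ci.yml — hypothetical workflow name
name: dbt-ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dbt
        run: pip install dbt-snowflake
      - name: Run models
        run: dbt run --target ci
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
      - name: Test models
        run: dbt test --target ci
```

Running `dbt test` in the same job as `dbt run` means a pull request fails fast if a model builds but its data tests do not pass.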

Project highlights(6)

Data Transformation Strategy Enhancement · Data Engineer

Overview: This project transitioned data processing from Redshift SQL queries to AWS Glue jobs to improve ETL efficiency for a retail company.

Responsibilities:

  • Transitioned data processing from SQL queries on Redshift to AWS Glue jobs, significantly improving ETL processes.

  • Developed PySpark jobs for transformation and aggregation reports, enhancing data processing efficiency.

  • Implemented integration pipelines with various data sources using AWS Glue.

Python · SQL · Redshift · AWS Glue · PySpark

Key outcomes:

  • Successfully transitioned data processing from SQL queries on Redshift to AWS Glue jobs, significantly improving ETL processes.
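For context, the group-and-aggregate shape of logic a Glue PySpark job typically carries is sketched below in plain Python so it stays self-contained. Field names are illustrative; a real Glue job would express this with PySpark's groupBy/agg on a DataFrame or DynamicFrame:

```python
from collections import defaultdict

def aggregate_sales(rows):
    """Group raw sales rows by product and sum revenue — the same
    shape of logic a Glue PySpark job would express with
    groupBy().agg(). Field names are illustrative."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["product_id"]] += row["quantity"] * row["unit_price"]
    return dict(totals)

rows = [
    {"product_id": "A", "quantity": 2, "unit_price": 5.0},
    {"product_id": "B", "quantity": 1, "unit_price": 3.0},
    {"product_id": "A", "quantity": 1, "unit_price": 5.0},
]
print(aggregate_sales(rows))  # {'A': 15.0, 'B': 3.0}
```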

End-to-End Data Engineering Workflow Optimization · Data Engineer

Overview: This project optimized data processing and engineering workflows for a retail company, specifically leveraging DBT.

Responsibilities:

  • Led data processing initiatives using DBT, overseeing the complete end-to-end data engineering workflow.

  • Wrote ETL code in Snowflake, ensuring seamless data flow from raw to transformed to aggregated layers.

DBT · Snowflake · Airflow · SQL

Key outcomes:

  • Streamlined end-to-end data engineering processes by leading the adoption of DBT.
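The raw-to-transformed-to-aggregated layering described above can be sketched in plain Python. A dbt project would express each layer as a SQL model; the function and field names here are illustrative only:

```python
def stage_orders(raw_rows):
    """Staging layer: clean and type raw rows (a dbt 'staging'
    model does this in SQL). Names are illustrative."""
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in raw_rows
        if r.get("order_id") is not None
    ]

def aggregate_orders(staged_rows):
    """Mart layer: aggregate staged rows to the reporting grain."""
    return {
        "order_count": len(staged_rows),
        "total_amount": sum(r["amount"] for r in staged_rows),
    }

raw = [
    {"order_id": 1, "amount": "10.5"},
    {"order_id": None, "amount": "3.0"},  # dropped by staging
    {"order_id": 2, "amount": "4.5"},
]
mart = aggregate_orders(stage_orders(raw))
```

Keeping each layer a pure transformation of the previous one is what makes the flow easy to test and re-run, whether it lives in Python or in dbt models.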

Integration Pipeline and Automation · Data Engineer

Overview: This project streamlined data integration and automated processing tasks for a retail company using Azure services.

Responsibilities:

  • Developed integration pipelines on Azure Data Factory, ensuring smooth data flow.

  • Implemented PySpark jobs for data transformation on Azure Databricks, enhancing processing efficiency.

Azure Data Factory · Azure Databricks · PySpark · GitHub Actions

Key outcomes:

  • Developed robust integration pipelines on Azure Data Factory for smooth data flow.

Merchandising Data Migration · Data Engineer

Overview: This project implemented a merchandising solution for a retail company, migrating data from Hadoop to Google Cloud Platform (GCP) via Hive-to-BigQuery conversion.

Responsibilities:

  • Migrated merchandising data from Hadoop to GCP, converting Hive workloads to BigQuery.

Hadoop · GCP · BigQuery · Airflow

Key outcomes:

  • Successfully migrated merchandising data from Hadoop to Google Cloud Platform (GCP).
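A migration like this is commonly validated by comparing per-table row counts between source and target. The sketch below shows that check in plain Python; the table names and counts are hypothetical:

```python
def row_count_parity(source_counts, target_counts):
    """Compare per-table row counts between a source (e.g. Hive)
    and a target (e.g. BigQuery) after a migration, returning the
    tables whose counts disagree. Table names are illustrative."""
    return {
        table: (source_counts[table], target_counts.get(table))
        for table in source_counts
        if source_counts[table] != target_counts.get(table)
    }

source = {"sales": 1_000, "inventory": 250}
target = {"sales": 1_000, "inventory": 248}
mismatches = row_count_parity(source, target)
```

An empty result means every table landed with the expected number of rows; anything else pinpoints where to investigate before cutover.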

Business Intelligence Solution Implementation · Data Engineer

Overview: This project implemented a Business Intelligence (BI) solution for a retail company, migrating data from SQL Server to Google Cloud Platform (GCP) using BigQuery.

Responsibilities:

  • Migrated data from SQL Server to GCP BigQuery, verifying data integrity with rigorous quality checks.

SQL Server · GCP · BigQuery · Python

Key outcomes:

  • Implemented a Business Intelligence solution focused on data migration from SQL Server to GCP using BigQuery.
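Quality checks of the kind mentioned above usually include, at minimum, missing-value and duplicate-key checks on the migrated tables. A minimal Python sketch, with illustrative field names:

```python
def quality_report(rows, key_field, required_fields):
    """Basic post-migration integrity checks: count rows with
    missing required values and count duplicate keys.
    Field names are illustrative."""
    missing = sum(
        1 for r in rows
        if any(r.get(f) is None for f in required_fields)
    )
    keys = [r.get(key_field) for r in rows]
    duplicates = len(keys) - len(set(keys))
    return {"missing_required": missing, "duplicate_keys": duplicates}

rows = [
    {"id": 1, "name": "a"},
    {"id": 1, "name": "b"},   # duplicate key
    {"id": 2, "name": None},  # missing required value
]
report = quality_report(rows, "id", ["name"])
```

In practice these checks run against both the SQL Server source and the BigQuery target, and a non-zero count on either side blocks the load from being signed off.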

Industry experience

  • HealthTech (reported in resume)

  • Logistics & Supply Chain (reported in resume)

Ready to work with Pallavi?

Schedule an interview and onboard within 48 hours. No long hiring cycles.

At a Glance

Experience: 5+ years
Work mode: Remote
Starting from: ₹1.6 L/month
Direct hire: Possible
Start within: 48 hours

Single contract. No agency markup confusion.

Typically responds within 4 business hours.

5-day replacement guarantee
48-hour onboarding, single invoice
Direct chat — no recruiter middleman
Seniority signals
Owns production deploys · System owner
Verified: Vetted by Witarist
Technical skills assessed & verified
Background & identity checked
English communication verified
Ready to onboard in 48 hours

Not sure if this is the right fit?

Tell us your requirements and we'll match you with the best candidates.
