Nitesh  ·  Senior Spark / Hadoop Data Engineer  ·  6+ yrs

Mid-Level
6+ years experience · remote
Available within 48 hrs

Built for

Mastercard · UniPol

About Nitesh

Nitesh is a Big Data Developer with 6+ years of experience in designing, implementing, and supporting big data applications. He has extensive expertise in Apache Spark, AWS, and Python, contributing to various projects across financial services and manufacturing domains.

6+ years of commercial experience

Skills (17)

Apache Spark · Python · AWS · Hadoop · PostgreSQL · EMR · HBase · Phoenix · Spark · RDBMS · HDFS · PySpark · MapReduce · Sqoop · Hive · SQL · S3

Why hire Nitesh?

Production deploy authority · Designed optimized data schemas · Contributed to team efforts

Designed, implemented, and supported diverse big data applications using Apache Spark, Hadoop, and AWS.

Achieved performance optimization, data extraction, cleaning, and reporting across various projects.

Successfully processed delta data and enabled customer report extraction from S3.

Project highlights (4)

Payment Processing System · Big Data Developer

Overview: This project is part of Mastercard Inc., focusing on payment transaction processing and related payment services.

Responsibilities:
  • Understood the nature of the upstream sources and aligned it with the business cases.
  • Monitored multiple batches and executed scripts across various servers.
  • Performed the initial data load using `Initial_Dataload.sql`, checking table structure and sample data.
  • Filtered data from several files and created Standard Operating Procedures (SOPs) for automation.

Apache Spark · Hadoop · AWS · Python

Key outcomes:

  • Enabled efficient data filtering and initial data loading for critical payment processing systems.

  • Created SOPs to streamline and automate operational tasks.
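As a hedged illustration of the filtering and initial-load step described above: records from several batch files are filtered down to valid rows before being merged into one load set. All file names and column names here are hypothetical, not taken from the actual Mastercard project.

```python
import csv
import io

# In-memory stand-ins for the several source files the batches arrive in.
RAW_BATCHES = {
    "batch_01.csv": "txn_id,amount,status\n1001,250.00,SETTLED\n1002,,PENDING\n",
    "batch_02.csv": "txn_id,amount,status\n1003,75.50,SETTLED\n1004,10.00,REJECTED\n",
}

def filter_batch(csv_text: str) -> list[dict]:
    """Keep only settled rows that actually carry an amount."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["status"] == "SETTLED" and row["amount"]]

def initial_load(batches: dict[str, str]) -> list[dict]:
    """Merge the filtered rows from every batch file into one load set."""
    loaded: list[dict] = []
    for name in sorted(batches):  # deterministic file order
        loaded.extend(filter_batch(batches[name]))
    return loaded

rows = initial_load(RAW_BATCHES)
print([r["txn_id"] for r in rows])  # settled transactions only
```

In a real pipeline the same filter would run per batch before the SOP-driven load, so a failed batch can be rerun in isolation.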

Data Warehouse Analysis · Officer (Database Developer)

Overview: This project supports UniPol, a leading global manufacturer specializing in investment casting technology for the automotive and aerospace industries, by analyzing data for its data warehouse.

Responsibilities:
  • Understood the nature of the upstream sources and the business cases, connecting to a PostgreSQL instance via Workbench.
  • Performed the initial data load using `Initial_Dataload.sql`, verifying table structure and sample data.
  • Copied all dependencies onto EMR and created HBase tables using a Phoenix script.
  • Wrote Spark jobs to process data and imported reference data from the RDBMS into HBase.

PostgreSQL · EMR · HBase · Phoenix · Spark · RDBMS · Python

Key outcomes:

  • Successfully implemented data loading and processing pipelines for manufacturing data.

  • Integrated HBase and Spark for efficient data handling and transformation.
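A minimal sketch (plain Python, no cluster) of the enrichment those Spark jobs perform: reference rows imported from the RDBMS are keyed the way a Phoenix/HBase row key would be, then joined onto incoming fact rows. All table and column names are assumptions for illustration.

```python
# Reference data as it might arrive from the RDBMS import (hypothetical schema).
reference = [
    {"plant_id": "P01", "plant_name": "Casting Line A"},
    {"plant_id": "P02", "plant_name": "Casting Line B"},
]

# Build a lookup keyed like an HBase row key (here simply the plant id).
ref_by_key = {row["plant_id"]: row for row in reference}

# Incoming fact rows to be processed by the Spark job.
facts = [
    {"plant_id": "P01", "units": 120},
    {"plant_id": "P02", "units": 95},
]

# Enrich each fact row with its reference attributes. This is a hash
# join, which Spark would typically broadcast for a small reference table.
enriched = [{**f, **ref_by_key[f["plant_id"]]} for f in facts]
print(enriched[0]["plant_name"])
```

The design choice worth noting: keeping reference data in a keyed store (HBase via Phoenix) lets the processing job look rows up by key instead of rescanning the source RDBMS on every run.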

Hadoop Applications Development · Database Developer

Overview: This project involved developing and architecting Hadoop applications for a UK-based client, focusing on data processing and analysis.

Responsibilities:
  • Interacted with onsite teams and customers to clarify and formulate exact requirements for the Hadoop applications.
  • Imported structured data from the RDBMS into HDFS using Sqoop and performed the necessary transformations.
  • Developed solutions to process data in HDFS, and analyzed it with Spark and Hive to produce summary results in Hadoop.

HDFS · Spark · PySpark · MapReduce · Sqoop · Hive · HBase · Python · RDBMS

Key outcomes:

  • Successfully designed and implemented efficient data ingestion and processing workflows using Hadoop ecosystem components.

  • Improved data retrieval performance by designing optimized Hive and HBase schemas with partitioning and bucketing.
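A sketch of the partitioning-and-bucketing idea behind those optimized Hive schemas: rows are grouped by a partition column (so a query touches only matching directories) and spread across a fixed number of buckets by hashing a key column. Hive's real bucketing hash differs; `zlib.crc32` here is just a stable stand-in, and the table layout is illustrative.

```python
import zlib

NUM_BUCKETS = 4

def bucket_of(key: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Deterministic bucket assignment for a key column value."""
    return zlib.crc32(key.encode()) % num_buckets

rows = [
    {"order_id": "A-1", "order_date": "2020-01-01"},
    {"order_id": "A-2", "order_date": "2020-01-01"},
    {"order_id": "B-9", "order_date": "2020-01-02"},
]

# layout[(partition, bucket)] -> rows, mimicking the on-disk grouping.
layout: dict[tuple[str, int], list[dict]] = {}
for row in rows:
    slot = (row["order_date"], bucket_of(row["order_id"]))
    layout.setdefault(slot, []).append(row)

# A date-filtered query only reads partitions whose date matches.
matched = sum(len(v) for slot, v in layout.items() if slot[0] == "2020-01-01")
print(matched)  # 2 rows served without scanning the 2020-01-02 partition
```

Bucketing on the join key additionally lets matching buckets from two tables be joined without a full shuffle, which is where much of the retrieval speed-up comes from.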

Tetra Packages · Sr. Engineer

  • This project involved developing and executing test plans to ensure quality objectives were met for packaging products.
  • Developed and executed test plans, implemented, and monitored test scripts to assess functionality, reliability, performance, and quality.
  • Identified and remedied defects within the production process and recommended preventative and corrective actions.
  • Compiled and analyzed statistical data, ensuring user expectations and compliance with quality standards and industry regulations.
  • Processed subscriber and complaint details, removing duplicates and enabling customer reports from S3.
SQL · PostgreSQL · HBase · Phoenix · Hive · RDBMS · S3

Key outcomes:

  • Ensured quality assurance standards were consistently achieved through thorough testing and defect remediation.

  • Contributed to creating customer reports and processing delta data, enhancing data accessibility and utility.
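A hedged sketch of the de-duplication and delta step mentioned above: subscriber records from a new S3 extract are de-duplicated on an id column, then only rows absent from the previous load (the delta) are kept for reporting. Field names and the id column are assumptions, not the real schema.

```python
# A new extract as it might be pulled from S3 (hypothetical fields).
new_extract = [
    {"subscriber_id": "S1", "plan": "basic"},
    {"subscriber_id": "S1", "plan": "basic"},    # duplicate row
    {"subscriber_id": "S2", "plan": "premium"},
    {"subscriber_id": "S3", "plan": "basic"},
]
previous_ids = {"S1", "S2"}  # ids already loaded in the last run

# De-duplicate, keeping the first occurrence of each id.
seen: set[str] = set()
deduped: list[dict] = []
for row in new_extract:
    if row["subscriber_id"] not in seen:
        seen.add(row["subscriber_id"])
        deduped.append(row)

# Delta = de-duplicated rows not present in the previous load.
delta = [r for r in deduped if r["subscriber_id"] not in previous_ids]
print([r["subscriber_id"] for r in delta])  # only the new subscriber remains
```

Processing only the delta keeps each reporting run proportional to what changed, rather than to the full subscriber history.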

Industry experience

FinTech

1 project
  • Payment Processing System · Big Data Developer · Apache Spark · Hadoop · AWS · Python

Manufacturing & Industrial

2 projects
  • Data Warehouse Analysis · Officer (Database Developer) · PostgreSQL · EMR · HBase · Phoenix +3
  • Tetra Packages · Sr. Engineer · SQL · PostgreSQL · HBase · Phoenix +3

Cybersecurity

Reported in resume

Logistics & Supply Chain

1 project
  • Data Warehouse Analysis · Officer (Database Developer) · PostgreSQL · EMR · HBase · Phoenix +3

Ready to work with Nitesh?

Schedule an interview and onboard within 48 hours. No long hiring cycles.

At a Glance

Experience: 6+ years
Work mode: remote
Starting from: ₹1.6 L/month
Direct hire: possible
Start within: 48 hours

Single contract. No agency markup confusion.

Typically responds within 4 business hours.

5-day replacement guarantee
48-hour onboarding, single invoice
Direct chat — no recruiter middleman
Seniority signals
Owns production deploys · Greenfield architect · System owner
Verified · Vetted by Witarist
Technical skills assessed & verified
Background & identity checked
English communication verified
Ready to onboard in 48 hours

Not sure if this is the right fit?

Tell us your requirements and we'll match you with the best candidates.

Nitesh

Big Data Engineer