Nitesh Madan  ·  Senior AWS Spark Data Engineer  ·  5+ yrs

Mid-Level
5+ years experience · hybrid
Available within 48 hrs

Proof of scale

Built for
Mastercard Inc. · UniPol

About Nitesh

Nitesh Madan is a Big Data Developer with 6+ years of experience designing, implementing, and supporting big data applications. He is strongly proficient in AWS, Apache Spark, and Python, and has delivered projects in both the financial services and manufacturing domains.

5+ years of commercial experience in the skills listed below.

Skills (13)

AWS · Apache Spark · PostgreSQL · Python · Spark · Hadoop · Hive · HBase · SQL · Phoenix · Sqoop · RDBMS · MapReduce

Why hire Nitesh?

Production deploy authority · Full lifecycle ownership · Client interaction experience

Designed and implemented Big Data applications using Apache Spark and AWS.

Optimized PySpark scripts for data extraction and reporting.

Ensured data quality for critical payment-processing systems.

Contributed significantly to all project stages from requirements to production support.

Successfully designed and implemented Big Data applications using Apache Spark, Hadoop, and AWS.

Developed and optimized PySpark scripts for data extraction, cleaning, and reporting.

Implemented Hive and HBase schemas with partitioning and bucketing for performance.
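Partitioning and bucketing work together: partitions prune entire directories (e.g. by load date), while bucketing hashes a key into a fixed number of files within each partition, which speeds up joins and sampling. A minimal pure-Python sketch of the bucketing idea — the key names, bucket count, and hash function here are illustrative stand-ins, not details from the original projects:

```python
# Hypothetical sketch of how Hive-style bucketing routes rows to files.
# Hive uses its own deterministic hash of the clustering column; a simple
# stable stand-in is used here for illustration.

N_BUCKETS = 4

def bucket_for(key: str, n_buckets: int = N_BUCKETS) -> int:
    """Assign a row to a bucket by hashing its clustering key,
    mirroring Hive's CLUSTERED BY (...) INTO n BUCKETS."""
    return sum(key.encode()) % n_buckets

# Rows with the same key always land in the same bucket file.
rows = ["cust-001", "cust-002", "cust-003", "cust-004"]
buckets = {k: bucket_for(k) for k in rows}
```

Because the assignment is deterministic, two bucketed tables clustered on the same key with the same bucket count can be joined bucket-by-bucket without a full shuffle.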

Project highlights (3)

Payment Processing Support · Big Data Developer

Overview: This project supports Mastercard Inc., the second-largest payment-processing corporation worldwide, by providing services for payment transaction processing.

Responsibilities:

  • Understood upstream source nature and business cases to ensure data quality and relevance.

  • Monitored multiple batches and executed various scripts across several servers.

  • Performed initial data loads using Initial_Dataload.sql, checked table structures, and filtered data from files.

  • Created Standard Operating Procedures (SOPs) for automation, improving operational efficiency.

Spark · AWS · Hadoop · Hive · HBase · Python · SQL

Key outcomes:

  • Ensured data quality and relevance by understanding upstream source nature and business cases.

  • Improved operational efficiency by creating SOPs for automation.

Investment Casting Data Warehouse · Officer (Database Developer)

Overview: This project supported UniPol, a global manufacturing company specializing in investment casting technology for the automotive and aerospace industries.

Responsibilities:

  • Understood upstream source data and business requirements to facilitate data processing.

  • Connected to a PostgreSQL instance using a workbench client for database operations.

  • Performed initial data loads using Initial_Dataload.sql, checking table structures and sample data.

  • Copied all dependencies to EMR and created HBase tables using Phoenix scripts.

  • Developed Spark jobs to process data, supporting data transformation workflows.
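A Spark transformation job of the kind described above typically validates and normalises source rows before loading them downstream. A minimal pure-Python sketch of one such cleaning step — field names (`part_id`, `weight_kg`) are hypothetical, not taken from the project:

```python
# Hypothetical stand-in for one validation/normalisation step of the
# Spark job described above, written in plain Python for clarity.
from typing import Optional

def clean_row(row: dict) -> Optional[dict]:
    """Drop rows missing a primary key; coerce numeric fields.

    Returns the cleaned row, or None if the row should be discarded.
    """
    if not row.get("part_id"):          # missing primary key -> drop
        return None
    try:
        row["weight_kg"] = float(row["weight_kg"])
    except (KeyError, TypeError, ValueError):
        return None                     # unparseable measurement -> drop
    return row

source = [
    {"part_id": "P-1", "weight_kg": "2.5"},
    {"part_id": "",    "weight_kg": "1.0"},   # missing key: dropped
    {"part_id": "P-2", "weight_kg": "oops"},  # bad number: dropped
]
cleaned = [r for r in (clean_row(dict(r)) for r in source) if r]
```

In the actual pipeline, the same logic would be expressed as DataFrame filters and casts so Spark can parallelise it across the cluster.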

Spark · AWS · HBase · Phoenix · Python · PostgreSQL · SQL

Key outcomes:

  • Processed data for a global manufacturing company to support investment casting technology.

  • Successfully integrated PostgreSQL, HBase, and Spark for data warehousing and analysis.

Tetra Packages · Sr. Engineer

This role involved developing and executing test plans for quality assurance, monitoring test scripts, and identifying and remedying production defects.
  • Developed and executed test plans to ensure all project objectives were met.
  • Implemented and monitored test scripts to assess functionality, reliability, performance, and quality.
  • Identified and remedied defects within the production process, contributing to system stability.
  • Imported structured data from RDBMS to HDFS using Sqoop and performed necessary transformations.
  • Processed data into HDFS by developing solutions, analyzing data using Spark and Hive, and producing summary results.
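The "analyse data and produce summary results" step above boils down to a group-and-aggregate, the same shape as a Spark `groupBy().agg()` or a Hive `GROUP BY`. A plain-Python sketch with illustrative (hypothetical) column names:

```python
# Hypothetical sketch of the summarisation step described above, shown as
# a plain-Python equivalent of a Spark/Hive GROUP BY aggregation.
from collections import defaultdict

records = [
    {"line": "A", "defects": 2},
    {"line": "A", "defects": 0},
    {"line": "B", "defects": 5},
]

# Equivalent of: SELECT line, SUM(defects) FROM qa_results GROUP BY line
totals = defaultdict(int)
for r in records:
    totals[r["line"]] += r["defects"]

summary = dict(totals)  # {"A": 2, "B": 5}
```

At scale, Spark distributes the same computation by shuffling rows so that all records for a given key land on the same executor before aggregation.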
Spark · Hadoop · Hive · HBase · Phoenix · Python · Sqoop · RDBMS · MapReduce · SQL

Key outcomes:

  • Ensured quality assurance standards were achieved by compiling and analyzing statistical data.

  • Successfully implemented Hive and HBase column family schemas with performance techniques like partitioning and bucketing.

  • Resolved production defects and contributed to ongoing compliance with quality and regulatory requirements.

Industry experience

FinTech

1 project
  • Payment Processing Support · Big Data Developer · Spark · AWS · Hadoop · Hive +3

Manufacturing & Industrial

2 projects
  • Investment Casting Data Warehouse · Officer (Database Developer) · Spark · AWS · HBase · Phoenix +3
  • Tetra Packages · Sr. Engineer · Spark · Hadoop · Hive · HBase +6

Cybersecurity

Reported in resume

Legal Tech

Reported in resume

Ready to work with Nitesh?

Schedule an interview and onboard within 48 hours. No long hiring cycles.

At a Glance

Experience: 5+ years
Work mode: hybrid
Starting from: ₹1.8 L/mo
Direct hire: Possible
Start within: 48 hours

Single contract. No agency markup confusion.

Typically responds within 4 business hours.

5-day replacement guarantee
48-hour onboarding, single invoice
Direct chat — no recruiter middleman
Seniority signals
Owns production deploys · Greenfield architect · System owner · On-call experience
Verified · Vetted by Witarist
Technical skills assessed & verified
Background & identity checked
English communication verified
Ready to onboard in 48 hours

Not sure if this is the right fit?

Tell us your requirements and we'll match you with the best candidates.

Nitesh Madan

Big Data Engineer