
Data Engineer (PySpark + AWS + Iceberg) – Chicago, IL – Local Only – No Relocation

Hello Folks,

 

Hope you are doing great!

 

This is Himank Jani from ApTask.

 

We have an urgent requirement with one of our clients. Please review the job description below, and if you have any relevant candidates on your bench, kindly share their profiles.

 

Local candidates only.
Experience with PySpark, AWS, and Iceberg is a must; please do not share irrelevant resumes.

Please ensure that all profiles shared include details such as current location and work authorization status. Profiles without this information may not receive a response.

Job Title: Data Engineer (PySpark + AWS + Iceberg)

Location: Chicago, IL – Local Only – No Relocation

Exp: 10+ Yrs Minimum

Rate: $63/hr on C2C

No. of Positions: 2

RTTO – 5 Days Onsite

 

Job Description:
Job Summary

We are looking for a skilled Data Engineer to design and build scalable data solutions using PySpark and AWS services. The ideal candidate will have hands-on experience in building modern data platforms using Apache Iceberg and implementing Medallion architecture on AWS.

 

Key Responsibilities

  • Design and implement end-to-end data solutions using PySpark, ensuring scalability and performance.
  • Build and manage data pipelines using AWS services such as AWS Glue, EMR, and Lambda.
  • Develop data products using PySpark + AWS Glue stack.
  • Implement Medallion Architecture (Bronze, Silver, Gold layers) for structured data processing.
  • Work with Apache Iceberg tables for efficient data storage, versioning, and schema evolution.
  • Ensure data quality, governance, and optimization across pipelines.
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
  • Optimize data processing jobs and improve performance and cost-efficiency on AWS.
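The Medallion layering named above (Bronze raw ingest, Silver cleaning/deduplication, Gold business aggregates) can be sketched as follows. This is a framework-free illustration using plain Python dicts and hypothetical order records; in the actual role, each layer would be a PySpark job writing to its own Iceberg table on S3.

```python
# Illustrative sketch of Medallion layering (Bronze -> Silver -> Gold).
# Plain dicts stand in for PySpark DataFrames / Iceberg tables so the
# data flow between layers is easy to follow.

def bronze_ingest(raw_records):
    """Bronze: land raw records as-is, tagged with their layer."""
    return [dict(r, _layer="bronze") for r in raw_records]

def silver_clean(bronze_records):
    """Silver: drop malformed rows and deduplicate on order_id."""
    seen, out = set(), []
    for r in bronze_records:
        if r.get("order_id") is None or r.get("amount") is None:
            continue  # reject malformed rows
        if r["order_id"] in seen:
            continue  # deduplicate on the business key
        seen.add(r["order_id"])
        out.append(dict(r, _layer="silver"))
    return out

def gold_aggregate(silver_records):
    """Gold: business-level view - total amount per customer."""
    totals = {}
    for r in silver_records:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["amount"]
    return totals

raw = [
    {"order_id": 1, "customer": "acme", "amount": 100},
    {"order_id": 1, "customer": "acme", "amount": 100},    # duplicate
    {"order_id": 2, "customer": "acme", "amount": 50},
    {"order_id": 3, "customer": "globex", "amount": None}, # malformed
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'acme': 150}
```

In a PySpark + Iceberg implementation, each function would become a job persisting to a dedicated Iceberg table per layer (table names here would be the team's own), which is what gives each layer independent schema evolution and version history.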

 

Required Skills & Experience

  • Strong experience in PySpark for data processing and pipeline development.
  • Hands-on experience with AWS ecosystem (Glue, EMR, Lambda, S3).
  • Experience implementing Medallion Architecture.
  • Practical knowledge of Apache Iceberg or similar table formats.
  • Strong understanding of distributed data processing and big data frameworks.
  • Experience designing scalable and reliable data pipelines.
  • Good understanding of data modeling and ETL/ELT concepts.

 

Preferred Qualifications

  • Experience working outside of Databricks-only environments (ability to build solutions using the native AWS stack).
  • Familiarity with modern data lake architectures and open table formats.
  • Knowledge of performance tuning and cost optimization in AWS.
  • Experience with CI/CD pipelines for data engineering workflows.

 

What the Client is Specifically Looking For

  • Engineers who can independently design solutions using PySpark (not limited to Databricks).
  • Strong expertise in AWS-native data engineering tools.
  • Hands-on implementation experience with Apache Iceberg (preferred over Delta).
  • Ability to build data products using Glue + PySpark stack.
  • Clear understanding and implementation of Medallion architecture using AWS services.

 

Best Regards,

Himank Deepak Jani

 

ApTask | A global, diversity-certified workforce solutions provider.

Address: 120 Wood Ave South, Suite # 300, Iselin, NJ 08830

 

This e-mail and any attachments may be confidential, proprietary or legally privileged. Any review, use, disclosure, distribution or copying of this e-mail is prohibited except by or on behalf of the intended recipient. If you received this message in error or are not the intended recipient, please delete or destroy the e-mail message and any attachments or copies and notify the sender of the erroneous delivery by return e-mail. It shall not attach any liability on the sender or ApTask or its affiliates. Any views or opinions presented in this email are solely those of the sender and may not necessarily reflect the opinions of ApTask or its affiliates.

 

Candidate Data Collection Disclaimer:
At ApTask, we prioritize safeguarding your privacy. As part of our recruitment process, certain Personally Identifiable Information (PII) may be requested by our clients for verification and application purposes. Rest assured, we strictly adhere to confidentiality standards and comply with all relevant data protection laws. Please note that we only collect the necessary information as specified by each client and do not request sensitive details during the initial stages of recruitment.

If you have any concerns or queries about your personal information, please feel free to contact our compliance team at compliance@aptask.com.

Applicant Consent:
By submitting your application, you agree to ApTask's (www.aptask.com) Terms of Use and Privacy Policy, and provide your consent to receive SMS and voice call communications regarding employment opportunities that match your resume and qualifications. You understand that your personal information will be used solely for recruitment purposes and that you can withdraw your consent at any time by contacting us at 732-355-8000 or help@aptask.com. Message frequency may vary. Msg & data rates may apply.

 

 
