
November 19, 2024

Immediate Hiring: Data Engineer with Spark, Scala Experience - Onsite

Hello Folks,

 

We have 4 positions to fill: 3 are for candidates with prior Walmart experience, and 1 is for candidates without Walmart experience.
Quick submissions are urgently needed.
Non-local candidates are also fine, but the candidate is expected to be in the office from day 1.

 

Team – The client is looking for a Sr. Data Engineer with Spark and Scala experience. Quick submissions are needed here.

Location: This role is based out of Sunnyvale, CA, and the candidate is expected to be in the office from day 1.

Mandatory Areas 

Must-have skills:

Overall experience level:
3+ years of recent GCP experience

5+ years of hands-on experience with Hadoop, Hive or Spark, and Airflow or a workflow orchestration solution

4+ years of hands-on experience designing schema for data lakes or for RDBMS platforms

Experience with programming languages: Python, Java, Scala, etc.

Experience with scripting languages: Perl, Shell, etc.


Description:

The client is looking for a highly energetic and collaborative Data Engineer for a 12-month engagement.

Responsibilities:

 

As a Senior Data Engineer, you will

• Design and develop big data applications using the latest open source technologies.

• Experience working in an offshore model with managed outcomes is desired.

• Develop logical and physical data models for big data platforms.

• Automate workflows using Apache Airflow.

• Create data pipelines using Apache Hive, Apache Spark, and Apache Kafka.

• Provide ongoing maintenance and enhancements to existing systems and participate in rotational on-call support.

• Learn our business domain and technology infrastructure quickly and share your knowledge freely and actively with others in the team.

• Mentor junior engineers on the team

• Lead daily standups and design reviews

• Groom and prioritize backlog using JIRA

• Act as the point of contact for your assigned business domain

Requirements:

 

GCP Experience

• 3+ years of recent GCP experience

• Experience building data pipelines in GCP

• GCP Dataproc, GCS, and BigQuery experience

• 5+ years of hands-on experience with developing data warehouse solutions and data products.

• 5+ years of hands-on experience developing a distributed data processing platform with Hadoop, Hive or Spark, and Airflow or a workflow orchestration solution.

• 4+ years of hands-on experience in modeling and designing schema for data lakes or for RDBMS platforms.

• Experience with programming languages: Python, Java, Scala, etc.

• Experience with scripting languages: Perl, Shell, etc.

• Experience working with, processing, and managing large data sets (multi-TB/PB scale).

• Exposure to test-driven development and automated testing frameworks.

• Background in Scrum/Agile development methodologies.

• Capable of delivering on multiple competing priorities with little supervision.

• Excellent verbal and written communication skills.

• Bachelor's Degree in computer science or equivalent experience.

 

The most successful candidates will also have experience in the following:

• Gitflow

• Atlassian products – Bitbucket, JIRA, Confluence, etc.

• Continuous Integration tools such as Bamboo, Jenkins, or TFS

 

 

Thanks & Regards,
Sanjeev Kumar Battu
Associate Vice President - Delivery
American IT Systems
1116 S Walton Blvd, Suite 113, Bentonville, AR 72712
https://americanitsystems.com/
sanjeev@americanitsystems.com
LinkedIn: https://www.linkedin.com/in/sanjeevkumar-battu-5a079377/
WhatsApp: +91 817-992-4611 (India)
Phone: (479) 265-8608 Ext 108
Cell: +1 469-715-2663