Hadoop Data Engineer || San Jose, CA || 12+ Months Contract

Hi,

Hope you are doing well.

I would like to check your interest in and availability for the below-mentioned position. If you are interested, kindly reply with your updated resume, contact details, and the best time to reach you.

Hadoop Data Engineer
San Jose, CA
Phone/Skype
12+ Months Contract

What you’ll do
• Design, develop, and tune data products, applications, and integrations on large-scale data platforms (Hadoop, Kafka Streaming, SQL Server, etc.) with an emphasis on performance, reliability, scalability, and, above all, quality.
• Analyze business needs, profile large data sets, and build custom data models and applications to drive Adobe's business decision-making and customer experience.
• Develop and extend design patterns, processes, standards, frameworks and reusable components for various data engineering functions/areas.
• Collaborate with key stakeholders including business teams, engineering leads, architects, BSAs, and program managers.
The ideal candidate will have:
• MS/BS in Computer Science or a related technical field, with 4+ years of strong hands-on experience in enterprise data warehousing / big data implementations and complex data solutions and frameworks
• Strong SQL, ETL, scripting, and/or programming skills, with a preference for Python, Java, Scala, and shell scripting
• Demonstrated ability to clearly form and communicate ideas to both technical and non-technical audiences.
• Strong problem-solving skills with an ability to isolate, deconstruct and resolve complex data / engineering challenges
• Results driven with attention to detail, strong sense of ownership, and a commitment to up-leveling the broader IDS engineering team through mentoring, innovation and thought leadership.
Desired Skills:
• Familiarity with streaming applications
• Experience in development methodologies like Agile / Scrum
• Strong experience with Hadoop ETL / data ingestion: Sqoop, Flume, Hive, Spark, HBase
• Strong experience with SQL and PL/SQL
• Nice to have: experience in real-time data ingestion using Kafka, Storm, Spark, or complex event processing
• Experience with Hadoop data consumption and other components: Hive, Hue, HBase, Spark, Pig, Impala, Presto
• Experience monitoring, troubleshooting, and tuning services and applications, along with operational expertise: strong troubleshooting skills and an understanding of systems capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
• Experience in design and development of API frameworks using Python/Java is a plus
• Experience in developing BI dashboards and reports is a plus.
 
Waiting for your response…
Best Regards,
Ravi Nigam
VBeyond Corporation
+1-716-442-3700 Ext. 505  

Disclaimer: We respect your Online Privacy. This is not an unsolicited mail. Under Bill S 1618 Title III passed by the 105th US Congress this mail cannot be considered Spam as long as we include Contact information and a method to be removed from our mailing list. If you are not interested in receiving our e-mails then please reply at RaviN@VBeyond.com with subject “Remove”. Also mention all the e-mail addresses to be removed which might be diverting the e-mails to you. We are sorry for the inconvenience.


