Hadoop Data Engineer
Location – San Jose
Long-Term Contract
The ideal candidate will have:
- MS in Computer Science or a related technical field, with 10+ years of strong hands-on experience in enterprise data warehousing / big data implementations and in building complex data solutions and frameworks
- Strong SQL, ETL, scripting, and/or programming skills, with a preference for Python, Java, Scala, and shell scripting
- Demonstrated ability to clearly form and communicate ideas to both technical and non-technical audiences.
- Strong problem-solving skills with an ability to isolate, deconstruct and resolve complex data / engineering challenges
- Results driven with attention to detail, strong sense of ownership, and a commitment to up-leveling the broader IDS engineering team through mentoring, innovation and thought leadership
Desired skills:
- Familiarity with streaming applications
- Experience with development methodologies such as Agile / Scrum
- Strong experience with Hadoop ETL / data ingestion: Sqoop, Flume, Hive, Spark, HBase (see the batch ingestion sketch after this list)
- Strong experience with SQL and PL/SQL
- Nice to have: experience in real-time data ingestion using Kafka, Storm, Spark, or complex event processing (see the streaming sketch after this list)
- Experience with Hadoop data-consumption and other components: Hive, Hue, HBase, Spark, Pig, Impala, Presto
- Experience monitoring, troubleshooting, and tuning services and applications, plus operational expertise: strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking
- Experience in the design and development of API frameworks using Python/Java is a plus (see the API sketch after this list)
- Experience in developing BI dashboards and reports is a plus
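As a rough illustration of the kind of Hadoop batch ingestion work described above, here is a minimal PySpark sketch; the job name, landing path, order_id column, and staging Hive table are hypothetical placeholders:

from pyspark.sql import SparkSession, functions as F

# Hive-enabled session so we can write to managed tables
spark = (SparkSession.builder
         .appName("orders_batch_ingest")      # hypothetical job name
         .enableHiveSupport()
         .getOrCreate())

# Read raw files from a (hypothetical) HDFS landing zone
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("hdfs:///landing/orders/"))

# Basic cleanup: drop duplicate keys, stamp the ingestion date
clean = (raw.dropDuplicates(["order_id"])     # assumes an order_id column
            .withColumn("ingest_date", F.current_date()))

# Append into a partitioned Hive staging table (hypothetical name)
(clean.write
      .mode("append")
      .partitionBy("ingest_date")
      .saveAsTable("staging.orders"))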
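Similarly, a minimal Spark Structured Streaming sketch for the Kafka-based real-time ingestion mentioned above; the broker addresses, topic name, and sink paths are hypothetical:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("orders_stream_ingest")     # hypothetical job name
         .getOrCreate())

# Subscribe to a (hypothetical) Kafka topic; requires the
# spark-sql-kafka connector on the classpath
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers key/value as binary; cast the payload to a string
events = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Land micro-batches as Parquet, with checkpointing for recovery
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///raw/orders/")
         .option("checkpointLocation", "hdfs:///checkpoints/orders/")
         .outputMode("append")
         .start())

query.awaitTermination()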
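And a minimal Python API sketch for the data-serving side; Flask is used here purely as an example framework, and the endpoint, port, and stubbed response are hypothetical:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<order_id>")
def get_order(order_id):
    # A real service would look this up in Hive/HBase; this stub echoes the key
    return jsonify({"order_id": order_id, "status": "stubbed"})

if __name__ == "__main__":
    app.run(port=8080)   # hypothetical port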
Thanks
Gigagiglet
gigagiglet.blogspot.com