
Urgent Need for Hadoop/MapR Admin

Hi,

This is Santosh from RSRIT. Below is the job description for the Hadoop/MapR Admin position.

 

Role: Hadoop/MapR Developer

Location: Columbus, OH

Duration: Long-term

 

Responsibilities:

  • Provide end-to-end vision and hands-on experience with the MapR platform, especially best practices around Hive and HBase
  • Should be an expert in HBase and Hive best practices
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, and Impala, as well as Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems
  • Translate, load, and present disparate datasets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data
  • Lead workshops with multiple teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs with a focus on ingestion
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and other technologies
  • Lead the technical planning and requirements-gathering phases, including estimating, developing, testing, managing, architecting, and delivering complex projects
  • Participate in and lead design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates
  • Contribute to thought capital by creating executive presentations and architecture documents and articulating them to executives

Qualifications:

  • At least 8 years of experience designing and developing large-scale data processing, data storage, and data distribution systems
  • At least 3 years of experience working on large projects, including the most recent project on the MapR platform
  • At least 5 years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements for cloud platforms
  • At least 2 years of hands-on experience with Hadoop, Teradata (or another MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase
  • Experience with end-to-end solution architecture for data capabilities, including ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines, with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • At least 5 years of experience working in large multi-vendor environments with multiple teams as part of a project
  • At least 5 years of experience working in complex Big Data environments
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets

Preferred Skills and Education:

  • Master’s degree in Computer Science or related field
  • Certification in Azure platform

Santosh Peri

Sr. IT Recruiter

(W) 248-814-2361, Cell: 614-726-1683

2260 Haggerty Road, Suite#285, Northville, MI 48167.

www.rsrit.com    

 
 
 
 



