Title: BIG DATA ARCHITECT (Spark Streaming)
Location: Milwaukee, WI
Duration: 6 Months
Qualifications
· 12+ years of total IT experience, including 3+ years of Big Data experience (Hadoop, Spark Streaming, Kafka, Spark SQL, HBase, Hive, and Sqoop). Hands-on experience with Big Data tools and technologies is mandatory.
· Proven experience driving technology and architectural execution for enterprise-grade solutions built on Big Data platforms.
· Has designed at least one Hadoop data lake end to end using the above Big Data technologies.
· Experience in designing Hive and HBase data models for storage and high-performance queries (a minimal example is sketched after this list).
· Knowledge of standard methodologies, concepts, best practices, and procedures within Big Data environment.
· Proficient in Linux/Unix scripting.
· Bachelor's degree in Engineering, Computer Science, or Information Technology. A Master's degree in Finance, Computer Science, or Information Technology is a plus.
· Experience in Agile methodology is a must.
· Experience with Storm and NoSQL databases (e.g., Cassandra) is desirable.
· Experience with Oracle or another RDBMS is desirable.
· Familiarity with one of the leading Hadoop distributions like Hortonworks, Cloudera, or MapR is desirable.
· Exposure to infrastructure-as-a-service providers such as Google Compute Engine, Microsoft Azure, or Amazon AWS is a plus.
· Self-starter, able to implement solutions independently.
· Good communication and problem-solving skills.
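For illustration only: a minimal sketch of the kind of Hive data model design referenced above. It assumes a hypothetical clickstream_events table (table, column, and application names are placeholders, not part of this role's actual data model) and uses Spark SQL with Hive support to create a date-partitioned, ORC-backed table so that date-bounded queries scan only the relevant partitions.

import org.apache.spark.sql.SparkSession

object HiveModelSketch {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark SQL create and query tables in the Hive metastore.
    val spark = SparkSession.builder()
      .appName("hive-data-model-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical clickstream table: partitioned by event date so queries that
    // filter on a date range prune to the matching partitions; stored as ORC
    // for columnar compression and predicate pushdown.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS clickstream_events (
        |  event_id   STRING,
        |  user_id    STRING,
        |  event_type STRING,
        |  event_ts   TIMESTAMP
        |)
        |PARTITIONED BY (event_date DATE)
        |STORED AS ORC""".stripMargin)

    spark.stop()
  }
}

Partitioning on the most common filter column and choosing a columnar format such as ORC are the usual levers for high-performance Hive queries.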
Job Description
· Define Big Data solutions that deliver value to the customer; understand customer use cases and workflows and translate them into engineering deliverables.
· Architect and design Hadoop solutions.
· Actively participate in Scrum calls; work closely with the product owner and Scrum master on sprint planning, estimates, and story points.
· Break user stories into actionable technical stories, identify dependencies, and plan execution across sprints.
· Design batch and real-time load jobs from a broad variety of data sources into Hadoop, and design ETL jobs that read data from Hadoop and deliver it to a variety of consumers/destinations (a minimal streaming-ingest example is sketched after this list).
· Perform analysis of vast data stores and uncover insights.
· Responsible for maintaining security and data privacy, creating scalable and high-performance web services for data tracking.
· Propose best practices / standards and implement them in the deliverables.
· Analyze long-running queries and jobs and tune their performance using query optimization techniques and Spark code optimization.
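For illustration only: a minimal streaming-ingest sketch in the spirit of the real-time load jobs described above. The Kafka topic, broker address, and HDFS paths are assumptions, and the job requires the spark-sql-kafka connector on the classpath. It reads records from Kafka with Spark Structured Streaming and lands them on HDFS as Parquet with checkpointing.

import org.apache.spark.sql.SparkSession

object KafkaToHdfsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-sketch")
      .getOrCreate()

    // Read a stream of records from a hypothetical Kafka topic.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // assumed broker address
      .option("subscribe", "clickstream")                // assumed topic name
      .option("startingOffsets", "latest")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Land the raw records on HDFS as Parquet; paths are placeholders.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/raw/clickstream")
      .option("checkpointLocation", "hdfs:///checkpoints/clickstream")
      .start()

    query.awaitTermination()
  }
}

The checkpoint location is what lets the job resume from the last committed offsets after a restart, which keeps the HDFS output consistent across failures.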
Skills
APPS-NICHE-BIGDATA-HADOOP