Big Data Developer @ Westerville, OH (minimum 9+ years of experience required)

Big Data Developer
Location: Westerville, OH
Duration: Long Term
Required Skills:
Education: Bachelor's degree in Engineering, Computer Science, or Information Technology. A Master's degree in Finance, Computer Science, or Information Technology is a plus.

Qualifications
  • 9+ years of total IT experience, including 4+ years of Big Data experience
  • Very strong in Spark Streaming and Kafka (first preference); alternatively, strong in Spark SQL (Scala or Python) and Hive (second preference)
  • Experience with Cloudera's distribution of Hadoop (CDH) is a plus
  • Proficient in Linux/Unix scripting
  • Knowledge of PeopleSoft data structures and HR-related data is a plus
  • Experience with Agile methodology is a must
  • Knowledge of standard methodologies, concepts, best practices, and procedures within a Big Data environment
  • Exposure to infrastructure-as-a-service (IaaS) providers such as Google Compute Engine, Microsoft Azure, or Amazon Web Services (AWS) is a plus
  • Self-starter, able to implement solutions independently
  • Strong problem-solving and communication skills
Job Description
  • Hands-on Big Data developer role
  • Develop data pipelines using Big Data technologies that deliver value to the customer; understand customer use cases and workflows and translate them into engineering deliverables
  • Actively participate in scrum calls, story pointing, and estimation, and own the development work
  • Analyze user stories, understand the requirements, and develop code per the design
  • Develop test cases and perform unit and integration testing
  • Support QA testing, UAT, and production deployment
  • Develop batch and real-time data load jobs from a broad variety of data sources into Hadoop
  • Design ETL jobs to read data from Hadoop and pass it to a variety of consumers and downstream applications
  • Analyze vast data stores and uncover insights
  • Analyze long-running queries and jobs and tune their performance using query optimization techniques and Spark code optimization

Thanks

vandana@levanture.com