Job Location: New York, NY
Job Description:
• 10+ years of work experience with the Hadoop big data ecosystem (Spark, HBase, HDFS, etc.)
• Hands-on experience building a Hadoop-based data lake or data warehouse
• Experience with traditional ETL technologies as well as Apache Sqoop and Apache Flume
• Excellent problem-solving and analytical skills
• Excellent verbal and written communication skills
• Experience in optimizing large data loads
• Develop detailed ETL specifications based on business requirements
• Analyze functional specifications and assist in designing potential technical solutions
• Identify data sources and work with source system teams and data analysts to define data extraction methodologies
• Analyze existing designs and interfaces and apply design modifications or enhancements
• Strong knowledge of writing complex queries in PL/SQL
• Perform unit and integration testing activities and assist in User Acceptance Testing
• Database: Teradata
• Bachelor’s/Master’s Degree in Engineering, preferably Computer Science/Engineering