Our client is looking for a Senior Big Data Engineer in Denver, CO

Hi,

I hope you are doing well. Please find the job description below. Kindly reply with your updated resume, contact details, and the best time to reach you. I apologize if this job is not of interest to you; however, I would greatly appreciate it if you could refer someone suitable for this position.


Senior Big Data Engineer
Location: Denver, CO
Term: Contract

Mandatory Skills: Big Data (Spark, Kafka), AWS, databases (SQL, MySQL, PostgreSQL), programming (Java or Scala).

Deploy enterprise data-oriented solutions leveraging Data Warehouse, Big Data, and Machine Learning frameworks
Optimize data engineering and machine learning pipelines
Support data and cloud transformation initiatives
Contribute to our cloud strategy based on prior experience
Understand the latest technologies in a rapidly innovating marketplace
Work independently with stakeholders across the organization to deliver point and strategic solutions

Skills - Experience and Requirements
Prior experience working as a Data Warehouse/Big Data architect.
Experience with the Apache Spark processing framework and advanced Spark programming in Scala or Python, plus knowledge of shell scripting.
Coding experience in Java and/or Scala is a must.
Experience using AWS APIs (e.g., Java API, Boto3) to integrate different services.
Experience in both functional programming and Spark SQL, processing terabytes of data.
Specifically, this experience must include writing Big Data engineering jobs for large-scale data integration in AWS. Prior experience writing machine learning data pipelines in Spark is an added advantage.
Advanced SQL experience including SQL performance tuning is a must.
Experience in logical and physical table design in a Big Data environment to suit processing frameworks.
Knowledge of setting up, using, and tuning Spark on EMR with a resource management framework such as YARN, or standalone Spark.
Experience writing Spark streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis is required.
Knowledge of a variety of data platforms such as Redshift, S3, DynamoDB, and MySQL/PostgreSQL.
Experience with AWS services such as EMR, Glue, Athena, IAM, Lambda, CloudWatch, and Data Pipeline.
Experience with AWS cloud transformation projects is required.


Regards,

Sachin Tyagi

Senior Associate - Recruitment
Okaya Inc

4949 Expy Dr N, Suite 101, Ronkonkoma, NY 11779

Landline: +1-631-267-4883 Extn. 328

Email: styagi@okayainc.com | www.okayainc.com
