Urgent Requirement | Data Engineer with Spark, Scala | Minneapolis, MN (Remote during COVID)

Hi Partners,

 

Hope you are doing well!

Please see the job details below and let me know if you would be interested in this role.

If interested, please send me your resume, contact details, availability, and a good time to connect with you.

 

Role: Data Engineer with Spark, Scala

Job Location: Minneapolis, MN (Remote during COVID)

Duration: Long Term Contract

 

Experience: 10+ years

 

Job Description:

  • Experience in data pipeline engineering for both batch and streaming applications.
  • Kafka streaming and security: understanding of authentication and SSL when connecting to Kafka; experience with Confluent Kafka.
  • Experience with data ingestion processes, creating data pipelines, and performance tuning with Snowflake and AWS.
  • Working knowledge of AWS fundamentals (S3 storage, EC2), including how to connect to AWS services and handle encryption.
  • Implementing SQL query tuning, cache optimization, and parallel execution techniques. Must be capable of hands-on coding in at least one core language (Python, Java, or Scala) with Spark.
  • Scala: strong troubleshooting skills; knowledge of ETL/ELT; distributed processing with Spark, including processing optimization and performance tuning (knows PySpark).
  • Knowledge of Docker: what it is and how to build, test, and deploy with it.
  • Expertise working with distributed data warehouses and cloud services (such as Snowflake, Redshift, and AWS) via scripted pipelines, leveraging orchestration frameworks such as Airflow for ETL as required. This role intersects with the big data stack to enable varied analytics, ML, etc., not just data warehouse workloads.
  • Handling large and complex sets of XML, JSON, Parquet, and CSV data from various sources and databases. Solid grasp of database engineering and design; ability to identify bottlenecks and bugs in the system and develop scalable solutions; unit testing and documenting deliverables; capacity to manage a pipeline of duties with minimal supervision.
  • Nice to have: StreamSets, Databricks, Airflow orchestration.
  • DevOps: working knowledge of a DevOps environment. Basic understanding of how to move data into Snowflake from Delta Lake (Databricks) or an on-prem data lake; some knowledge of Kerberos.

 
Thanks & Regards,

 

Akash Raj

Human Resource Executive

Call: 716-952-9960

E: AkashR@VBeyond.com | www.vbeyond.com

390 Amwell Road, Suite # 107, Hillsborough, NJ 08844

Note – VBeyond is fully committed to Diversity and Equal Employment Opportunity.

 

Disclaimer: We respect your online privacy. This is not an unsolicited mail. Under Bill S.1618 Title III passed by the 105th US Congress, this mail cannot be considered spam as long as we include contact information and a method to be removed from our mailing list. If you are not interested in receiving our e-mails, please reply to AkashR@vbeyond.com with the subject "Remove". Please also mention any e-mail addresses that might be forwarding these messages to you. We are sorry for the inconvenience.
