Hi Professionals-
For this high rate, they must be ROCKSTARS.
Big Data/Machine Learning Lead
Duration: 3 months, contract-to-hire; must be eligible to convert without visa sponsorship (no EAD-OPT, CPT, H-1B, or H-4).
Location: McLean, VA – onsite, no remote option
Must have experience:
Building large-scale batch systems with Python/Hadoop
Building large-scale data systems in a streaming context using Kafka and Spark
Experience building machine learning models for production use
Running CI/CD pipelines
Experience leading a team
Hybrid of Big Data and Machine Learning Engineer
Preferred skills:
Python
Hadoop
Java
Elasticsearch
Data Engineering
Machine Learning – decision trees, TensorFlow, Keras, logistic regression, Deep Learning
Tools:
Heavy Python, Java, and Scala – not really looking for R.
The role converts to permanent after the original duration and moves into a leadership role at that point.
JOB DESCRIPTION
• Collaborating as part of a cross-functional Agile team to create and enhance software that enables state of the art, next generation Big Data & Fast Data applications.
• Building efficient and scalable storage for structured and unstructured data.
• Developing and deploying distributed computing Big Data applications using Open Source frameworks like Apache Spark, Apex, Flink, NiFi, Storm, and Kafka on AWS Cloud
• Building and running large-scale NoSQL databases like Elasticsearch and Cassandra.
• Utilizing programming languages like Java, Scala, Python.
• Designing and building applications for the cloud (AWS, Azure, GCP, DO)
• Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test Driven Development to enable the rapid delivery of working code utilizing tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git and Docker.
• Performing unit tests and conducting reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
• B.A./B.S. in Computer Science or related technical discipline
• 3+ years of professional programming experience in Java, Scala, Python, C++, or Golang
• 3+ years of professional experience working on data streaming (Apache Spark, Flink, Storm, and/or Kafka) or data warehousing (Snowflake Analytics, Presto, AWS Athena, AWS Redshift) applications.
• 2+ years working with Linux-based OSes (Red Hat preferred)
• 2+ years working with scripting languages (Shell, Python, Perl)
• Experience working within cloud environments (AWS preferred)
• Experience with streaming analytics, complex event processing, and probabilistic data structures.
• Experience with columnar data stores and MPP
Submission Format:
Legal Name:
Phone Number:
Email Address:
Last 4 of SSN:
MM/DD of birth:
Visa Status:
Location:
Availability to interview:
Availability to start:
Bio:
Skill Highlights – provide the number of years the candidate has with each of the following skills:
Big Data Engineering
Machine Learning skills
Building large-scale batch systems with Python/Hadoop
Building large-scale data systems in a streaming context using Kafka and Spark
Experience building machine learning models for production use
Running CI/CD pipelines
Experience leading a team - # years
Experience leading a team - # people led
Python
Hadoop
Java
Elasticsearch
Data Engineering
Machine Learning – decision trees, TensorFlow, Keras, logistic regression, Deep Learning
Python
Java
Scala
Any relevant Certifications?
Other points that make this candidate a great fit for the role:
Thanks and Regards,
Joseph Adams
|joseph@sonsoftinc.com| |www.sonsoftinc.com|
|Yahoo IM: joseph.whiz@yahoo.com|
|Desk: 229-454-7386|
|Fax: 678-317-9601|
|11797 North Fall Lane, Suite #701|
|Alpharetta, GA 30009|
"Coming together is a beginning. Keeping together is progress. Growing together is success...."