Hi,
I hope you are doing well.
We have an immediate job requirement for a BigData-PySpark Developer with my client in Boston, MA. Please review the details below if you are interested.
Role: BigData-PySpark Developer
Location: Boston, MA (remote until the COVID-19 situation resolves)
Duration: 12 Months
Interview Process: HackerRank coding test + 2 WebEx rounds.
Job Description:
Two main skills:
- Ability to write complex SQL queries.
- Ability to write complex Python code to manipulate data.
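For a sense of what these two skills look like together in practice, here is a minimal, illustrative PySpark sketch; the table, column, and path names (claims, member_id, paid_amount, and so on) are assumptions made for the example, not the client's actual schema.

# Minimal PySpark sketch: a SQL aggregation followed by Python-side data manipulation.
# All table, column, and path names below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_rollup_example").getOrCreate()

# SQL side: aggregate paid claims per member per month.
claims = spark.read.parquet("/data/claims")  # hypothetical input path
claims.createOrReplaceTempView("claims")
monthly = spark.sql("""
    SELECT member_id,
           date_trunc('month', service_date) AS svc_month,
           SUM(paid_amount)                  AS total_paid,
           COUNT(*)                          AS claim_cnt
    FROM   claims
    WHERE  claim_status = 'PAID'
    GROUP  BY member_id, date_trunc('month', service_date)
""")

# Python side: derive a per-claim average and flag heavy utilizers.
enriched = (monthly
            .withColumn("avg_paid", F.col("total_paid") / F.col("claim_cnt"))
            .withColumn("high_utilizer", F.col("claim_cnt") > F.lit(10)))

enriched.write.mode("overwrite").parquet("/data/claims_monthly_rollup")  # hypothetical output path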
Responsibilities
- Looking for a very hands-on, passionate coder who can develop code per business and technical requirements.
- We are looking for someone who is passionate about technology and engineering
- Has exceptional analytical skills and the ability to apply knowledge and experience in decision-making to arrive at creative, commercially sound solutions
- Manage multiple tasks and use sound judgment when prioritizing
- Collaborate with global cross functional team in building customer-centric products
- Analyze existing software implementations to identify areas of improvement and provide deadline estimates for implementing new features
- Update and maintain documentation for team processes, best practices and software runbooks
- Establish trusted partnerships with peers, and customer stakeholders
- Leverage technology to deliver business value
- Energetic, self-directed and self-motivated
- Ability to communicate clearly
Minimum Qualifications:
- B.S. or higher in Computer Science
- Very hands-on coding experience using a modern programming language (Python or Scala)
- Understanding of data ingestion mechanics
- 7+ years of hands-on coding experience in the Hadoop ecosystem with at least one major distribution: AWS, Azure HDInsight/Databricks, Hortonworks HDP, or Cloudera CDP
- 7+ years of working experience writing and understanding complex HQL (Hive) and SQL queries
- 5+ years of hands-on experience writing complex Spark programs in Scala/Python to process huge volumes of data
- Experience with developing systems that can scale to large amounts of data
- 7+ years of experience in Unix Shell scripting for automation activities
- 7+ years of development experience with at least one traditional relational database, such as Teradata, Oracle, Netezza, or SQL Server
- Experience in working with Scrum teams
Preferred qualifications
• 3+ years of experience with real-time streaming tools such as Apache Kafka and NiFi (see the brief streaming sketch after this list)
• 7+ years of experience using ETL tools such as Informatica, IBM InfoSphere, or Talend
• Knowledge of Healthcare and Insurance Domain
• Experience in Data Warehouse and BI Analytics
• Good understanding of the project lifecycle process, with experience working in an agile scrum operating model
• Good understanding of the software development lifecycle, including continuous build and test, peer code review, design review, info security review, production, UAT and QA release cycles
• Experience with NoSQL databases, such as HBase, Cassandra
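As a rough illustration of the real-time streaming work mentioned above, the following PySpark Structured Streaming sketch consumes a Kafka topic; the broker address, topic name, and output paths are assumptions for the example, and the job would need the spark-sql-kafka connector package available on the cluster.

# Hypothetical sketch: consuming a Kafka topic with Spark Structured Streaming.
# Broker address, topic name, and paths below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_stream_example").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
          .option("subscribe", "claim-events")                 # hypothetical topic
          .load())

# Kafka delivers the message value as binary; cast it to a string and keep the event time.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"),
)

# Write micro-batches to Parquet with a checkpoint so the stream can recover on restart.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/claim_events")                 # hypothetical output path
         .option("checkpointLocation", "/checkpoints/claim_events")
         .start())

query.awaitTermination()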
Thanks
Gigagiglet
gigagiglet.blogspot.com