Hello,
Hope you are doing well.
Please let me know if you have any consultants for the roles below:
1. Position: Hadoop Developer (Application Support)
Location: NYC, NY
Duration: 6+ Months
Interview: Phone and F2F (client will pay F2F travel expenses for non-local candidates)
Job Description:
Qualifications:
• Bachelor's degree in a related field required, advanced degree preferred.
• Experience managing an application support function at a financial institution.
• 7 or more years' experience performing application support at a financial institution.
• Knowledge and experience with SQL and relational database structures, primarily Oracle and Sybase.
• Knowledge of ETL, KYC, and enterprise data lakes – data acquisition, data discovery tools, and the Enterprise Data Dictionary.
• Knowledge of Active Directory, LDAP, and Kerberos authentication.
• Hands-on experience with the Hadoop platform.
• Knowledge of data acquisition, data factory, and data services tools, e.g., Attunity, LORE, Global ID, Waterline, TAMR, and IBM DataStage.
• Hands-on experience with Linux/Windows platforms.
• Knowledge of the SDLC and tools such as source repositories, automated builds, and release management.
Perform monitoring and automation tasks for large-scale distributed systems, especially those involving Hadoop, Hive, Beeline, Ambari, Solr, Spark, NoSQL databases, and analytical programming languages (Python/R).
2. Position: Big Data Engineer
Location: Pittsburgh, PA
Duration: 6+ Months
Interview: Phone and Skype
Job Description:
Note: The client requires Big Data Engineers with Google Cloud experience.
To build highly resilient, fault-tolerant, batch and real-time data pipelines on Google Cloud Platform.
Requirements:
o 6+ years of technical experience developing scalable enterprise applications using Java or Python.
o Experience designing and building big data ETL pipelines.
o Proficiency in cloud and distributed computing.
o Experience with AWS or Google Cloud Platform.
o Experience using Docker, Kubernetes, CI/CD pipelines, and build automation tools such as Gradle, Ant, or Maven.
o Strong ANSI SQL skills and experience required.
o Experience with cloud-based big data platforms such as Dataproc, BigQuery, Bigtable, and Redshift a plus.
o Experience with orchestration and scheduling platforms such as Apache Airflow or Luigi.
Responsibilities:
o Develop high-performing, scalable ETL pipelines and applications in a fast-paced, Agile environment.
o Apply SDLC principles and methodologies such as Scrum, CI/CD, software and product security, scalability, documentation practices, refactoring, and testing techniques.
o Be knowledgeable about open-source tools and technologies, and use and extend them where appropriate to develop solutions.
Regards,
Parul Bobal
Cybertec, Inc.,
11710 Plaza America Drive
Suite #2000, Reston, VA 20190
Direct: 703-880-4674
Fax: 703-871-5291
Email: parul@cy-tec.com
Hangout : shyamnickan@gmail.com
If you would like to unsubscribe from Cybertec, Inc., please click here.