Need Data Engineer@Bentonville, AR
Role: Data Engineer
Location: Bentonville, AR
The mandatory skills are GCP, Dataproc, Airflow, Python, Spark, Hive, and Teradata.
We need immediate joiners.
Resource Request ID: 11101618
Description:
UST Global® is looking for a highly energetic and collaborative Data Engineer with experience building enterprise data intelligence on cloud platforms. The Data Engineer will be responsible for delivering quality reports and data intelligence solutions to the organization and for assisting client teams in drawing insights to make informed, data-driven decisions for a leading retailer in the United States. The ideal candidate is experienced in all phases of the data management lifecycle: gathering and analyzing requirements; collecting, processing, storing, and securing data; using, sharing, and communicating it; and archiving, reusing, and repurposing it. The candidate will identify and manage existing and emerging risks that stem from business activities and ensure these risks are effectively identified and escalated so they can be measured, monitored, and controlled. The candidate should be a proven self-starter with a demonstrated ability to make decisions and accept responsibility and risk. Excellent written and verbal communication skills and the ability to collaborate effectively with domain experts and the IT leadership team are key to success in this role.
Responsibilities:
As a Data Engineer, you will
• Provide hands-on software development for a large data project hosted in a cloud environment.
• Develop and refine the technical architecture used by the Teradata, Python, Spark, and Hadoop development teams.
• Provide expertise in the development of estimates for EPICs and User Stories for planning and execution.
• Be able to help others break down large team goals into specific and manageable tasks.
• Be involved in and supportive of the agile sprint model of development, helping to enforce the practice and the discipline.
• Coach and mentor team members on Teradata, Python, Spark and Hadoop development best practices.
• Define and enforce application coding standards and best practices.
• Identify and resolve technical and process impediments preventing delivery teams from meeting delivery commitments.
• Align and collaborate with architects, other team leads, and IT leadership to develop technical architectural runways supporting upcoming features and capabilities.
• Diagnose and troubleshoot performance and other issues.
• Collaborate with peers, reviewing complex change and enhancement requests.
• Evaluate potential changes and enhancements for objectives, scope and impact.
• Take a proactive approach to development work, leading peers and partners to strategic technical solutions in a complex IT environment.
• Document functional/technical requirements and design based on requirements or objectives.
• Mentor peers on coding standards, patterns and strategy.
• Guide the team on best practices in Teradata, Python, Spark and Hadoop as well as perform code reviews.
• Build and maintain active relationships with customers to determine business requirements.
• Partner with other IT teams during integration activities to facilitate successful implementations.
• Participate in on-call application support and respond to application issues when identified.
• Communicate effectively with technical peers in a clear manner, while also being able to articulate complex solutions in ways nontechnical business partners can understand.
• Have a good understanding of where your project fits into the larger goals for engineering, and adapt your work so that the priorities of the systems you are creating match those of the organization.
Requirements:
• BA/BS degree or technical institute training or equivalent work experience
• 4+ years of hands-on Teradata, Python, Spark, and Hadoop development experience
• 1+ years of combined hands-on Google Cloud Platform (GCP) development experience
• Expertise working with the GCS connector, Dataproc, and BigQuery
• Experience working with ADF Python is an added advantage
• Experience with Big Data processing frameworks (Spark, Hadoop) is required.
• Experience with DevOps tools and techniques (Continuous Integration, Jenkins, Puppet) is required.
• Experience with one or more software version control systems (e.g. Git, Subversion)
• Experience overseeing team members.
• Excellent communication and presentation skills.
• Experience in an agile environment
• Experience with Spring Boot, Maven, and Bamboo, and strong debugging skills
• Strong understanding of builds, software development, and Git
• Strong effective communication skills, both written and verbal
The most successful candidates will also have experience in the following:
• Gitflow
• Atlassian products – Bitbucket, JIRA, Confluence, etc.
• Continuous Integration tools such as Bamboo, Jenkins, or TFS
Bhaskar Kumar Koppisetti
Email: kumar.koppisetti@3sbc.com
Hangout: bhaskarkumar3sbc@gmail.com
An E-Verified Company
Note: If you are not able to reach me over the phone, please email me; I will reply to you as soon as possible.