Hello, I have the below opening. Please share your consultant profile with me at sandeepp@vbeyond.com.
Data Engineer | Portland, OR (onsite from day one; the candidate must work from the office in Oregon)
Minimum 10+ years of experience
Responsibilities: (The primary tasks, functions and deliverables of the role)
· Design and build reusable components, frameworks and libraries at scale to support analytics products
· Design and implement product features in collaboration with business and technology stakeholders
· Identify and solve issues concerning data management to improve data quality
· Clean, prepare and optimize data for ingestion and consumption
· Collaborate on the implementation of new data management projects and the restructuring of the current data architecture
· Implement automated workflows and routines using workflow scheduling tools
· Build continuous integration, test-driven development and production deployment frameworks
· Analyze and profile data to design scalable solutions
· Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues
Requirements / Experience:
· Strong understanding of data structures and algorithms
· Strong understanding of solution and technical design
· Strong problem-solving and analytical mindset
· Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
· Able to quickly pick up new programming languages, technologies, and frameworks
· Experience building scalable, real-time, high-performance data lake solutions in the cloud
· Fair understanding of developing complex data solutions
· Experience working on end-to-end solution design
· Willing to learn new skills and technologies
· Passion for data solutions
Required and Preferred Skill Sets:
· Hands-on experience with AWS — EMR (Hive, PySpark), S3, Athena — or an equivalent cloud platform
· Familiarity with Spark Structured Streaming
· Working experience with the Hadoop stack, handling huge volumes of data in a scalable fashion
· Hands-on experience with SQL, ETL, data transformation and analytics functions
· Hands-on Python experience, including batch scripting, data manipulation, and distributable packages
· Experience with batch orchestration tools such as Apache Airflow or equivalent (Airflow preferred)
· Experience with code versioning tools such as GitHub or BitBucket; expert-level understanding of repo design and best practices
· Familiarity with deployment automation tools such as Jenkins
· Hands-on experience designing and building ETL pipelines; expertise in data ingestion, change data capture, and data quality; hands-on experience with API development
· Experience designing and developing relational database objects; knowledge of logical and physical data modeling concepts; some experience with Snowflake
· Familiarity with Tableau or Cognos use cases
· Familiarity with Agile; working experience preferred
Thanks & Regards,
Sandeep Pandey
VBeyond Corporation
sandeepp@vbeyond.com
Disclaimer: We respect your online privacy. This is not an unsolicited mail. Under Bill S.1618 Title III, passed by the 105th US Congress, this mail cannot be considered spam as long as we include contact information and a method to be removed from our mailing list. If you are not interested in receiving our e-mails, please reply to sandeepp@vbeyond.com with the subject "Remove", and mention all the e-mail addresses that may be forwarding these e-mails to you. We are sorry for the inconvenience.