Google Cloud Big Data Engineer - Morristown, NJ

Job title: Google Cloud Big Data Engineer
Location: Morristown, NJ
Start date: ASAP
Duration: Long Term
 
 
 Qualified candidates should demonstrate:
• Solid hands-on experience with the GCP Big Data Stack
• GCP, BigQuery, Dataproc, Cloud SQL, Cloud Storage, Pub/Sub
• Solid hands-on experience with the Hadoop Ecosystem
• Spark, Hive, HDFS, HBase, Cassandra, Kafka, Sqoop, Flume
• Strong scripting proficiency with Python
• Production experience designing and developing Database Schema (SQL & NoSQL)
• Production experience designing and developing Data Warehousing
• Production experience designing, developing, scaling, and optimizing Data Pipelines (ETL, Batch, Stream)
• Understanding of Machine Learning models and algorithms is a huge asset
 
 General Requirements
·        Good domain understanding of the pharma industry, especially Commercial and Operations
·        Understanding of current trends in healthcare and life-sciences infrastructure (including databases, enterprise integration buses, protocols, APIs, and formats), security, networking, and cloud-based delivery models
·        Work closely with Business Partners to design Enterprise Data Warehouse
·        Build the strategy, direction and roadmap for data warehousing, business intelligence and analytics for the client
·        Rapidly architect, design, prototype, and implement solutions to tackle traditional BI, Big Data needs for clients
·        Work with subject-matter experts to understand client needs and ingest a variety of data sources, such as social media, internal/external documents, financial data, and operational data
·        Research, experiment with, and apply leading cloud services, traditional BI, and big data methodologies
·        Translate client business problems into technical approaches that yield actionable results across multiple, diverse domains; communicate results and educate others through design and build of insightful visualizations, reports, and presentations
·        Provide technical design leadership with the responsibility to ensure the efficient use of resources, the selection of appropriate technology and use of appropriate design methodologies
·        Experience with horizontally scalable file systems or databases such as S3 and HDFS; experience in technology areas such as big data appliances, in-memory computing, and NoSQL databases (such as HBase, MongoDB, Cassandra, Redis)
·        Knowledge of MPP and graph databases for network analysis
·        Knowledge of open-source tools and services such as Python, Apache Spark, Apache Airflow, Apache NiFi, and Apache Flink
·        Experience with traditional relational databases (RDBMS) such as MySQL, DB2, PostgreSQL, MariaDB, and Oracle
·        Additional focus areas would be real-time analytics (such as Kafka, Spark Streaming, Flink), in-memory processing (Spark), Next Best Action, and the Internet of Things (IoT)
·        Knowledge of designing security architectures on AWS and Hortonworks 
·        Good to have: knowledge of collaboration and project-management tools such as Git and Jira
·        Good to have: knowledge of container technologies such as Mesos, Docker, and Kubernetes
·        Key responsibilities in this role include data lake implementation, BI system migration/transformation, and tools/technology comparison and selection
·        Be able to work with globally distributed teams with flexible working hours around the clock.
 
Qualifications
·        Ability to think strategically and translate plans into phased actions in a fast paced, high pressure environment
·        Strong client facing presentation skills necessary to communicate with, and persuade, a wide range of audiences
·        Good working knowledge of AWS cloud services, Hortonworks Data Platform, Hortonworks DataFlow, and open-source big data ecosystem components
·        Bachelor's degree in Computer Science or a related field, or equivalent work experience, required
·        At least 8 years of IT experience, with several years of hands-on data architecture, modeling, and strategy; the majority of it earned building enterprise-level platforms
·        At least 2 years in developing big data solutions and architecture.
·        Experience engaging with the customer’s BI organization to drive BI, machine-learning, and AI program/project implementation
·        Experience in the analysis, design, development, deployment, and documentation of BI/AI solutions
·        Be well versed in the consumer packaged goods, retail, or consumer lending industry
·        Pharma/life-science experience is a plus but not mandatory
·        Excellent oral and written communication skills.
·        Be willing to travel.
 
 
Thanks
 

Company Name | Website
