New openings with my client. Please share the resume along with the position you are submitting each profile for.
Hi Team,
Please share profiles for the roles below. Don’t hold any profile because of the rate.
1. Sr. Data Architect
Location – Fort Mill, SC / Austin, TX
Job Description:
- Experience in on-prem to cloud data migration.
- Hands-on experience with AWS services.
- Provide thought leadership on reference architecture, data movement, and metadata.
- Good understanding of AWS/Snowflake and supporting AWS services.
2. AWS/Big Data Engineer (TECH LEAD) -- 3 Openings
Location – Fort Mill, SC / Austin, TX (Remote Start)
Job Description:
- Extensive experience setting up core data pipeline services.
- Modeling experience in Amazon Redshift and establishing an API layer for data access within and across Virtual Private Clouds (VPCs).
- Strong in AWS native services (Glue, Redshift); good understanding of Python, PySpark, and Apache Spark; able to provide strategic thinking.
- Hands-on engineer who has created ingestion pipelines and applied business transformations (a minimal PySpark sketch follows this list).
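As a rough illustration of the ingestion and business-transformation work this lead role describes, here is a minimal PySpark sketch. The bucket paths, column names, and filter rule are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ingestion sketch: read raw CSV from S3, apply a simple
# business transformation, and write partitioned Parquet back to S3.
# All paths and column names below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Ingest raw files (hypothetical bucket/prefix).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Example business transformation: keep completed orders and derive a net amount.
curated = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write curated data partitioned by date (hypothetical target location).
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```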
3. Data Analyst -- 3 Openings
Location – Fort Mill, SC / Austin, TX (Remote Start)
Mandatory:
Cloud experience.
Big data ecosystem experience.
Need a strong analyst with cloud migration project experience.
Open API, data modeling, ETL, good exposure to AWS, strong SQL skills.
- This position is responsible for supporting high profile projects and efforts as a lead data analyst, adhering to an enterprise-wide data governance framework to achieve the required level of consistency, quality, understanding, and protection of data.
- MUST HAVE experience in Open API, data modeling, ETL, and data analysis.
- Perform data analysis and profiling using standard toolsets as well as manual analysis methods to interpret trends or patterns, identify relationships, understand anomalies, assess data quality, and add context to requirements (a minimal profiling sketch follows this list).
- Perform data mapping and gap analysis.
- Document Data Requirements Specifications.
- Understand data models consisting of business-user-friendly data marts from online and offline data sources.
- Work closely with data engineers in order to automate the collection and analysis of the raw data required for the models you build.
- Clearly scope, track, execute, and communicate on projects (ticketing-system experience is a plus) in an agile environment.
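For the data-analysis and profiling duties listed above, the following is a minimal pandas sketch of a first profiling pass. The input file, columns, and business key are hypothetical placeholders.

```python
# Minimal data-profiling sketch with pandas: per-column completeness,
# cardinality, and basic descriptive statistics. The input file and
# column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("customer_extract.csv")  # hypothetical source extract

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct": df.nunique(),
})
print(profile)

# Numeric distributions help flag anomalies and outliers.
print(df.describe().T)

# Simple duplicate check on a hypothetical business key.
print("duplicate keys:", df.duplicated(subset=["customer_id"]).sum())
```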
4. Database DevOps Engineer
Location – SC / Austin, TX
Job Description:
Responsibilities
· Develop software, scripts, and processes to automate database deployments using Liquibase or Redgate (a minimal automation sketch follows the qualifications list below).
· Design and roll out rollback processes, reporting of database changes, SQL rules/scanning, and linting tools.
· Collaborate with stakeholders to better understand the needs for their databases, make suggestions and ultimately deploy solutions in the form of databases and their orchestrations.
· Design, maintain, and support standard DevSecOps tools and platforms that enable continuous integration and delivery, such as GitHub, Artifactory, Docker, Octopus Deploy, and AWS platforms.
Skills and Qualifications
· Minimum 5 years of experience in IT
· Minimum 2 years of experience deploying database changes through Liquibase (Datical) or Redgate
· Experience with SQL Server/Oracle/PostgreSQL/MySQL and knowledge of admin settings/setup for these databases
· Solid experience using AWS cloud platforms, AWS database services, CI/CD solutions (including Docker), and DevOps principles
· Deployment to Amazon RDS/Aurora/Redis/MongoDB
· Experience using DevOps tools such as Jenkins, CircleCI, TeamCity, or Octopus Deploy
· Understanding of rollback processes, reporting of database changes, SQL rules/scanning, and linting tools
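As a minimal sketch of the deployment automation named in the first responsibility, the snippet below wraps the Liquibase CLI from Python. It assumes Liquibase 4.x is installed and on PATH; the changelog path, JDBC URL, and environment-variable names are hypothetical, and a Redgate-based pipeline would look different.

```python
# Minimal sketch of automating database deployments with the Liquibase CLI.
# Assumes the `liquibase` binary is on PATH; changelog path, JDBC URL, and
# environment-variable names are hypothetical placeholders.
import os
import subprocess

def run_liquibase(command):
    base = [
        "liquibase",
        "--changelog-file=db/changelog/db.changelog-master.yaml",
        f"--url={os.environ['DB_JDBC_URL']}",
        f"--username={os.environ['DB_USER']}",
        f"--password={os.environ['DB_PASSWORD']}",
    ]
    subprocess.run(base + command, check=True)

if __name__ == "__main__":
    run_liquibase(["status", "--verbose"])  # show pending changesets
    run_liquibase(["update"])               # apply pending changesets
    # Example rollback of the most recent changeset if a deployment fails:
    # run_liquibase(["rollback-count", "1"])
```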
5. Data Warehouse Release Manager
Location: Remote
- Release management around data warehouse/data lake set of applications in AWS cloud environment
- Provides release schedule for Run and Build calendars
- Oversees pre-production deployment tasks
- Oversees production deployment tasks
- Leads deployment weekend activities from a status reporting perspective for all release items
- Role is responsible for compiling production change requests
- Role is responsible for securing agreement from all supporting teams (internal & external) on support for all releases
- Responsible to assist with escalation in the event of any errors or failures
- Role starts/ends with pulling in the owners/responsible parties to resolve the problem and ensuring resolution is achieved.
- Role is expected to assist with tracking down on-call resources from supporting teams
- Responsible to assist with paperwork for all CHG requests including emergency and expedited changes
- Conduct release readiness reviews and business Go/No-Go reviews
- Include all involved teams
- Develop implementation plan, call-outs, job run sequences, etc.
- Develop on-call/responsible party list for each release in the event of errors/issues.
- Must be able to read/understand project plans
- Must be technically minded (able to understand application components; job executions; dependencies; and technical errors) … while not expected to write code, the ability to understand and interpret job failures and urgencies related to fixes is critical for a successful candidate.
6. Data Engineer with AWS Glue
Location – Remote
Job Description:
- 10+ years of IT experience.
- Excellent knowledge of OOP programming, file systems, database access, and ETL transformation through Python.
- Prior experience with AWS cloud services, Redshift, S3, the boto3 package, and Glue is required.
- Very good knowledge of SQL and data warehouse concepts.
- Prior experience with Informatica or any other ETL tool is preferred.
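To illustrate the Python/Glue side of this role, here is a minimal Glue ETL job sketch. The catalog database, table name, and output path are hypothetical placeholders, and the script assumes it runs inside a Glue job where the awsglue libraries are available.

```python
# Minimal AWS Glue ETL sketch: read a catalog table into a DynamicFrame,
# apply a trivial transformation, and write Parquet to S3. Database, table,
# and output path are hypothetical placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a table already crawled into the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"
)

# Example transformation in Spark terms: drop rows without an event type.
df = dyf.toDF().filter(F.col("event_type").isNotNull())

# Sink: Parquet in S3 (hypothetical bucket).
df.write.mode("overwrite").parquet("s3://example-curated-bucket/events/")

job.commit()
```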
7. AWS Solution Architect
Location – Princeton, NJ (Remote Start)
Job Description:
- Need a solid combination of Redshift and DevOps skills.
- Hands-on candidate with AWS and Redshift experience.
- Candidate will architect and code POC components in Redshift and Glue to mock up and demonstrate the best solutions for our environment.
- Will have an understanding of Redshift at a SQL tuning level, knowing the ins and outs of the engine and optimizer.
- Hands-on with Redshift WLM and setting up performance groups, views, skews, and cluster-based tables (a short table-design sketch follows the skills list).
Skills/Qualification:
- 10 years’ experience plus degree required.
- Must have excellent communication and interpersonal skills, and the ability to lead by example.
- Required: AWS, Redshift, Python.
- Nice to have: AWS Solutions Architect certifications.
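Relating to the table-design and SQL-tuning expectations above (the sketch referenced in the job description), the snippet below creates a distribution- and sort-keyed Redshift table and inspects a query plan via psycopg2. The cluster endpoint, credentials, and schema are hypothetical placeholders.

```python
# Minimal sketch of Redshift table design and plan inspection from Python,
# using psycopg2 (Redshift speaks the PostgreSQL wire protocol). Host,
# credentials, table, and keys are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="***",
)

ddl = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)      -- co-locate rows joined on customer_id
SORTKEY (sale_date);       -- prune blocks for date-range filters
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)
    # Inspect the optimizer's plan for a typical aggregate query.
    cur.execute("EXPLAIN SELECT customer_id, SUM(amount) "
                "FROM sales_fact WHERE sale_date >= '2024-01-01' "
                "GROUP BY customer_id;")
    for row in cur.fetchall():
        print(row[0])
```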
8. Sr. Data Cloud Analytics Specialist
Princeton, NJ
Job Description:
The successful candidate will have a master’s degree in Computer Science or a related technical field, or a bachelor’s degree with 10 years of experience in Computer Science or a related field. The candidate should have full-stack experience implementing distributed data analytics health data systems.
Cloud Implementation Experience:
• Kubernetes
• Docker
• Apache Spark
• Distributed datastores (Hadoop, Snowflake, Redshift, MongoDB)
• AWS EMR
• RESTful API implementation
• Microservices implementation
• AWS Simple Queue Service
Analytics Workspace Experience:
• Jupyter Notebooks
• RStudio
• SAS 9
• AWS EMR Studio
• Machine Learning
• Bioinformatics
Health Data Models and Data Format Experience:
• OMOP CDM
• Bridge
• NCI Thesaurus
• HL7 FHIR (STU3, R4, SMART)
• HL7 v2
• LOINC, SNOMED, ICD-10-CM, CPT
Programming Knowledge:
• JavaScript/TypeScript
• Python
• Java/Scala
• R
• Go
• React/Angular
DevOps, CI/CD Experience:
• GitHub
• Terraform
• AWS SAM
Successful candidates will have excellent written and oral communication skills, work well in a distributed team environment, and be available for full-time effort with occasional off-hours, deadline-driven extra effort.
9. Job Title: Redshift DBA
Pay -- $55/hr
Job Description
• Excellent understanding of the AWS architecture (RDS, S3)
• Redshift and Aurora cluster administration support
• Redshift and Aurora database administration support
• Excellent performance tuning skills
• ADFS and user access management
• WLM
• Data Sharing
• DMS replication
• DR configuration and support
• Cloud monitoring of memory usage
• QuickSight
• CloudWatch
• Good communication skills and a team player
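As a hedged illustration of the cloud-monitoring portion of this DBA role, the sketch below pulls a Redshift cluster metric from CloudWatch with boto3. The region, cluster identifier, and choice of metric are assumptions, not details from the posting.

```python
# Minimal monitoring sketch: fetch a Redshift cluster metric from CloudWatch
# with boto3. The region and cluster identifier are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="PercentageDiskSpaceUsed",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "example-cluster"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

# Print the last hour of datapoints in time order.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```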
10. Principal Data Architect
Location – Princeton, NJ (Remote start)
Job Description:
As Principal, Data Architecture, you will play a hands-on role in driving the overall technical and solutions architecture within Fidelity Brokerage Technology. Specifically, you will help implement FBT’s cloud-based data strategy, which includes database design, ETL/ELT pipelines, data lakes, automation, and orchestration. You will work closely with business partners and technology teams to understand business objectives and help define the appropriate architecture direction for them.
The Expertise and Skills You Bring:
- Bachelor’s or master’s degree in a technology-related field (e.g., Engineering, Computer Science) required
- Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable analytical and reporting applications
- Expertise in data management standard methodologies such as data integration, data security, data warehousing, data analytics, metadata management, and data quality
- Ability to evaluate, prototype, and recommend emerging data technologies and platforms
- Experience designing and developing data warehouse and data lake ETL/ELT pipelines using data integration frameworks
- Experience with DevOps and CI/CD pipeline technologies is desirable
- Understanding of Agile methodologies (Scrum and Kanban)
- 10+ years of IT experience with demonstrated ability as a data architect and prior experience as a tech/solution designer in data and/or business intelligence and analytics
- Relevant certifications in public cloud services (e.g., AWS, Snowflake) are highly desirable
11. Director, Data Architect
Location – Princeton, NJ (Remote start)
Job Description:
- Bachelor’s or master’s degree in a technology-related field (e.g., Engineering, Computer Science) required
- Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable transactional and operational systems
- Extensive knowledge of brokerage systems and processing preferred
- Expertise in data management standard methodologies such as event-driven processing, technology architecture, data operations, and data management strategies
- Ability to evaluate, prototype, and recommend emerging data technologies and platforms
- Understanding of Agile methodologies (Scrum and Kanban)
- 15+ years of IT experience with demonstrated ability as a data architect and prior experience as a tech/solution designer in data and/or trading/brokerage systems
- Relevant certifications in public cloud services (e.g., AWS, Azure) are highly desirable
- Leadership skills to architect and design end-to-end data solutions
- Ability to collaborate and partner with business domain leads, product owners, enterprise architects, and other functional leads
- Demonstrated experience in architecting and implementing transactional and/or operational database solutions using Oracle or related technologies
- Proven ability in architecting and implementing batch and bulk processing systems
- Experience with relevant AWS technologies such as S3, IAM, KMS, SQS, Lambda, and CloudFormation
- Expertise in relational database technologies such as Oracle; experience with cloud data services such as AWS RDS, DynamoDB, and Aurora is desirable
- Demonstrated experience in event-driven architecture design and development using industry technologies such as Kafka
- Proficient in Master Data Management (MDM) strategies and frameworks
- Relevant understanding of DevOps technologies such as Jenkins, uDeploy, Concourse, and Datadog
12. Snowflake Data Architect
Location – Princeton, NJ (Remote start)
Job Description:
- 8–10+ years of experience in data pipeline engineering for both batch and streaming applications.
- Experience with data ingestion processes, creating data pipelines, and performance tuning with Snowflake and AWS.
- Implementing SQL query tuning, cache optimization, and parallel execution techniques.
- Must be hands-on coding capable in at least one core language (Python, Java, or Scala) with Spark.
- Expertise in working with distributed DW and cloud services (such as Snowflake, Redshift, and AWS) via scripted pipelines.
- Leveraged frameworks and orchestration such as Airflow as required for ETL pipelines.
- This role intersects with the “big data” stack to enable varied analytics, ML, etc., not just data warehouse workloads.
- Handling large and complex sets of XML, JSON, and CSV from various sources and databases.
- Solid grasp of database engineering and design.
- Identify bottlenecks and bugs in the system and develop scalable solutions.
- Unit test and document deliverables.
- Capacity to successfully manage a pipeline of duties with minimal supervision.
Skills required:
• Python, SQL, DW concepts and logic
• Airflow orchestration
• Cloud DWH – Snowflake
• AWS cloud knowledge
Nice to have: working knowledge of message queuing, stream processing, and highly scalable “big data” data stores.
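For the Airflow-orchestrated Snowflake pipeline work described above, here is a minimal DAG sketch using the Snowflake provider for Airflow 2.x. The connection ID, stage, tables, and schedule are hypothetical placeholders, and newer provider versions may prefer SQLExecuteQueryOperator over SnowflakeOperator.

```python
# Minimal Airflow DAG sketch orchestrating a batch load into Snowflake.
# The connection ID, stage, tables, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="snowflake_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_raw = SnowflakeOperator(
        task_id="copy_raw_orders",
        snowflake_conn_id="snowflake_default",
        sql="""
            COPY INTO raw.orders
            FROM @raw.ext_stage/orders/
            FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
        """,
    )

    build_mart = SnowflakeOperator(
        task_id="build_orders_mart",
        snowflake_conn_id="snowflake_default",
        sql="""
            CREATE OR REPLACE TABLE mart.daily_orders AS
            SELECT order_date, COUNT(*) AS order_count
            FROM raw.orders
            GROUP BY order_date;
        """,
    )

    # Load the raw layer first, then rebuild the mart.
    load_raw >> build_mart
```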
=========================================
For other Roles click HERE
Thanks & Regards,
Note – VBeyond is fully committed to Diversity and Equal Employment Opportunity.
Disclaimer: We respect your Online Privacy. This is not an unsolicited mail. Under Bill S 1618 Title III passed by the 105th US Congress this mail cannot be considered Spam as long as we include Contact information and a method to be removed from our mailing list. If you are not interested in receiving our e-mails then please reply to DeepakM@VBeyond.com subject=Remove. Also mention all the e-mail addresses to be removed which might be diverting the e-mails to you. We are sorry for the inconvenience.
Please do not print unless absolutely necessary. Spread environmental awareness