Intuceo Requirements for MemSQL DBA, Druid DBA, Machine Learning Engineer/AWS Engineer/Data Engineer/Data Scientist, Databricks Cloud Data Engineer
Dear Partner,
We have the below requirements open with us. Please take a look and share the resumes of your best-fit candidates, along with the rate, DL, and a copy of the work authorization.
Role: MemSQL DBA
Experience: 8+ Years
Rate: $85/Hr C2C
Location: Remote
P1: Architect maximum availability cluster configuration and implement as feasible
P1: Configure and administer data replication between clusters and generate reports to capture replication status
P1: Deploy Rowstore, Columnstore and Hybrid architectures as required
P1: Set up and manage cluster resources and resource pools; conduct performance analysis and fine-tune
P1: Discuss and implement best practices for partition management
P2: Set up and optimize database backups using S3 or S3-like storage; validate backups to ensure they can be used for recoverability (a minimal sketch follows this list)
P2: Migrate/Copy data across clusters or databases
P2: Analyze query plans using Studio and/or command-line interfaces and recommend query rewrites
P2: Implement security best practices - least privileged access, data encryption, access monitoring/logging etc.
P2: Analyze inter-node traffic
P3: Troubleshoot issues for stability/availability/performance by opening service requests with database vendor support
P3: Patch and upgrade database clusters
P3: Generate operational playbook and review with the team
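A minimal sketch of the backup/validation item above, assuming Python with pymysql (MemSQL speaks the MySQL wire protocol, so any MySQL-compatible driver works); the host, database, bucket, and credentials are placeholders, not details from this requirement:

```python
# Sketch: run a MemSQL/SingleStore backup to S3 from Python.
# BACKUP DATABASE ... TO S3 is native MemSQL SQL; CONFIG and CREDENTIALS
# take JSON blobs. All connection details and names below are placeholders.
import pymysql

conn = pymysql.connect(host="memsql-aggregator.example.com",
                       port=3306, user="backup_user", password="...")

with conn.cursor() as cur:
    cur.execute("""
        BACKUP DATABASE prod_db TO S3 "example-bucket/backups/prod_db"
        CONFIG '{"region": "us-east-1"}'
        CREDENTIALS '{"aws_access_key_id": "...",
                      "aws_secret_access_key": "..."}'
    """)
    # Validation: restore the backup into a scratch cluster and compare
    # row counts/checksums against production before trusting it.
```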
Role: Druid DBA
Experience: 8+ Years
Rate: $85/Hr C2C
Location: Remote
Design
P1: Design all of the required building blocks with flow of data/objects to support various sizing and shaping requirements.
P1: Design automated HA configuration for Druid with load balancers, and for the metadata store (MySQL/PostgreSQL) using replicas/Active-Active
P3: Develop and conduct failover tests to meet the availability SLAs
Build
P1: Build using the features of Apache Druid to achieve the desired design and meet the business requirements
P1: Define "golden" standard for all database environments and create standard MOPS
Automation
P1: Expertise with orchestration and automation tools - Jenkins, Docker, Ansible, etc.
P2: K8s and Docker expertise to manage a Druid cluster on K8s
P1: Deploy with Ansible automation to manage Apache Druid components and maintain their configurations, along with deep storage/ZooKeeper/metadata database configurations
Security
P1: Druid and ZooKeeper security/user access control and DB hardening
P3: Create tool to automate user account creation and password management
Alerting
P3: Design and set up monitoring and alerting (varying priority levels); see the health-check sketch below
P3: Deploy New Relic
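A minimal sketch of the kind of health probe these alerting items imply, assuming Python with requests; every Druid process exposes GET /status/health (returning JSON true when healthy), and the hostnames and alert channel below are hypothetical:

```python
# Sketch: poll Druid process health endpoints and flag failures for alerting.
# Hostnames and the alerting hook are placeholders, not client details.
import requests

DRUID_PROCESSES = [
    "http://druid-router.example.com:8888",
    "http://druid-coordinator.example.com:8081",
]

def is_healthy(base_url: str) -> bool:
    try:
        resp = requests.get(f"{base_url}/status/health", timeout=5)
        return resp.ok and resp.json() is True
    except requests.RequestException:
        return False

for url in DRUID_PROCESSES:
    if not is_healthy(url):
        # Route to the real channel here (New Relic, PagerDuty, email, ...)
        print(f"ALERT: {url} failed its health check")
```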
Optimization
P2: Tuning expertise with segment sizing, indexing, roll-ups, partitioning, compaction, compression and query tuning
P2: Build and maintain data retention and compaction schedules per audit and security requirements; see the compaction sketch below
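One plausible building block for the compaction-schedule item - a sketch under assumptions, not this client's standard - is the Coordinator's automatic-compaction API; the data source and offset below are illustrative, and available fields vary by Druid version:

```python
# Sketch: submit an auto-compaction config to the Druid Coordinator via
# POST /druid/coordinator/v1/config/compaction. Values are made up.
import requests

compaction_config = {
    "dataSource": "example_events",    # hypothetical data source
    "skipOffsetFromLatest": "P1D",     # leave the most recent day uncompacted
}

resp = requests.post(
    "http://druid-coordinator.example.com:8081"
    "/druid/coordinator/v1/config/compaction",
    json=compaction_config,
    timeout=10,
)
resp.raise_for_status()
```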
Development
P2: Druid native query development and tuning skills on top of Druid SQL; see the query sketch below
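A minimal sketch contrasting a Druid native query with its Druid SQL equivalent, assuming Python with requests against a router; the data source, columns, and interval are placeholders:

```python
# Sketch: the same count-per-interval question as a native timeseries query
# (POST /druid/v2) and as Druid SQL (POST /druid/v2/sql). Names are made up.
import requests

ROUTER = "http://druid-router.example.com:8888"

native_query = {
    "queryType": "timeseries",
    "dataSource": "example_events",
    "granularity": "hour",
    "intervals": ["2023-01-01/2023-01-02"],
    "aggregations": [{"type": "count", "name": "cnt"}],
}
print(requests.post(f"{ROUTER}/druid/v2", json=native_query, timeout=30).json())

sql = {"query": "SELECT COUNT(*) AS cnt FROM example_events "
                "WHERE __time >= TIMESTAMP '2023-01-01'"}
print(requests.post(f"{ROUTER}/druid/v2/sql", json=sql, timeout=30).json())
```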
Performance Testing
P2: Develop and deploy stress testing procedures covering various application functionalities
P3: Build a Druid historical metrics repository to pull Oracle AWR/ASH-style reports
Troubleshooting
P2: Create a graphical Druid troubleshooting workflow; develop a tool to show Druid real-time performance, similar to what Spotlight does for SQL Server
P3: Monitor, support 24x7, and maintain performance of the transitioned production system to meet SLAs
Requisites: Druid Solutions Architecture certification; working knowledge of Ansible
Role: Machine Learning Engineer/AWS Engineer/Data Engineer/Data Scientist
Location: 100% Remote (proximity to NY does not matter)
Work Authorization: Any status is OK as long as they are willing to come onto our payroll
Candidates should have strong experience with AWS
JD: Looking for ML/Data Engineers to build an ML accelerator on top of AWS infrastructure. The ideal candidate has experience deploying ML models developed in Amazon SageMaker at scale. This is primarily an ML engineering role, not a typical Data Scientist role limited to building models or working with algorithms. (A minimal deployment sketch follows the list below.)
You need to be familiar with:
- AWS-specific SDKs and APIs used for model deployment and load balancing
- How to design the infrastructure within the AWS components, and which databases and storage services are needed and why
- Challenges of live-upgrading a model that is already deployed in AWS
- Machine learning tools and technologies, specifically in the AWS cloud
- Coding in Python
- AWS Lambda and Step Functions
- AWS Glue
- Amazon EMR for SageMaker
- DynamoDB and its importance in the ML space
- Feature store
- AWS Workbench
- Jupyter
Knowledge of the AWS cloud platform
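A minimal sketch of the SageMaker deployment workflow this JD centers on, assuming Python with boto3 (create_model, create_endpoint_config, and create_endpoint are standard SageMaker APIs); every name, ARN, image URI, and S3 path below is a placeholder:

```python
# Sketch: deploy an already-trained model as a real-time SageMaker endpoint.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# 1. Register the model artifact and its serving container (placeholders).
sm.create_model(
    ModelName="example-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest",
        "ModelDataUrl": "s3://example-bucket/models/model.tar.gz",
    },
)

# 2. Endpoint config: >1 instance gives SageMaker-managed load balancing.
sm.create_endpoint_config(
    EndpointConfigName="example-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "example-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 2,
    }],
)

# 3. Create the endpoint; a live upgrade (a point in the JD) is done later by
#    calling update_endpoint with a new config, swapping models without downtime.
sm.create_endpoint(EndpointName="example-endpoint",
                   EndpointConfigName="example-config")
```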
Role: Databricks Cloud Data Engineer
Location: Detroit, Michigan
Duration: Long term
Rate: Market best.
Skills: Amazon Cloud exposure and Java/Scala are mandatory.
Thanks & Regards,
Vinay
Recruiter,
Phone No: 9042041368
vthadichettu@intuceo.com | www.intuceo.com
4110 Southpoint Blvd. Suite 124 Jacksonville, FL 32216