
NO GC - DL PROOF REQUIRED OF VIRGINIA & MARYLAND | DATA ENGINEER | DATA SCIENTIST

Hi All,

IN-PERSON INTERVIEW


Position 1 

Title: Senior Data Scientist Specialist (Principal Data Scientist w/GenAI focus)

Location: McLean, VA 22102 (Needs to be onsite for 5 days a week)

Duration: Long Term

Interview Mode: In person

 

Supplier Vetting Questions:
Assessment testing on the must have listed is required for candidates to be considered.
Need GitHub Code Repository Link for each candidate.


Call Notes:

Looking for a Principal Data Scientist with a strong focus on Generative AI (GenAI) and a Machine Learning background who has transitioned into GenAI. Need someone with solid experience in RAG, Python/Jupyter, related software tooling, using agents in workflows, and a strong understanding of data.

Someone with advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI Patterns, Evaluation Frameworks, Guardrails, data curation, and AWS cloud deployments.

Highly preferred: someone who has built AI agents, MCP, A2A, and Graph RAG, and has deployed GenAI applications to production.

 

Supplier notes: The manager wants resumes that are clean and easy to read. Please do not include large vendor summaries; the manager will not read them and may not consider your candidate.

 

Job Description:
We are seeking a highly experienced **Principal Gen AI Scientist** with a strong focus on **Generative AI (GenAI)** to lead the design and development of cutting-edge AI Agents, Agentic Workflows and Gen AI Applications that solve complex business problems. This role requires advanced proficiency in Prompt Engineering, Large Language Models (LLMs), RAG, Graph RAG, MCP, A2A, multi-modal AI, Gen AI Patterns, Evaluation Frameworks, Guardrails, data curation, and AWS cloud deployments. You will serve as a hands-on Gen AI (data) scientist and critical thought leader, working alongside full stack developers, UX designers, product managers and data engineers to shape and implement enterprise-grade Gen AI solutions.

 

Key Responsibilities:

* Architect and implement scalable AI Agents, Agentic Workflows and GenAI applications to address diverse and complex business use cases.

* Develop, fine-tune, and optimize lightweight LLMs; lead the evaluation and adaptation of models such as Claude (Anthropic), Azure OpenAI, and open-source alternatives.

* Design and deploy Retrieval-Augmented Generation (RAG) and Graph RAG systems using vector databases and knowledge bases.

* Curate enterprise data using connectors integrated with AWS Bedrock's Knowledge Base/Elastic.

* Implement solutions leveraging MCP (Model Context Protocol) and A2A (Agent-to-Agent) communication.

* Build and maintain Jupyter-based notebooks using platforms like SageMaker and MLFlow/Kubeflow on Kubernetes (EKS).

* Collaborate with cross-functional teams of UI and microservice engineers, designers, and data engineers to build full-stack Gen AI experiences.

* Integrate GenAI solutions with enterprise platforms via API-based methods and GenAI standardized patterns.

* Establish and enforce validation procedures with Evaluation Frameworks, bias mitigation, safety protocols, and guardrails for production-ready deployment.

* Design and build robust ingestion pipelines that extract, chunk, enrich, and anonymize data from PDF, video, and audio sources for use in LLM-powered workflows, leveraging best practices like semantic chunking and privacy controls.

* Orchestrate multimodal pipelines using scalable frameworks (e.g., Apache Spark, PySpark) for automated ETL/ELT workflows appropriate for unstructured media.

* Implement embedding pipelines: map media content to vector representations using embedding models, and integrate with vector stores (AWS Knowledge Base/Elastic/Mongo Atlas) to support RAG architectures.
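The chunk-embed-retrieve flow described in the bullets above can be sketched in a few lines of pure Python. This is a toy illustration only: the trigram-hash "embedding" stands in for a real embedding model, and the in-memory store stands in for AWS Knowledge Base, Elastic, or Mongo Atlas; naive fixed-size chunking stands in for semantic chunking.

```python
import math
import zlib


def embed(text, dim=64):
    """Toy embedding: hashed character-trigram counts, L2-normalized.
    A real pipeline would call an embedding model here."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def chunk(doc, size=80):
    """Naive fixed-size chunking; semantic chunking would split on meaning boundaries."""
    return [doc[i:i + size] for i in range(0, len(doc), size)]


class VectorStore:
    """In-memory stand-in for a managed vector store."""

    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append((embed(text), text))

    def search(self, query, k=2):
        """Return the k chunks with highest cosine similarity to the query."""
        q = embed(query)
        ranked = sorted(self.items,
                        key=lambda it: -sum(a * b for a, b in zip(it[0], q)))
        return [text for _, text in ranked[:k]]


store = VectorStore()
doc = ("PySpark runs distributed ETL jobs. Postgres stores relational data. "
       "Cucumber drives automated acceptance tests.")
for c in chunk(doc):
    store.add(c)

print(store.search("distributed ETL", k=1))
```

In a production RAG system the retrieved chunks would then be packed into the LLM prompt as context; the ranking step is the same, only the embedding model and store are real.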

 

**Required Qualifications:**

* PhD in AI/Data Science

* 10+ years of experience in AI/ML, with 3+ years in applied GenAI or LLM-based solutions.

* Deep expertise in prompt engineering, fine-tuning, RAG, GraphRAG, vector databases (e.g., AWS KnowledgeBase / Elastic), and multi-modal models.

* Proven experience with cloud-native AI development (AWS SageMaker, Bedrock, MLFlow on EKS).

* Strong programming skills in Python and ML libraries (Transformers, LangChain, etc.).

* Deep understanding of Gen AI system patterns and architectural best practices, Evaluation Frameworks

* Demonstrated ability to work in cross-functional agile teams.

* Need a GitHub code repository link for each candidate. Please thoroughly vet the candidates.

 

**Preferred Qualifications:**

* Published contributions or patents in AI/ML/LLM domains.

* Hands-on experience with enterprise AI governance and ethical deployment frameworks.

* Familiarity with CI/CD practices for ML Ops and scalable inference APIs.



Position 2

Title: Senior Cloud/Data Engineer

Location: McLean, VA 22102 (Needs to be onsite for 5 days a week)

Duration: 03 Months Contract (Possible Extensions)

 

Supplier Vetting Questions:
Candidate needs to take GliderAI assessment


Call Notes:

We need a Senior Data Engineer with expertise in Python, PySpark, AWS and Kubernetes

Strong experience in SAS and Informatica is required.

 

Top Skills:

Data Engineer (Required)

Python (Required)

PySpark (Required)

AWS (Required)

EKS/Kubernetes (Required)

Automation Testing: Cucumber (Required)

Jenkins (Highly Preferred)

Snowflake (Required)

AutoSys, DB2 (Highly Preferred)

 

Job Description:
Development of microservices based on Python, PySpark, AWS EKS, and AWS Postgres for a data-oriented modernization project.
○ New System: Python and PySpark, AWS Postgres DB, Cucumber for automation

Perform system, functional, and data analysis on the current system and create technical/functional requirement documents.
○ Current System: Informatica, SAS, AutoSys, DB2

Write automated tests using Cucumber, based on the new microservices-based architecture.
Promote top code quality and solve issues related to performance tuning and scalability.
Strong skills in DevOps and Docker/container-based deployments to AWS EKS using Jenkins, plus experience with SonarQube and Fortify.
Able to communicate and engage with business teams, analyze the current business requirements (BRS documents), and create the necessary data mappings.
Strong skills and experience in reporting application development and data analysis preferred.
Knowledge of Agile methodologies and technical documentation.
Nice to Have: Snowflake, AMQs, AWS, Kubernetes/Amazon EKS, Java, Spring Boot, Informatica, SAS, AutoSys, DB2
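The modernization work described above (analyzing the Informatica/SAS/DB2 system and re-implementing it in Python/PySpark against Postgres) centers on the data mappings captured in the BRS documents. A minimal pure-Python sketch of applying one such declarative mapping; the column names and transforms here are invented for illustration:

```python
# A declarative source-to-target mapping, the kind captured in a
# technical/functional requirements document during re-platforming.
# Each legacy (DB2-style) column maps to a new Postgres column plus a transform.
MAPPING = {
    "CUST_NM": ("customer_name", str.strip),
    "ACCT_BAL": ("account_balance", float),
    "OPEN_DT": ("opened_on", lambda s: s.replace("/", "-")),
}


def map_row(row):
    """Translate one legacy row dict into the new schema,
    skipping source columns that are absent."""
    return {tgt: fn(row[src]) for src, (tgt, fn) in MAPPING.items() if src in row}


legacy = {"CUST_NM": "  Ada Lovelace ", "ACCT_BAL": "12.50", "OPEN_DT": "2024/01/31"}
print(map_row(legacy))
```

In PySpark the same mapping would typically become a `select` with column renames and casts applied per the spec, which keeps the BRS-derived mapping as the single source of truth for both implementations.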






Best Wishes,  
Pramod Kumar Meher | Business Development Lead
Sagiant Tech Solutions | 13809 Research Blvd, Suite 500, Austin, Texas, 78750
At Sagiant Tech Solutions Inc, we believe in using our expertise to serve our clients and the greater field of software development by building trusting relationships with clients and colleagues, fostering a respectful and inclusive workplace, and growing our team of developers, entrepreneurs, and leaders. Sagiant Tech Solutions Inc believes that the way forward is through innovation, problem solving, and building our community.

