Hi,

Please go through the following requirement and let me know if you are interested.

Note: Client accepts US Citizens, Green Card holders, H4-EAD, L2-EAD, J2-EAD, TN, and E3 candidates who are authorized to work in the USA without sponsorship.

Job Description:

The candidates do not need to be very senior; anything from 3 to 8 years of experience is fine. We need resumes as soon as possible for this position.

For this Big Data position, the candidate will work on the process described below and should therefore know how to use the tools listed. Skills/tools shown in bold red font in the original posting are the most critical.

The process starts with a SysLog-NG client and SysLog-NG server. Knowledge of these is not critical, just nice to have (resumes that include them will be favored).

Next, the process uses Flume and Kafka around the Hadoop cluster. Exposure to the Cloudera Hadoop cluster is a plus, but knowledge of another vendor's Hadoop solution, such as HortonWorks, is acceptable instead.

Then, the process uses the ELK stack, i.e., ElasticSearch, LogStash, and Kibana (not having all of these on the resume is not a show-stopper).

Any candidate who claims to have worked on this should have written Spark jobs using Scala or Java. The candidates are therefore most likely from a Java background, because that is the usual path to working in Scala, and the Scala/Java jobs will run their processes on the JVM (Java Virtual Machine). Finally, because of these Spark jobs, we may also see some references to Python on the resumes.
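To give screeners a feel for the Spark-job requirement above, here is a minimal sketch of the kind of log aggregation such a job might perform on syslog lines. This example is not from the posting: the class name, the assumed "<severity> <host> <message>" line format, and the sample data are all illustrative. Plain Java streams are used so the snippet is self-contained; in an actual Spark job the same map/filter/groupBy style would apply to RDDs or Datasets on the cluster.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SyslogSeverityCount {
    // Assumed (hypothetical) line format: "<severity> <host> <message>",
    // e.g. "ERROR web01 disk full".
    static String severity(String line) {
        return line.split("\\s+", 2)[0];
    }

    // Count how many times each severity level appears across the log lines.
    static Map<String, Long> countBySeverity(List<String> lines) {
        return lines.stream()
                .filter(l -> !l.isBlank())
                .map(SyslogSeverityCount::severity)
                .collect(Collectors.groupingBy(s -> s, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> logs = List.of(
            "ERROR web01 disk full",
            "INFO web02 request served",
            "ERROR db01 connection refused"
        );
        // Prints the per-severity counts, e.g. ERROR -> 2, INFO -> 1.
        System.out.println(countBySeverity(logs));
    }
}
```

A candidate with the background described above would recognize this shape immediately: the `map`/`filter`/`groupingBy` chain corresponds directly to Spark's `map`, `filter`, and `reduceByKey`/`groupBy` transformations running on the JVM.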