Role: Big Data – Kafka with Database (SQL)
Location: Remote

Responsibilities:
- Experience with Big Data solutions such as Cassandra, Google Pub/Sub, Hadoop, Spark, Kafka, Elasticsearch, and Solr is a plus;
- Working knowledge of and experience with Bitbucket, Git, and Gitflow;
- Experience with continuous integration and automated testing;
- Knowledge of Kafka Schemas and use of the Schema Registry;
- Strong fundamentals in Kafka client configuration and troubleshooting;
- Development of RESTful Services employing Spring and Spring Boot frameworks;
- Building cloud-native applications on Google Cloud Platform, with a focus on and understanding of Google Cloud data products, is desired;
- Best practices to optimize the Kafka ecosystem based on use-case and workload;
- Knowledge of Kafka clustering and its fault-tolerance model supporting high availability;
- Strong familiarity with wire formats such as XML, JSON, Avro, Protobuf, Thrift, and CSV, along with their serialization/deserialization options.
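To illustrate the serialization/deserialization trade-offs the wire-format bullet refers to, here is a minimal sketch using only standard-library JSON and CSV on a hypothetical record (Avro, Protobuf, and Thrift need external libraries and, in a Kafka setting, typically a Schema Registry, so they are not shown):

```python
import csv
import io
import json

# Hypothetical event record, used purely for illustration.
record = {"user_id": 42, "event": "click", "ts": "2024-01-01T00:00:00Z"}

# JSON: self-describing text; field names and basic types travel
# with every message, so the round trip preserves the int.
json_bytes = json.dumps(record).encode("utf-8")
assert json.loads(json_bytes.decode("utf-8")) == record

# CSV: compact but positional; the reader must know the column order,
# and all values come back as strings (no type information on the wire).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["user_id", "event", "ts"])
writer.writeheader()
writer.writerow(record)
csv_text = buf.getvalue()

row = next(csv.DictReader(io.StringIO(csv_text)))
assert row == {"user_id": "42", "event": "click", "ts": "2024-01-01T00:00:00Z"}
```

The lost typing in the CSV round trip is exactly why schema-carrying formats such as Avro plus a Schema Registry are common for Kafka payloads.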