Job Description

Zscaler is one of the most exciting technology companies around. Our mission is simple: to revolutionize Internet security through the magic of cloud computing. We are the leading, most innovative firm in a $35 billion market - security - and our passion is to bring cloud computing to Internet security just as Salesforce.com did to CRM. More than 5,000 organizations around the world already use Zscaler, including brands like General Electric, Nestle, NBC, and NATO, and more than 12 million people across more than 185 countries are protected by our systems every single day. We are a well-funded Software-as-a-Service company growing extremely fast, which means incredible career and growth opportunities for passionate and talented people who are ambitious and driven to be the best. We'll offer you an opportunity to make a difference, and you will get to work in a fun, fast-paced environment where you can excel at what you do and create.

The Big Data Engineer will work on building the next generation of Zscaler's security analytics platform. The candidate will contribute to building a platform that collects and ingests several billion (and growing) log events from Zscaler's globally distributed security infrastructure and provides actionable insights to customers and to Zscaler's security researchers.
Responsibilities:
- Design and build multi-tenant systems capable of loading and transforming large volumes of structured and semi-structured, fast-moving data
- Build robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users

Required:
- 2+ years of experience working with data processing infrastructure
- Proficiency in Java
- Experience with data serialization techniques and data stores for persisting events
- Familiarity with implementing services following the REST model
- Excellent interpersonal, technical, and communication skills
- Ability to learn, evaluate, and adopt new technologies
- Bachelor's degree in computer science or equivalent experience

Highly Desirable:
- Familiarity with Hadoop and other data processing frameworks such as Spark, Kafka, Storm, and Elasticsearch