
Zscaler Associate Data Engineer


The Zscaler Associate Data Engineer role is an excellent opportunity for early-career data professionals and software engineers to work on large-scale AI data pipelines and cutting-edge cybersecurity platforms. This hybrid role, based in Bangalore, Pune, or Mohali, offers hands-on experience with Python, SQL, Apache Spark, and modern data orchestration frameworks while contributing to enterprise-scale AI data solutions.

At Zscaler, Associate Data Engineers play a pivotal role in designing, building, and maintaining robust unstructured data pipelines for Vector and Graph databases, enabling secure, scalable, and high-performance AI applications. The role is ideal for candidates passionate about data engineering, cloud technologies, and AI-driven cybersecurity solutions.


About Zscaler

Zscaler is a global pioneer in zero trust security, enabling the world’s largest organizations and government agencies to secure users, applications, and data. The Zscaler Zero Trust Exchange platform, powered by AI, mitigates billions of cyber threats daily while reducing costs and complexity for modern enterprises. Zscaler’s culture emphasizes customer obsession, collaboration, ownership, and accountability, with a focus on innovation and high-quality execution.

The company champions an “AI Forward, People First” philosophy, empowering employees to grow, innovate, and make a global impact while fostering an inclusive and collaborative work environment.

Role Overview

As a Zscaler Associate Data Engineer, you will:

  • Collaborate with data architects, integration, and engineering teams to capture data pipeline requirements
  • Design and implement large-scale unstructured data pipelines for Vector DB, Graph DB, and the Snowflake Enterprise Warehouse (a simplified sketch follows this list)
  • Profile and quantify data quality while building pipelines for integration
  • Develop in-house products to improve scalability and efficiency across the organization
  • Apply modern cloud and big data architectures while continuously learning next-generation technologies
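As a rough illustration of what one step of an unstructured-data pipeline can look like, the sketch below uses PySpark (Apache Spark is listed in the skills section) to read raw JSON documents, apply a simple cleaning pass, and stage the result as Parquet. The bucket paths and column names (doc_id, body, source, ingested_at) are hypothetical placeholders, not details of Zscaler's actual pipelines.

```python
# Minimal PySpark sketch: clean raw JSON documents and stage them as Parquet.
# All paths and column names here are hypothetical example values.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("unstructured-doc-staging")
    .getOrCreate()
)

# Read semi-structured documents; Spark infers the schema from the JSON.
raw = spark.read.json("s3://example-bucket/raw_docs/")

# Basic cleaning: normalize text, drop empty records, keep a stable ID.
cleaned = (
    raw
    .withColumn("body", F.trim(F.lower(F.col("body"))))
    .filter(F.length("body") > 0)
    .select("doc_id", "body", "source", "ingested_at")
)

# Stage the cleaned documents for downstream processing.
cleaned.write.mode("overwrite").parquet("s3://example-bucket/staged_docs/")

spark.stop()
```

In practice, a downstream job would take the staged records on to embedding generation and loading into a Vector or Graph database, or into Snowflake.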

Key Responsibilities

  • Collaborate with Data & Technical architects and engineering teams to develop technical solutions
  • Build and maintain unstructured data pipelines for enterprise AI platforms
  • Implement data management standards and best practices with the Data Platform Lead
  • Develop mission-critical pipelines using modern cloud and big data technologies
  • Ensure data pipelines meet quality, scalability, and reliability standards
  • Partner with cross-functional teams to deliver high-value data solutions


Role Details

Role: Associate Data Engineer
Company: Zscaler
Job Type: Full-Time / Hybrid
Work Mode: Hybrid (Bangalore, Pune, Mohali)
Reports To: Principal Data Engineer
Duration: Full-Time / Permanent
Location: India (Bangalore, Pune, Mohali)
Stipend/Salary: See Expected Salary section below

Expected Salary 💰

💰 ₹8 – ₹12 LPA (CTC)

This is an estimated market range for early-career data engineers in India with Python, SQL, and distributed data processing experience. Actual compensation may vary based on skills, experience, and Zscaler policies.

Skills and Learning Opportunities

  • Hands-on experience with Python, SQL, and distributed data processing frameworks like Apache Spark, Hadoop, or Apache Flink
  • Exposure to orchestration frameworks such as Airflow, Prefect, or Dagster (see the Airflow sketch after this list)
  • Learn to work with Vector and Graph databases and Snowflake Enterprise Warehouse
  • Familiarity with AI/ML tools like LangChain and AutoGen
  • Opportunity to explore scalable Python frameworks such as Ray or Dask
  • Experience building dashboards or data apps using tools like Streamlit
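Since the list above calls out orchestration frameworks such as Airflow, here is a minimal, hypothetical Airflow 2.x DAG showing how a daily extract-and-load sequence might be wired together; the DAG ID, task names, and Python callables are illustrative placeholders only.

```python
# Hypothetical daily pipeline DAG (Airflow 2.x). Task logic is a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_docs():
    # Placeholder: pull raw documents from a source system.
    print("extracting documents")


def load_to_warehouse():
    # Placeholder: load transformed records into the warehouse or Vector DB.
    print("loading documents")


with DAG(
    dag_id="example_unstructured_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_docs", python_callable=extract_docs)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load
```

Prefect and Dagster express the same idea with their own decorators and APIs; the common thread is declaring tasks and their dependencies so the scheduler can run and retry them reliably.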

Who Can Apply

  • Candidates with foundational knowledge of DBMS concepts, normalization, ACID properties, and transactions (illustrated in the sketch after this list)
  • Familiarity with distributed data processing frameworks and cloud architectures
  • Proficiency in Python, along with scripting or JVM languages such as Korn Shell or Scala
  • Strong problem-solving, analytical thinking, and communication skills
  • Positive, proactive learners who enjoy collaborative, high-impact environments
  • Individuals passionate about AI, data engineering, and cybersecurity
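The first bullet above mentions ACID properties and transactions. As a small, generic illustration (using Python's built-in sqlite3 module with made-up account data, and nothing specific to Zscaler), the sketch below shows how a multi-statement update either commits as a whole or rolls back on error:

```python
# Small illustration of transactional (ACID) behavior with Python's sqlite3.
# The accounts table and amounts are made-up example data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("alice", 100), ("bob", 50)],
)
conn.commit()

try:
    # Using the connection as a context manager commits on success
    # and rolls back automatically if an exception is raised.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    print("transfer failed; no partial update was applied")

print(dict(conn.execute("SELECT name, balance FROM accounts")))
```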

How to Apply

  • Prepare a resume highlighting Python, SQL, and data pipeline projects
  • Include any experience with Apache Spark, Airflow, or AI data tools
  • Showcase problem-solving skills and understanding of data engineering best practices
  • Click the Apply button below to submit your application
  • Prepare for technical interviews assessing coding, data processing, and data pipeline design

Benefits of Working at Zscaler

  • Comprehensive health plans, parental leave, and retirement options
  • Education reimbursement and professional development opportunities
  • Flexible work schedules and hybrid working model
  • Exposure to enterprise AI data platforms and zero trust security technologies
  • Opportunity to work on high-impact projects with global reach
  • Inclusive and collaborative work culture that fosters growth and innovation

About Zscaler Culture

Zscaler promotes a culture of execution and results, where every team member can make an impact regardless of title. The company values transparency, constructive debate, and high-quality execution while championing diversity and inclusion. Employees are encouraged to innovate, learn continuously, and contribute to a secure digital future for global enterprises.

Disclaimer:
This job information is collected from official/public sources. No fees are charged, and selection is not guaranteed. We are not responsible for any losses arising from reliance on this information.


