Amdocs Hiring Software Engineer

If you are looking for a high-impact software engineering role in data engineering, cloud computing, and GenAI systems, this opportunity at Amdocs is one of the strongest entry-level openings in the telecom and data platform space.

Amdocs is known for building large-scale mission-critical systems used by global communication and media companies. This role is designed for candidates who want to work on scalable data pipelines, distributed systems, and modern cloud-based architectures.

About the Company

Amdocs is a global leader in software and services for communications and media companies. The organization enables enterprises to deliver advanced customer experiences, network performance optimization, and revenue-driven solutions.

With over 40 years of industry experience, Amdocs processes billions of transactions daily, powering systems that support global connectivity. The company focuses heavily on cloud modernization, automation, and AI-driven transformation.

Role Overview

As a Software Engineer (Data Platform Team), you will be responsible for designing and maintaining data pipelines, distributed systems, and cloud-based data solutions.

This role sits at the intersection of:

  • Data Engineering
  • Cloud Computing
  • Distributed Systems
  • Generative AI (GenAI) applications

You will not only build pipelines but also contribute to system design, performance optimization, and data quality frameworks.

Key Responsibilities

  • Design and build scalable data pipelines (ETL/ELT) for structured and unstructured data
  • Develop distributed systems for data ingestion, transformation, and storage
  • Work with data lakes and cloud-based architectures
  • Implement monitoring systems to ensure data quality and reliability
  • Troubleshoot and optimize large-scale data workflows
  • Work with multi-cloud environments (AWS, Azure, or GCP)
  • Explore and integrate AI and GenAI-based enhancements into data systems
  • Collaborate with engineering teams to improve system design and performance
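The first responsibility above — building ETL/ELT pipelines — always follows the same three stages: extract raw records, transform them (typing, cleaning, filtering), and load them into a queryable store. As a hedged illustration only (the posting does not specify Amdocs' stack; production pipelines at this scale would typically use Spark), here is a minimal pure-Python sketch of those stages using only the standard library:

```python
import csv
import io
import sqlite3

# Illustrative sample data, not from the posting.
RAW_CSV = """user_id,event,amount
1,purchase,19.99
2,refund,-5.00
3,purchase,42.50
"""

def extract(text: str) -> list[dict]:
    """Extract: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and keep only purchase events."""
    return [
        (int(r["user_id"]), r["event"], float(r["amount"]))
        for r in rows
        if r["event"] == "purchase"
    ]

def load(records: list[tuple]) -> int:
    """Load: write records into a SQLite table and return the row count."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user_id INT, event TEXT, amount REAL)")
    con.executemany("INSERT INTO events VALUES (?, ?, ?)", records)
    (count,) = con.execute("SELECT COUNT(*) FROM events").fetchone()
    return count

loaded = load(transform(extract(RAW_CSV)))
print(loaded)  # 2 — the refund row is filtered out in transform
```

Swapping SQLite for a data lake and the list comprehension for Spark transformations changes the tools but not the shape of the pipeline, which is why interviewers often probe these stages directly.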

Who Can Apply

  • Education: B.Tech / M.Tech in CS / IT or related fields
  • Experience: Freshers / entry-level
  • Location: Pune
  • Work Type: Full-time
  • Skills: Python, SQL, Spark

Required Skills

  • Strong knowledge of Python and SQL
  • Hands-on experience with Apache Spark (PySpark preferred)
  • Understanding of distributed systems and data engineering principles
  • Familiarity with cloud platforms (AWS / Azure / GCP)
  • Problem-solving and system design skills
  • Knowledge of Docker and Kubernetes (preferred)

Good to Have Skills

  • Java, Spring Boot, Microservices architecture
  • Experience with Kafka and Databricks
  • Exposure to CI/CD and DevOps pipelines
  • Interest in GenAI or AI-driven tools and copilots

What You’ll Learn in This Role

This role is highly valuable for career growth because it gives exposure to:

  • Large-scale data engineering systems
  • Real-world cloud infrastructure
  • Distributed computing with Spark and Kafka
  • Enterprise-level software architecture
  • Emerging GenAI applications in data systems

These skills are in high demand at companies such as Google, Microsoft, and Amazon.

Stipend / Salary (Market Estimate) 💰

Since this is a full-time software engineer role (not an internship), typical compensation in Pune for comparable entry-level roles is:

👉 ₹6 LPA – ₹12 LPA (estimated range)

Actual compensation depends on interview performance, skill level, and internal band.

Why Join Amdocs?

Working at Amdocs gives you exposure to:

  • Enterprise-scale systems handling billions of transactions
  • Cloud-first and AI-driven architecture
  • Strong engineering culture focused on scalability and reliability
  • Opportunities to grow in data engineering, cloud, or AI domains

It is an excellent role for candidates aiming to build a career in backend systems, data platforms, or GenAI engineering.

How to Apply 🚀

To improve your chances:

👉 Focus on:

  • Python + SQL mastery
  • Basic Spark projects
  • Data pipeline mini-projects (ETL workflows)
  • GitHub portfolio with backend/data projects
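"Python + SQL mastery" is concrete: interviewers expect you to move data between Python and a SQL engine fluently. As a small self-assigned practice exercise (illustrative only, not part of the posting), here is the kind of aggregation drill worth having in a portfolio, using the standard library's sqlite3:

```python
import sqlite3

# Practice drill: load rows from Python into SQL, then aggregate per user.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (user_id INT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, 10.0), (1, 15.5), (2, 7.25)],  # made-up sample data
)
totals = dict(
    con.execute(
        "SELECT user_id, SUM(amount) FROM orders GROUP BY user_id ORDER BY user_id"
    ).fetchall()
)
print(totals)  # {1: 25.5, 2: 7.25}
```

Once this is comfortable, rewriting the same aggregation as a Spark DataFrame `groupBy` is a natural next mini-project for the "Basic Spark projects" bullet above.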

👉 Highlight:

  • Any cloud exposure (AWS/GCP/Azure)
  • Internships or academic projects in data engineering
  • Problem-solving experience (LeetCode / coding platforms)

Disclaimer: This job information is collected from official or public sources. We do not charge any fees for job updates and do not guarantee recruitment. Please verify details from the official source before applying. We are not responsible for any loss arising from the use of this information.
