Poshmark, one of the world’s leading fashion resale marketplaces, is hiring a Software Engineer Intern – Data Engineering at its Chennai location for students graduating in 2026. This internship is a golden opportunity for students passionate about Big Data, scalable systems, analytics platforms, and cloud technologies to gain hands-on industry experience while working on real, production-grade data systems.
As part of Poshmark’s Big Data team, you will help build and maintain the data infrastructure that powers analytics, machine learning, and business intelligence across the company. This role is ideal for students who want to understand how large-scale data platforms operate at terabyte-to-petabyte scale in a real-world environment.

About Poshmark
Founded in 2011, Poshmark is a global fashion resale marketplace built on a strong, highly engaged social community. With more than 130 million users and over $10 billion in Gross Merchandise Value (GMV), Poshmark has transformed how people buy and sell fashion online. The company combines real-time social experiences with powerful data-driven insights to deliver personalized and engaging user experiences.
Data plays a critical role at Poshmark, influencing everything from recommendations and pricing to seller growth and sustainability initiatives. The Big Data team ensures that accurate, timely, and accessible data powers these decisions.
Role Overview – Software Engineer Intern, Data Engineering
As a Data Engineering Intern, you will work closely with senior data engineers to design, build, and support real-time and batch data pipelines. You will gain exposure to modern data engineering tools and frameworks such as Apache Spark, Airflow, Databricks, Kafka, Hive, Redshift, and AWS services.
This internship emphasizes learning by doing. You will contribute to systems that are actively used by Analytics, Data Science, and Engineering teams, giving you practical insight into how data products are built and scaled in a high-growth technology company.
What You’ll Work On
Assist in designing and developing scalable data pipelines for real-time and batch processing
Support integration of external data sources such as APIs, S3 transfers, and Kafka streams
Contribute to ETL development using Spark, Airflow, Databricks, and AWS services (a minimal illustrative sketch follows this list)
Help maintain Hive and Redshift tables, workflows, and internal dashboards
Write clean, modular, and maintainable code in Python, Scala, or Ruby
Collaborate with analytics, data science, and engineering teams to understand data requirements
Gain experience handling very large datasets ranging from terabytes to petabytes
Assist in monitoring, debugging, and troubleshooting data pipeline failures
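To give a flavor of the kind of ETL work described above, here is a minimal PySpark batch-pipeline sketch: it reads raw JSON events from object storage, applies a simple cleaning step, and writes date-partitioned Parquet for downstream analytics. This is purely illustrative; the bucket paths, column names, and cleaning rules are hypothetical and do not describe Poshmark's actual pipelines.

```python
# Minimal PySpark batch ETL sketch (illustrative only).
# All paths and column names below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_events_etl").getOrCreate()

# Read one day's worth of raw JSON events from a (hypothetical) S3 location.
raw = spark.read.json("s3://example-bucket/raw/events/dt=2025-01-01/")

# Basic cleaning: drop rows missing key fields and normalize the timestamp.
cleaned = (
    raw.dropna(subset=["event_id", "user_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("dt", F.to_date("event_ts"))
)

# Write cleaned data as Parquet, partitioned by date, so it can feed
# Hive- or Redshift-style analytics tables downstream.
(cleaned.write
    .mode("overwrite")
    .partitionBy("dt")
    .parquet("s3://example-bucket/clean/events/"))

spark.stop()
```

In a real setting, a job like this would typically be scheduled and monitored through an orchestrator such as Airflow, which is exactly the kind of workflow the internship exposes you to.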
Who Can Apply
Students currently pursuing a Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field
Candidates graduating in 2026 only
Strong foundation in programming fundamentals, especially Python or Scala
Good understanding of computer science basics including data structures and algorithms
Basic knowledge of SQL and relational databases
Academic or project exposure to Big Data concepts such as Spark, Hadoop, Kafka, or cloud platforms
Strong problem-solving skills and attention to detail
Eagerness to learn, ask questions, and work collaboratively in a team environment
Bonus Skills (Nice to Have)
Experience or coursework involving Spark, Hadoop, or Kafka
Exposure to AWS services such as S3, EMR, Lambda
Experience with Databricks, API integrations, or Google Apps Script
Personal or academic projects involving large-scale data processing
What You’ll Gain from This Internship
Hands-on experience with modern Big Data and cloud technologies
Direct mentorship from experienced Data Engineers
Opportunity to work on business-critical data systems used company-wide
Exposure to real-world data challenges at massive scale
Strong foundation for future careers in Data Engineering, Analytics, or Machine Learning Engineering
Valuable industry experience at a globally recognized technology company
Estimated Stipend
While Poshmark has not officially disclosed the stipend for this internship, based on industry standards and similar data engineering internships at global product companies, the expected stipend is estimated to be between ₹40,000 and ₹60,000 per month. The final stipend may vary depending on location, internship duration, and company policies.
Why Choose Poshmark for Your Data Engineering Internship
Poshmark offers a unique combination of scale, mentorship, and real impact. Unlike purely academic internships, this role allows you to contribute directly to live data platforms that support millions of users globally. You will learn not just tools, but also best practices in data reliability, scalability, and collaboration, which are critical for long-term success in data engineering roles.
Career Growth Opportunities
Interns who perform well may be considered for future full-time roles or extended internships. The experience gained at Poshmark significantly strengthens your profile for careers in Data Engineering, Big Data Analytics, ML Engineering, and Backend Engineering.
How to Apply
Prepare a resume highlighting your programming skills, SQL knowledge, Big Data coursework or projects, and any cloud or data pipeline experience. Emphasize hands-on academic or personal projects involving Spark, Kafka, or AWS. Apply by clicking the Apply button below and ensure your application clearly states your 2026 graduation year, as this role is exclusively for 2026 graduates.