Job Description
Specialization:
Data Processing & Pipelines, Internal & Client Focus
Stack:
Python, SQL, Apache Airflow, Spark, ETL/ELT Tools, Cloud Data Warehouses (BigQuery, Redshift)
Role Summary:
Design and maintain robust data pipelines that deliver timely, accurate datasets for internal and client use cases.
Key Responsibilities
- Build and manage ETL/ELT data pipelines (see the orchestration sketch after this list).
- Integrate diverse data sources into centralized repositories.
- Optimize database performance and ensure data quality.
- Collaborate with AI/ML teams to provide clean, usable datasets.
Experience Requirements
- 2–3+ years in data engineering or related roles.
- Proficiency in SQL and Python.
- Understanding of data modeling and pipeline orchestration tools.
Preferred
- Experience with real-time data streaming frameworks (Kafka, Flink); a minimal consumer sketch follows this list.
- Familiarity with cloud and on-premises data environments.