
Software Engineer II - Data

Uber

Location

Bengaluru, Karnataka, India

Salary

Not specified

Type

Full-time

Posted

Today

via LinkedIn

Job Description

About The Role

As an engineer on the Data Intelligence team, you will work with large-scale data pipelines and datasets that are critical and foundational to how Uber makes decisions for a better customer experience. You will work with petabytes of analytics data from multiple Uber applications, building the software systems and data models that enable data scientists to better understand user behavior and thrive on the data-driven mindset at Uber.

About The Team

The Data Intelligence team is responsible for designing the core foundational datasets that are critical to understanding customers' needs, helping business teams make the right decisions on these critical problems. The team's mission is to ensure high quality across all critical analytics data flows in every vertical at Uber, and to enable faster implementation of data needs by building standardized tools and frameworks for accurate analysis. We are currently revamping all critical analytical data flows across domains to build high-quality datasets and frameworks used across Uber.

What The Candidate Will Need / Bonus Points

---- What the Candidate Will Do ----

  • Define the Source of Truth (SOT) and dataset design for multiple Uber teams
  • Identify unified data models in collaboration with Data Science teams
  • Streamline data processing of the original event sources and consolidate them into source-of-truth event logs
  • Build and maintain real-time and batch data pipelines that consolidate and clean up usage analytics
  • Build systems that monitor data losses from the different sources and improve data quality
  • Own the data quality and reliability of Tier-1 & Tier-2 datasets, including maintaining their SLAs, TTL, and consumption
  • Devise strategies to consolidate and compensate for data losses by correlating different sources
  • Solve challenging data problems with cutting-edge design and algorithms

Basic Qualifications

  • 3+ years of data engineering experience
  • Demonstrated experience working with large data volumes and backend services
  • Good working knowledge of SQL (mandatory) and at least one other language (Java, Scala, or Python)
  • Working experience with ETL, data pipelines, data lakes, and data modeling fundamentals
  • Good problem-solving and analytical skills
  • Strong teamwork and collaboration skills

Preferred Qualifications

  • Experience in data engineering and working with big data
  • Experience with ETL or streaming data and one or more of Kafka, HDFS, Apache Spark, Apache Flink, or Hadoop
  • Experience with backend services and familiarity with one of the cloud platforms (AWS, Azure, Google Cloud, or Oracle Cloud) is a plus
