Location
Austin, TX, US
Salary
Not specified
Type
Direct Hire
Posted
Today
via indeed
Job Description
- Location: Austin, Texas
- Type: Direct Hire
- Job #11442
- Location Type: Hybrid
We are seeking a seasoned Databricks Data Engineer with expertise in Azure cloud services and the Databricks Lakehouse platform. This role involves designing and optimizing large-scale data pipelines, modernizing cloud-based data ecosystems, and enabling secure, governed data solutions. The ideal candidate has strong skills in SQL, Python, PySpark, ETL/ELT frameworks, and experience with Delta Lake, Unity Catalog, and CI/CD automation.
Key Responsibilities
- Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform, ensuring reliability, scalability, and governance.
- Modernize cloud-based data ecosystems, contributing to architecture, distributed data engineering, data modeling, security, and CI/CD automation.
- Utilize orchestration and workflow automation tools such as Apache Airflow.
- Work with sensitive or regulated datasets, applying compliance and governance best practices.
- Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks notebooks.
- Design and optimize Delta Lake data models for performance, scalability, and reliability.
- Implement and manage Unity Catalog for RBAC, lineage, governance, and secure data sharing.
- Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables.
- Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems.
- Automate API ingestion and workflows using Python and REST APIs.
- Support data governance, lineage, cataloging, and metadata initiatives.
- Enable downstream consumption for BI, data science, and application workloads.
- Write optimized SQL/T-SQL queries, stored procedures, and curated datasets for reporting.
- Automate deployments, DevOps workflows, testing pipelines, and workspace configuration.
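The ingestion and transformation duties above (API automation, ETL development, curated datasets) typically reduce to small, testable helpers that run inside notebooks or workflows. A minimal sketch in plain Python, where the field names, cursor-based pagination scheme, and endpoint are all hypothetical illustrations rather than anything specified in this posting:

```python
from datetime import datetime, timezone
from typing import Optional

# Hypothetical normalizer for a raw (bronze) landing layer: field names
# and coercion rules are illustrative only.
def normalize_record(raw: dict) -> dict:
    """Coerce the id to int, trim the name, and stamp ingestion time."""
    return {
        "id": int(raw["id"]),
        "name": str(raw.get("name", "")).strip(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def next_page_url(base_url: str, payload: dict) -> Optional[str]:
    """Follow cursor-style pagination; return None when pages are exhausted."""
    cursor = payload.get("next_cursor")
    return f"{base_url}?cursor={cursor}" if cursor else None

if __name__ == "__main__":
    # One page of a hypothetical REST API response.
    page = {"items": [{"id": "7", "name": "  Acme  "}], "next_cursor": "abc"}
    rows = [normalize_record(r) for r in page["items"]]
    print(rows[0]["id"], rows[0]["name"])  # → 7 Acme
    print(next_page_url("https://api.example.com/v1/items", page))
```

Keeping pagination and normalization as pure functions like this makes the ingestion logic unit-testable without network access, which matters once these pipelines are wired into CI/CD and Databricks Workflows.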
Qualifications
- 8+ years of experience designing and developing scalable data pipelines in modern data warehousing environments, with full ownership of end-to-end delivery.
- Expertise in data engineering and data warehousing, consistently delivering enterprise-grade solutions.
- Proven ability to lead and coordinate data initiatives across cross-functional teams.
- Advanced proficiency in SQL, Python, and ETL/ELT frameworks, including performance tuning and optimization.
- Hands-on experience with Azure, Databricks, and integration with enterprise systems.