Location
Dallas, TX
Salary
Not specified
Type
Full-time
Posted
Today
Job Description
Senior Data Engineer
At Billee, we’re building the next generation of utility billing. Our goal is simple: make a complex, manual, and fragmented process feel seamless, transparent, and intelligent.
Our intelligence platform turns raw utility and billing data into actionable insight and regulatory peace of mind for multifamily property operators. Our platform is past the "greenfield" phase — foundational modeling is underway, our stack is chosen, and the roadmap is set. We're hiring our second dedicated data engineer to partner with our existing data engineer and help us move from foundation to scale.
What you'll work on
- Reporting & analytics.
Contribute to the modeled data and pipelines behind customer-facing reports on consumption, cost, and rate trends
- AI-ready data infrastructure.
Ingestion, semantic models, storage solutions, and retrieval (SQL, RAG, vector, or graph-based).
- Dimensional modeling at the core.
Contribute robust facts and dimensions that power analysis for us and for our customers
- Platform reliability.
Own testing, lineage, freshness monitoring, and alerting so data issues are caught before a customer sees them
- Cross-team partnership.
Translate vague product and compliance questions into concrete models, working directly with engineers, analysts, and PMs
Our stack
- Warehouse:
MotherDuck / DuckDB
- Orchestration:
Dagster
- Transformation:
dbt
- Language:
Python, SQL
- Cloud:
Azure
- Adjacent:
Hex, MCP
Requirements
- 4+ years in data platform engineering.
Bonus if you've been a primary builder on a platform from early stages
- Strong Python and SQL.
- Hands-on dbt experience.
- Dimensional modeling fluency.
Star schemas, facts and dimensions
- Direct experience with our stack is a significant plus.
In order of preference:
+ Dagster
(asset-based orchestration, sensors, partitions) — strongly preferred over Airflow experience alone
+ DuckDB or MotherDuck
— even side-project or exploratory use
- Comfort analyzing data directly.
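The star-schema fluency called for above can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3, with an entirely hypothetical fact/dimension pair (the table and column names are made up for this example, not Billee's actual schema):

```python
import sqlite3

# Hypothetical toy star schema: one fact table (consumption readings)
# keyed to one dimension table (properties).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_property (
    property_id   INTEGER PRIMARY KEY,
    property_name TEXT,
    city          TEXT
);
CREATE TABLE fact_consumption (
    reading_id  INTEGER PRIMARY KEY,
    property_id INTEGER REFERENCES dim_property(property_id),
    kwh         REAL,
    cost_usd    REAL
);
INSERT INTO dim_property VALUES
    (1, 'Oak Flats', 'Dallas'),
    (2, 'Elm Court', 'Austin');
INSERT INTO fact_consumption VALUES
    (1, 1, 120.0, 18.0),
    (2, 1,  80.0, 12.0),
    (3, 2, 200.0, 30.0);
""")

# The canonical dimensional query shape: aggregate measures from the
# fact table, sliced by an attribute of the dimension.
rows = cur.execute("""
    SELECT p.city, SUM(f.kwh) AS total_kwh, SUM(f.cost_usd) AS total_cost
    FROM fact_consumption f
    JOIN dim_property p USING (property_id)
    GROUP BY p.city
    ORDER BY p.city
""").fetchall()
print(rows)  # [('Austin', 200.0, 30.0), ('Dallas', 200.0, 30.0)]
```

The same facts-join-dimensions pattern is what dbt models formalize at warehouse scale: measures live in narrow fact tables, descriptive attributes live in dimensions, and reports are group-bys across the join.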
Bonus
- Experience with AI/LLM-adjacent data work: RAG pipelines, embedding stores, evaluation frameworks (e.g. LangSmith or PydanticAI), or knowledge-graph approaches for structured retrieval
- Azure experience
- Utility, energy, PropTech, or billing domain background
- Experience building data products for external customers (not just internal BI)
Who we're looking for
- Collaborative builder.
You turn vague requirements into concrete solutions by asking good questions, not by guessing
- Comfortable with ambiguity.
Our roadmap shifts; you can prioritize and make progress on incomplete information
- Ownership mindset.
You treat the platform as a product: you monitor it and think proactively about future needs
- Quality advocate.
You believe tests, observability, and lineage are features, not overhead
- Curious about new tooling.
You've been watching the modern data stack evolve and have opinions — about DuckDB, about Dagster vs. Airflow, about where LLMs do and don't belong in data pipelines
- No-task-too-small mindset.
Small team, lots of surface area. You'll occasionally build a quick report or debug someone else's pipeline