Senior Data Engineer

Foundation Partners Group

Location: Remote

Salary: Not specified

Type: Full-time

Posted: Today via LinkedIn

Job Description

Who We Are

Every life tells a story worth honoring. At Foundation Partners Group, we are privileged to help families create meaningful goodbyes during their most vulnerable moments.

Since 2010, our team of nearly 1,600 compassionate professionals has served communities across 21 states, delivering funeral, cremation, and cemetery services with care, respect, and personalization. We're not just a network of locations; we're a team united by purpose, a community committed to ensuring every farewell reflects the individuality of the life it celebrates.

What You Will Do

We are looking for a Senior Data Engineer to design, build, and operate modern data platforms at scale. You will work across cloud-native Azure infrastructure, Microsoft Fabric, Azure SQL, and AWS, building reliable pipelines, performant data models, and self-service analytics that drive real business decisions. A meaningful part of this role involves containerized workload execution: designing and deploying pipeline jobs using Azure Container Apps Jobs for scheduled and event-driven data processing. This is a hands-on role suited for an engineer who thrives in ambiguity, values clean architecture, and moves comfortably between strategy and implementation.

RESPONSIBILITIES

Data Platform and Architecture

  • Design and build scalable data pipelines using Python and cloud-native orchestration tools, including Azure Data Factory, Azure Container Apps Jobs, and Fabric Data Pipelines.
  • Architect data solutions across Microsoft Fabric Warehouses, Azure SQL Database, and AWS (S3, Redshift), selecting the right tool for the workload.
  • Implement Medallion/layered architecture patterns (Bronze to Silver to Gold) for structured, governed data delivery (a minimal sketch follows this list).
  • Manage and optimize large-scale data warehouse environments with a focus on performance, cost, and maintainability.
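
To make the Medallion reference above concrete, here is a minimal, purely illustrative Python/pandas sketch of a Bronze-to-Silver-to-Gold flow. The local paths, file names, and columns (order_id, order_date, amount) are hypothetical stand-ins for real lake locations and schemas, not a description of any actual pipeline.

```python
from pathlib import Path

import pandas as pd

LAKE = Path("lake")  # hypothetical local stand-in for ADLS / OneLake paths
for layer in ("bronze", "silver", "gold"):
    (LAKE / layer).mkdir(parents=True, exist_ok=True)

# Bronze: land the raw extract exactly as received (hypothetical source file).
bronze = pd.read_csv("raw/orders.csv")
bronze.to_parquet(LAKE / "bronze" / "orders.parquet", index=False)

# Silver: deduplicate, enforce types, and drop unusable rows.
silver = (
    bronze.drop_duplicates(subset=["order_id"])
    .assign(order_date=lambda df: pd.to_datetime(df["order_date"], errors="coerce"))
    .dropna(subset=["order_id", "order_date"])
)
silver.to_parquet(LAKE / "silver" / "orders.parquet", index=False)

# Gold: business-ready aggregate for BI consumption.
gold = (
    silver.assign(order_month=silver["order_date"].dt.to_period("M").astype(str))
    .groupby("order_month", as_index=False)
    .agg(order_count=("order_id", "count"), revenue=("amount", "sum"))
)
gold.to_parquet(LAKE / "gold" / "monthly_orders.parquet", index=False)
```

The value of the layering is that each stage is re-derivable from the one below it, so cleaning rules and business logic can evolve without re-ingesting source data.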

Pipeline Development and Integration

  • Develop Python-based ETL/ELT pipelines to ingest and transform data from APIs, flat files, databases, and SaaS platforms.
  • Build and deploy containerized pipeline jobs using Azure Container Apps Jobs, including scheduling, scaling rules, secrets management via Azure Key Vault, and integration with Azure Container Registry.
  • Build and maintain data movement between on-premises SQL Server environments and cloud targets.
  • Design idempotent, fault-tolerant pipeline patterns with robust logging, alerting, and retry logic (see the sketch after this list).
  • Collaborate with analytics and reporting teams to deliver clean, well-documented data models for Power BI or similar BI tools.
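
For the idempotent, fault-tolerant pattern called out above, the sketch below shows one minimal way to structure it using only the Python standard library: a high-watermark file so reruns never double-load data, plus simple retries with backoff and logging. The extract_since/load stubs, watermark path, and retry settings are hypothetical placeholders.

```python
import json
import logging
import time
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

WATERMARK = Path("state/orders_watermark.json")  # hypothetical state store


def run_with_retries(fn, attempts=3, backoff_seconds=5):
    """Retry a step with simple linear backoff, logging each failure."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            log.exception("Attempt %d/%d failed", attempt, attempts)
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)


def extract_since(watermark):
    # Placeholder: pull only records newer than the watermark so reruns
    # never double-load data (the core of idempotency here).
    return []


def load(rows):
    # Placeholder: upsert by business key into the target table.
    pass


def main():
    state = json.loads(WATERMARK.read_text()) if WATERMARK.exists() else {"watermark": "1970-01-01"}
    rows = run_with_retries(lambda: extract_since(state["watermark"]))
    run_with_retries(lambda: load(rows))
    if rows:
        state["watermark"] = max(r["updated_at"] for r in rows)
    WATERMARK.parent.mkdir(parents=True, exist_ok=True)
    WATERMARK.write_text(json.dumps(state))
    log.info("Loaded %d rows; watermark now %s", len(rows), state["watermark"])


if __name__ == "__main__":
    main()
```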

Cloud Infrastructure and Operations

  • Manage data infrastructure across Azure (Fabric, Azure SQL, Azure Data Lake, Key Vault, Container Apps, Container Registry) and AWS (S3, EC2, RDS/Redshift).
  • Containerize data workloads using Docker; deploy and operate them as Azure Container Apps Jobs for scheduled batch processing and event-triggered pipeline execution (a minimal entrypoint sketch follows this list).
  • Implement infrastructure-as-code principles and version-controlled deployment practices using GitHub, Bicep or Terraform, and CI/CD tooling (Azure DevOps or GitHub Actions).
  • Monitor platform health, optimize compute and storage costs, and enforce data security and access governance.
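
As a rough illustration of the Container Apps Jobs pattern referenced above, a common shape is a Docker image whose entrypoint is a short Python script: it reads its configuration from environment variables (which the job definition can populate, including from Key Vault-backed secrets), processes one batch of work, and exits non-zero on failure so the execution is recorded as failed. The variable names and the process_batch stub below are hypothetical.

```python
import logging
import os
import sys

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("batch-job")


def process_batch(connection_string, batch_date):
    # Placeholder for the real ingest/transform logic.
    log.info("Processing batch for %s", batch_date)
    return 0  # rows processed


def main():
    # Hypothetical env vars; in a Container Apps Job these would be set on the
    # job definition, with sensitive values referenced from Azure Key Vault.
    conn = os.environ.get("SQL_CONNECTION_STRING")
    batch_date = os.environ.get("BATCH_DATE", "today")
    if not conn:
        log.error("SQL_CONNECTION_STRING is not set; aborting.")
        return 1
    try:
        rows = process_batch(conn, batch_date)
    except Exception:
        log.exception("Batch failed")
        return 1
    log.info("Batch complete: %d rows", rows)
    return 0


if __name__ == "__main__":
    sys.exit(main())  # a non-zero exit marks the job execution as failed
```

The job's schedule (a cron expression) or event trigger then decides when this entrypoint runs; the script itself stays trigger-agnostic.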

Collaboration and Engineering Excellence

  • Partner with data analysts, BI developers, software engineers, and business stakeholders to translate requirements into technical solutions.
  • Maintain thorough technical documentation: pipeline specs, data dictionaries, runbooks, and architecture diagrams.
  • Champion engineering best practices: code reviews, testing, modular design, and reusable frameworks.
  • Mentor junior engineers and contribute to team standards and knowledge sharing.

REQUIREMENTS

  • Python: fluent in writing production-grade pipelines, data transformations, and automation scripts.
  • RDBMS: Advanced T-SQL and/or ANSI SQL; experience with SQL Server, Azure SQL DB, and cloud warehouse query engines (Redshift, Fabric).
  • MS Fabric: Warehouses, Lakehouses, Data Pipelines, OneLake, and Fabric's unified analytics model.
  • Azure ecosystem: Azure Data Factory, Azure SQL Database, Azure Data Lake Storage, Azure Key Vault, Azure Container Apps Jobs, Azure Container Registry, and related services.
  • Containerization: Docker image development, container registry management, and deploying workloads as Container Apps Jobs with schedule and event triggers, scaling rules, and environment variable/secret injection.
  • AWS data services: S3 for data lake storage, Redshift for cloud data warehousing.
  • Data modeling: dimensional modeling, star/snowflake schema design, and entity-relationship modeling for both OLTP and OLAP workloads (see the sketch after this list).
  • Version control and DevOps: Git, GitHub, pull request workflows, and CI/CD pipelines.
  • Data Visualization: Power BI, Tableau.
  • Strong analytical problem-solving: able to decompose ambiguous business problems into clean technical solutions.
  • Clear written and verbal communication with both technical peers and non-technical stakeholders.
  • Self-directed with strong attention to detail; comfortable owning work end-to-end.
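
As a small reference point for the dimensional-modeling requirement above, the sketch below splits a hypothetical flat extract into a customer dimension with a surrogate key and an orders fact table that references it; the columns and data are invented purely for illustration.

```python
import pandas as pd

# Hypothetical flat extract combining customer attributes and order facts.
flat = pd.DataFrame({
    "customer_email": ["a@example.com", "a@example.com", "b@example.com"],
    "customer_name":  ["Ann", "Ann", "Bob"],
    "order_id":       [1, 2, 3],
    "amount":         [10.0, 25.0, 40.0],
})

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    flat[["customer_email", "customer_name"]]
    .drop_duplicates()
    .reset_index(drop=True)
    .rename_axis("customer_key")
    .reset_index()
)

# Fact: measures plus the foreign key that points back at the dimension.
fact_orders = (
    flat.merge(dim_customer, on=["customer_email", "customer_name"])
        [["order_id", "customer_key", "amount"]]
)

print(dim_customer)
print(fact_orders)
```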

Experience

  • 5 to 8 years of hands-on data engineering experience in production environments.
  • Proven track record designing and delivering data platforms on Azure and/or AWS.
  • Demonstrated experience migrating or modernizing legacy on-premises data infrastructure to cloud-native solutions.
  • Hands-on experience running workloads with Azure Container Apps Jobs or a comparable containerized job execution platform.

PREFERRED QUALIFICATIONS

  • Experience with MS Fabric in a production capacity, including Fabric Warehouses and OneLake integration.
  • Familiarity with dbt (data build tool) or similar transformation frameworks.
  • Exposure to streaming or near-real-time data ingestion patterns (Event Hub, Kafka, Kinesis).
  • Experience with Workday, Adaptive Planning, or other ERP/FP&A source systems.
  • Power BI experience including semantic model development, dataset optimization, or DirectQuery/Import mode tradeoffs.
  • Agile/Scrum team experience; comfort working in iterative delivery cycles.
  • Relevant cloud certifications: Microsoft Azure Data Engineer (DP-203), AWS Certified Data Analytics, or equivalent.
  • Bachelor's degree in Computer Science, Information Systems, Data Science or a related field. In lieu of formal education, equivalent professional experience demonstrating the same depth of knowledge is accepted.


Why Join Our Support Center Team

Purpose-Driven Impact

Contribute to meaningful work that supports our field teams and helps deliver compassionate, high-quality experiences for families during life's most important moments.

Competitive Compensation & Comprehensive Benefits

  • Medical, dental, prescription, and vision coverage
  • Generous paid time off, including vacation, sick time, and holidays
  • 401(k) with company match
  • Company-paid life insurance, short-term disability, and long-term disability

Collaborative & Inclusive Culture

Work alongside a mission-driven, collaborative team that values accountability, innovation, and respect, where diverse perspectives are encouraged and leadership is accessible.
