Senior Data Engineer

Published March 3rd, 2026

Summary

The Senior Data Engineer will design, build, and maintain the scalable data pipelines, models, and infrastructure that power analytics, business intelligence, and machine-learning products across the company. Partnering closely with business, product, and analytics teams, you will translate complex requirements into elegant, reliable data solutions and help drive the delivery of innovative data products. This role reports to the Senior Manager, Data Engineering.

Duties and Responsibilities

  • Build end-to-end data solutions on the Azure Databricks platform.
  • Design, develop, and maintain scalable ETL and streaming data pipelines on Azure Databricks, leveraging Apache Spark, Delta Lake, and Azure Data Lake Storage (ADLS Gen2) to enable reliable lakehouse architectures and efficient ingestion, transformation, and storage of data.
  • Build and optimize data models and schemas for analytics, reporting, and operational data stores.
  • Build and optimize Delta Lake / lakehouse patterns (Bronze/Silver/Gold layers), including schema evolution and time travel.
  • Develop high-quality PySpark and Spark SQL transformations, optimizing joins, partitioning, caching, and shuffle behavior.
  • Implement and maintain data quality frameworks, including data validation, monitoring, and alerting mechanisms.
  • Collaborate closely with data architects, analysts, data scientists, and product teams to align data engineering activities with business goals.
  • Leverage cloud data platforms (Azure, AWS, or GCP) to build and optimize data storage solutions, including data warehouses, data lakehouses, and real-time data processing.
  • Develop automation processes and frameworks for CI/CD, supported by version control, linting, automated testing, security scanning, and monitoring.
  • Contribute to the maintenance and improvement of data governance practices, helping to ensure data integrity, accessibility, and compliance with regulations such as GDPR.
  • Provide technical mentorship and guidance to junior team members, promoting best practices in software engineering, data engineering, and agile development.
  • Troubleshoot and resolve complex data infrastructure and pipeline issues on the Azure Databricks platform, ensuring minimal downtime and optimal performance.

Qualifications

Education and/or Experience: 

Required:

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
  • A minimum of 5 years of hands-on experience in data engineering, designing, building, and operating scalable data pipelines, data solutions, and ETL/ELT processes.
  • Strong knowledge of Databricks architecture and core components, including the Databricks Lakehouse, Delta Lake, Databricks SQL, Apache Spark clusters, Unity Catalog, Databricks Workflows (Jobs), MLflow, and Databricks Notebooks.
  • Extensive experience with cloud data platforms on Azure, AWS, or Google Cloud Platform (GCP).
  • Strong proficiency with Python, SQL, and Apache Spark for data processing.
  • Proven experience building reusable, metadata-driven data ingestion frameworks using Python and Scala.
  • Hands-on experience with modern data-platform components (object storage, lakehouse engines, orchestration tools, columnar warehouses, streaming services).
  • Proven experience with data modeling, schema design, and performance tuning of large-scale data systems.
  • Deep understanding of data engineering best practices: code repositories, CI/CD pipelines, test automation, monitoring, and alerting systems.
  • Skilled at crafting compelling data narratives through tables, reports, dashboards, and other visualization tools.
  • Strong problem-solving and analytical skills with excellent attention to detail.
  • Excellent communication skills and experience collaborating with technical and business stakeholders.

Preferred:

  • Master’s degree in Computer Science, Engineering, or a related field
  • Experience building data pipelines in an Azure Databricks environment
  • Hands-on experience integrating Azure Databricks with Azure DevOps, Azure Blob Storage / ADLS Gen2, Azure Key Vault, and Azure Data Factory
  • Familiarity with enterprise data modeling tools such as ERwin Data Modeler, including the ability to interpret and apply logical and physical data models to analytical and lakehouse architectures
  • Experience migrating to, or building, data platforms from the ground up
  • Experience with Infrastructure as Code (IaC) and Governance as Code
  • Familiarity with machine-learning workloads and partnering on feature engineering
  • Experience working in an Agile delivery model

Other Skills and Abilities:

The following will also be required of the successful candidate:

  • Strong organizational skills
  • Strong attention to detail
  • Good judgment
  • Strong interpersonal communication skills
  • Strong analytical and problem-solving skills
  • Able to work harmoniously and effectively with others
  • Able to preserve confidentiality and exercise discretion
  • Able to work under pressure
  • Able to manage multiple projects with competing deadlines and priorities

 

