Senior Data Engineer

This description is a summary of our understanding of the job description. Click the 'Apply' button to find out more.


Role Description

We are seeking a Senior Data Engineer with deep expertise in Databricks and a proven track record in modern data platform architecture, ETL implementation, and best practices. The successful candidate will play a crucial role in designing, building, and optimizing our big data solutions, with Databricks as our core platform. Additionally, you will contribute to shaping our overall data strategy and driving innovation.

  • Serve as Databricks Subject Matter Expert (SME): Lead the adoption, optimization, and evangelization of Databricks and its ecosystem (Delta Lake, Unity Catalog, MLflow, Databricks SQL); a brief illustrative sketch follows this list.
  • Architect, build, and optimize big data solutions using Databricks and other modern data platforms such as Azure SQL, Synapse, and Snowflake.
  • Drive our data strategy, ensuring alignment with business goals and best practices. Lead platform governance and ensure compliance with industry standards.
  • Continuously optimize Databricks clusters, jobs, and workflows for performance, reliability, and cost-efficiency.
  • Lead the development and implementation of best practices for coding, data management, and platform operations. Mentor and upskill engineers, data scientists, and analysts on Databricks features and data engineering best practices.
  • Stay updated on the latest industry trends and innovations in data engineering. Engage in Databricks forums, conferences, and user groups to share knowledge and learn from the community.
  • Lead incident response efforts, troubleshoot complex issues, and implement long-term solutions.
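
To give a flavor of the Delta Lake and Unity Catalog work described above, here is a minimal, purely illustrative sketch of a bronze-layer ingest job. It assumes a Databricks-style runtime where `pyspark` and Delta Lake are available; the path and table names are hypothetical placeholders, not part of our actual platform.

```python
# Illustrative only: ingest raw JSON into a partitioned Delta table.
# The landing path and table name are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-ingest-sketch").getOrCreate()

raw = (
    spark.read.json("/mnt/raw/events/")           # hypothetical landing zone
    .withColumn("ingest_date", F.current_date())  # partition column for pruning
)

(
    raw.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")    # keeps per-query file scans small
    .saveAsTable("bronze.events")  # hypothetical Unity Catalog table
)
```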

Qualifications

  • At least 7 years of experience in data engineering, with a minimum of 3 years in architecting and operating Databricks solutions in production environments.
  • Expert-level proficiency in Databricks, including Delta Lake and Unity Catalog.
  • Experience implementing and optimizing ETL/ELT on modern platforms such as Azure SQL, Synapse, Databricks, and Snowflake, with a demonstrated ability to integrate multiple data sources (including structured and unstructured data, APIs, and streams).
  • Expertise in Delta Lake and Apache Iceberg formats for scalable batch and near real-time processing.
  • Strong programming skills in Python and PySpark with experience developing production-grade data processing applications and optimizing Spark jobs for performance.
  • Strong proficiency in SQL and modern data modeling methods (including medallion architecture, dimensional modeling, and star/snowflake schemas), with the capability to design efficient data structures for analytics; a minimal medallion-style sketch follows this list.
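
As a small illustration of the medallion-style modeling mentioned in the last point, the sketch below promotes a hypothetical bronze table to a silver table; all table and column names are assumptions.

```python
# Illustrative bronze -> silver step: deduplicate, enforce types, and apply a
# basic quality filter before exposing the table to analysts. Names are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

bronze = spark.read.table("bronze.events")

silver = (
    bronze.dropDuplicates(["event_id"])                  # makes re-runs idempotent
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # enforce timestamp type early
    .filter(F.col("event_id").isNotNull())               # simple data-quality gate
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```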

Requirements

  • Experience building CI/CD pipelines for data platforms using Databricks CLI, Terraform for infrastructure as code, and Git-based workflows to automate testing and deployment of data solutions.
  • Practical experience with machine learning workflows and MLOps practices, such as model training, versioning, and deployment using Databricks ML tools and MLflow; a short MLflow sketch follows this list.
  • Demonstrated ability to lead teams and mentor junior engineers and data scientists.
  • Experience in leading migrations from legacy platforms to modern data platforms, including Databricks.
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
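
For the MLflow point above, a minimal sketch of a tracked training run; the experiment path, toy model, and metric are hypothetical and only illustrate the logging and versioning flow.

```python
# Illustrative MLflow run: train a toy model, log a parameter and a metric, and
# store a versioned model artifact. Experiment path and names are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical workspace path
with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for the registry
```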

Benefits

  • Comprehensive benefits package including medical, dental, and vision coverage, a 401(k), PTO/paid sick leave, and additional benefits such as an employee stock purchase plan.

Apply for this position...
 