Databricks Data Engineer

Spinnaker Search

Bedminster, NJ
Full Time
Paid
  • Responsibilities

    This opportunity is with a highly respected, data-driven organization operating within the financial services and insurance ecosystem. The company is known for the scale and complexity of its data, the critical nature of its analytics platforms, and its long-standing reputation for rigor and independence.

    This role is a hands-on Data Engineer position focused on building and expanding a modern Databricks-based data lakehouse environment. The team is actively modernizing data ingestion, transformation, and analytics workflows, and this role plays a central part in that effort. It is an exciting opportunity for a data engineer who enjoys working with Spark-based platforms, scalable pipelines, and real production data volumes.

    This is a full-time, direct-hire role based in the Bedminster, NJ area, operating in a hybrid environment.

    The Role You’ll Play

    This position supports the design, development, and optimization of data pipelines and data platforms built on Databricks. You will work closely with other data engineers, analytics teams, and platform stakeholders to ensure scalable, reliable, and performant data solutions.

    Key responsibilities include:

    • Design and develop data pipelines using Databricks and Python

    • Build and optimize ETL and ELT workflows supporting a data lakehouse architecture

    • Develop and maintain Spark-based processing using Databricks notebooks and jobs

    • Implement data quality checks and validation processes across pipelines

    • Optimize performance of data pipelines, Spark jobs, and SQL queries

    • Support data modeling and warehousing best practices

    • Collaborate with cross-functional teams on data architecture and downstream integration

    • Deploy and manage code across non-production and production environments using CI/CD practices

    Background Profile

    The ideal candidate has hands-on experience with Databricks in a production environment and is comfortable owning data pipelines end to end. You understand how to design for scale, reliability, and performance, and you are comfortable troubleshooting issues across the data stack.

    Required experience includes:

    • Strong hands-on experience with Databricks in a production setting

    • Solid Python experience for data engineering and pipeline development

    • Experience building ETL or ELT pipelines using Spark-based technologies

    • Strong SQL skills and understanding of data modeling concepts

    • Experience working with relational databases, Oracle preferred

    • Familiarity with orchestration tools such as Airflow or similar

    • Experience working within cloud-based data platforms

    • Ability to communicate effectively with both technical and non-technical stakeholders

    Preferred or nice-to-have experience includes:

    • Databricks certification (Associate or higher)

    • Experience working with Spark outside of Databricks

    • Experience with NoSQL data stores

    • Exposure to R or PowerShell

    • Experience supporting analytics or reporting teams

  • Compensation
    $120,000-$135,000 per year