Data Engineer

Dennis Earl Hardy

Richardson, TX
Full Time
Paid
  • Responsibilities

    Job Title: Senior Data Engineer

    We seek an experienced Senior Data Engineer to join our team and lead the design, development, and maintenance of scalable data pipelines and cloud-based infrastructure. You will work with data scientists, analysts, and cross-functional teams to ensure seamless data integration and performance across our platforms.

    Key Responsibilities:

    • Data Pipeline Development: Build and maintain reliable ETL/ELT processes using Python and SQL to efficiently extract, transform, and load data from various sources into our data platforms.
    • Database Optimization: Manage and optimize SQL databases and data warehouses, ensuring efficient data retrieval and high performance.
    • Data Integration: Merge structured and unstructured data from diverse sources into a unified format for analysis and decision-making.
    • Data Governance: Implement data quality checks, validation protocols, and governance standards to maintain data integrity and consistency.
    • Collaboration: Partner with data scientists, software engineers, and product teams to define data requirements and develop solutions that meet business needs.
    • Performance Tuning: Enhance the performance of large-scale data processing systems, optimizing for faster data access and usability.
    • Documentation: Maintain detailed documentation of data engineering processes, pipelines, and system architectures.
    • Innovation: Stay current with emerging trends and technologies in data engineering and cloud services, integrating new tools and approaches as appropriate.

    Required Qualifications:

    • 6+ years of experience in advanced Python for data manipulation, automation, and scripting.
    • 6+ years of experience in advanced SQL, with expertise in complex query writing and database optimization.
    • Hands-on experience with Apache Airflow or Kubeflow for scheduling and workflow automation.
    • Strong experience with Azure Cloud (preferred), AWS, or GCP for managing data solutions and infrastructure.
    • Experience with Azure Kubernetes Service (AKS) and cloud-native data architectures.
    • Familiarity with big data technologies like Spark or Hadoop and data lake architectures.
    • Knowledge of CI/CD pipelines, version control (Git), and containerization tools like Docker.
    • **C2C Only**

    Preferred Skills:

    • Experience in data governance and quality assurance.
    • Familiarity with DevOps practices and collaboration within a cross-functional engineering environment.

    Education:

    • Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.