Data Engineer

Cooper Smith Advertising

Toledo, OH
Paid
  • Responsibilities

    Cooper Smith Advertising in Toledo, OH is seeking a talented Data Engineer to join our innovative team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure. You will work closely with our cross-functional teams to gather requirements and implement efficient data solutions that drive business growth. With a competitive salary of $60,000 - $75,000, you'll have the opportunity to work in a collaborative and dynamic environment that values creativity and innovation. Join us and be part of a company that is revolutionizing the advertising industry with data-driven insights and strategies. Apply today and unleash your potential!

    Responsibilities:
    • Assist in the development of data infrastructure capable of ingesting and storing data and serving large volumes of queries quickly
    • Prepare and capture data for machine learning and automation
    • Build fault-tolerant, self-healing, adaptive, and highly accurate data and event computational pipelines
    • Design enhancements, updates, and programming changes for portions and subsystems of data pipelines, repositories, and models for structured and unstructured data
    • Mine data using modern tools and programming languages
    • Analyze, design, and determine the coding, programming, and integration activities required
    • Execute and write portions of testing plans, protocols, and documentation
    • Identify and debug issues with code, and suggest changes and improvements
    • Collaborate with the Analytics team on project progress and issue resolution
    • Support projects requiring data engineering expertise

    Job Type: Full-time (Hybrid)

    Schedule:
    • 8-hour shift
    • Monday to Friday

    Ability to Commute/Relocate:
    • Toledo, OH 43617: Reliably commute or plan to relocate before starting work (Preferred)

    Qualifications:
    • Bachelor's or master's degree in computer science, information systems, engineering, or a related field
    • 2-4 years of data engineering experience
    • Exposure to GitHub and knowledge of source control with Git
    • Experience developing and maintaining ETL/ELT tools, solutions, and processes
    • Fluent in structured and unstructured data, its management, and modern data transformation methodologies
    • Extensive experience and proficiency in scripting languages and programming frameworks; Python required
    • Strong relational database and SQL skills; familiarity with dbt and version control
    • Experience working with public, private, and hybrid clouds
    • Proven history of building data solutions
    • Passion for working with vast data sets
    • Ability to provide data-backed recommendations
    • Proficiency in troubleshooting and solving complex problems
    • Strong attention to detail and accuracy
    • Team-oriented, collaborative, and effective communicator who contributes to a positive workplace environment
    • Flexible and adaptable to the changing needs of the agency
    • Critical thinker and self-starter able to work in a fast-paced environment
    • Experience using open-source big data frameworks with Python a plus
    • Data visualization experience using Tableau or a similar BI tool
    • Knowledge of Python, SQL, R, VBA, Java, Git, Jenkins, Seq, PyCharm, DataSpell, Jupyter, VS Code
    • Experience with Snowflake, Tableau Server, Azure, or other cloud solutions
    • Marketing/advertising/tech agency background a plus
    • Local candidates preferred but not required (hybrid)

    In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.

    • Are you authorized to work in the United States?
    • Will you require sponsorship now or in the future to maintain work authorization in the United States?

    Compensation: $60,000 - $75,000 yearly