
Machine Learning Engineer - Annapolis

G11 Technology Partners

Annapolis, MD
Full Time
Paid
  Responsibilities

    MACHINE LEARNING (ML) DATA PIPELINE ENGINEER

    WHO WE ARE

    AmplioAI is a motivated start-up operating at the bleeding edge of Artificial Intelligence. We use AI to enhance athletic performance by providing athletes and their coaches with dynamic data tracking, performance projections, and insights. This detailed information generates the metrics needed to identify performance strengths and weaknesses, so both coaches and athletes can create optimal training regimens and better prepare for matchups and competitions. We are currently perfecting our craft in the team-based sector but plan to expand to the mass market shortly, allowing more individuals to make use of their athletic data insights.

    Detailed information can be viewed here:

    https://youtu.be/SROckWQmHoE

    AmplioAI is part of the G11 Technology Partners (g11.tech) family. G11 helps connect top-notch engineers with quality firms.

    WHAT WE NEED

    We are currently seeking a Machine Learning (ML) engineer. Your primary function will be working as a data pipeline engineer with a strong focus on applied machine learning. The ideal candidate will have experience writing SQL and PHP for data pipelines.

    You will be expected to start by February 2020. This is a full-time, 100% remote position. The ideal candidate will have the ability to work in a fast-paced start-up environment. This is a long-term commitment (1+ years).

    SKILL SET

    • Extensive experience in PyTorch, Python, and/or R
    • Proficiency in SQL and PHP
    • Experience working with neural networks
    • Ability to develop and maintain state-of-the-art models 
    • Strong critical thinking and reasoning skills, including the ability to think about what learned representations are most useful to the final product
    • Ability to identify and propose new metrics that address needs before they are known or voiced
    • Strong data and machine learning experience or capabilities
    • Ability to absorb the product/business vision and apply it to development
    • Ability to quickly utilize state-of-the-art Machine Learning models and incorporate them into production
    • Ability to quickly read, understand, and implement ideas from new machine learning papers
    • Ability to reason about ideas in new machine learning papers and implement modifications to them that suit specific product needs
    • Ability to scale up and automate machine learning pipelines so that they run independently on remote servers (e.g., using Amazon Web Services or Google Cloud); a brief sketch of this kind of pipeline work follows this list
    • General programming acuity (e.g., you may have to modify data pipelines, application code, or C++ implementations)
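
    To give a feel for the pipeline work described above, here is a minimal sketch of a single normalization step in Python. The `stryd_runs` table, its columns, and the z-score transform are hypothetical placeholders for illustration, not AmplioAI's actual schema or methods:

```python
# Minimal sketch: pull raw sensor rows from SQL, normalize a metric,
# and write the result back for downstream apps or APIs to read.
# Table/column names and the z-score transform are hypothetical.
import sqlite3
import statistics

def normalize_power(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT id, power_watts FROM stryd_runs WHERE power_watts IS NOT NULL"
        ).fetchall()
        if len(rows) < 2:
            return  # need at least two samples to compute a spread
        values = [v for _, v in rows]
        mean = statistics.fmean(values)
        spread = statistics.stdev(values) or 1.0  # guard against zero variance
        conn.executemany(
            "UPDATE stryd_runs SET power_z = ? WHERE id = ?",
            [((v - mean) / spread, rid) for rid, v in rows],
        )
        conn.commit()
    finally:
        conn.close()
```

    In production, a scheduler (e.g., cron or a managed cloud job) would run steps like this unattended on a remote server, which is the kind of automation the skill list asks for.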

    WITHIN 3 MONTHS, YOU WILL:

    • Develop and modify clean, scalable pipelines for a top-tier university athletic program that interface with our web and mobile apps:
      • For running data (Stryd sensor)
      • For body-pose data (video and algorithmic output)
      • For nutrition data (various biomarkers collected through blood testing and health sensors)

    WITHIN 6 MONTHS, YOU WILL:

    • Continue maintaining the pipelines listed above and fix issues as they scale
    • Develop the psychometrics pipeline
    • Develop various other pipelines needed for a top-tier university athletic program
    • Develop additional pipelines for sensors we may add as new clients come in 
    • Integrate the pipelines into a common API that our apps and other products use

    WITHIN 12 MONTHS, YOU WILL:

    • Continue maintenance of all pipelines and APIs listed above 
    • Identify common data-input problems to address as we add new sensors
    • Develop new pipelines for sensors
    • Troubleshoot non-virtual problems (i.e., physical problems that arise when users collect data with sensors)
    • Help identify new sensors that could provide additional valuable data
    • Figure out what data transformations and normalization are optimal for each data source
    • Interface with coaches and domain experts about what type of data representations are most useful