Data Scientist

Triunity Software

Los Angeles, CA
Full Time
Paid
Responsibilities:

    Data Scientist (NYC, LA, SF, or Seattle)

    Hybrid role: onsite 3 days a week

    We are seeking a highly motivated and talented Senior Data Scientist to join our team of experts in developing and maintaining recommendation and personalization algorithms for Disney Streaming's suite of streaming video apps. As a member of our team, you will play a pivotal role in shaping the future of our streaming services by applying state-of-the-art machine learning methods to meet strategic product personalization goals.

    • Algorithm Development and Maintenance:

    o Utilize cutting-edge machine learning techniques to develop and enhance algorithms for personalization, recommendation, and predictive systems.

    o Take ownership of maintaining and optimizing algorithms deployed in production environments.

    o Serve as the point person for explaining methodologies to both technical and non-technical teams, fostering clear communication.

    • Analysis and Algorithm Optimization:

    o Conduct in-depth analysis of user interactions within our apps and user profiles to drive improvements in key personalization metrics.

    o Collaborate with data scientists and engineers to continually refine algorithms and enhance their performance.

    • MVP Development:

    o Innovate and develop machine learning products that can be used for new production features or by downstream production algorithms.

    o Work closely with cross-functional teams to prototype and operationalize personalization solutions.

    • Development Best Practices:

    o Establish and maintain best practices for algorithm development, testing, and deployment, ensuring high-quality code and efficient processes.

    • Collaboration with Product and Business Stakeholders:

    o Identify and define new personalization opportunities by collaborating with product and business stakeholders.

    o Collaborate with other data teams to improve data collection, experimentation, and analysis methods.

    Required Qualifications:

    • 7+ years of analytical experience

    • 5+ years of experience developing machine learning models and performing data analysis with Python and tensor-based model development frameworks (e.g., PyTorch, TensorFlow)

    • 5+ years of experience writing production-level, scalable code (e.g., Python, Scala)

    • 5+ years of experience developing algorithms for deployment to production systems

    • In-depth understanding of modern machine learning (e.g., deep learning methods), models, and their mathematical underpinnings as applied to recommendation engines

    • In-depth understanding of the latest in natural language processing techniques and contextualized word embedding models

    • Experience deploying and maintaining pipelines (AWS, Docker, Airflow) and in engineering big-data solutions using technologies like Databricks, S3, and Spark

    • Familiarity with data exploration and data visualization tools like Tableau, Looker, etc.

    • Understanding of statistical concepts (e.g., hypothesis testing, regression analysis)

    • Ability to gauge the complexity of machine learning problems and a willingness to execute simple approaches for quick, effective solutions as appropriate

    • Strong written and verbal communication skills

    • Ability to explain how models are used and algorithms behave to both technical and non-technical audiences

    Additional Preferred Qualifications:

    • MS or PhD in computer science, data science, statistics, math, or related quantitative field

    • Production experience with developing content recommendation algorithms at scale

    • Experience building and deploying full stack ML pipelines: data extraction, data mining, model training, feature development, testing, and deployment

    • Experience with graph-based data workflows such as Apache Airflow

    • Experience engineering big-data solutions using technologies like EMR, S3, Spark, Databricks

    • Familiar with metadata management, data lineage, and principles of data governance

    • Experience loading and querying cloud-hosted databases such as Snowflake

    • Familiarity with automated deployment, AWS infrastructure, Docker or similar containers

    Flexible work from home options available.