Principal Data Engineer

Technacity Group

Los Angeles, CA
Full Time
Paid
    Job Description

    IS THIS YOU?

    You’re ready to take the next step in your data engineering career at a fast-moving, successful company building out its next-generation streaming analytics infrastructure! You care deeply about data consistency and integrity. You consider yourself a scrappy technologist, passionate about data infrastructure; with your attention to detail and insistence on doing things correctly, you know you can make a big impact on a small team! You’re an excellent communicator, and you know that mentoring others accelerates your own growth.

    WHAT YOU’LL DO:

    • Build new systems that provide a real-time streaming analytics and event-processing pipeline based on a fast data architecture
    • Build an enterprise-grade data lake to support both business analytics needs and next-generation data infrastructure
    • Build a data integration toolkit for backend services
    • Support our data science team in deploying new algorithms for matchmaking, fraud, and cheat detection
    • Find better ways to move massive amounts of data from a variety of sources into formats consumable by reporting systems and people
    • Improve monitoring and alarms around data integrity and replication lag
    • Support our product development team in creating new events to measure and track

    Basic Qualifications:

    • 4+ years of experience programming in Scala/Java or Python
    • Experience with AWS data products (Data Pipeline, Athena, Pinpoint, S3, etc.)
    • Experience deploying data infrastructure
    • Experience with recognized industry patterns, methodologies, and techniques

    Bonus:

    • Familiarity with Agile engineering practices
    • 2+ years of experience with Kubernetes and Helm charts
    • 4+ years of experience with Spark, Scala and/or Akka
    • 4+ years of experience with Spark Streaming, Storm, Flink, or other Stream Processing technologies
    • 2+ years of experience working with Kafka or similar data pipeline backbone
    • 4+ years of experience with Unix/Linux systems with scripting experience in Shell, Perl or Python
    • 3+ years of experience with NoSQL implementations (Elasticsearch, Cassandra, etc.)
    • Familiarity with Snowflake or other OLAP systems
    • Familiarity with Kinesis and Lambda
    • Prior experience in gaming
    • Prior experience in finance