
Principal Data Engineer (Remote)

Cloudbeds

San Diego, CA • San Francisco, CA • Seattle, WA
Full Time
Paid
  • Responsibilities

    Job Description

    Cloudbeds is a travel SaaS technology company that works to make the world a more welcoming place. We build advanced cloud-based hospitality software for hotels, hostels, vacation rentals, and groups that manages reservations and guests, distributes room availability, sells inventory, and collects payments. Our hundreds of team members are globally distributed across over 40 countries and, altogether, we speak 20+ languages. How do we do it? On a #remotefirst platform that allows every member of our team to work from wherever they are around the globe. We’re looking for people who want to disrupt the travel industry and love to travel as much as we do.

    As a Principal Data Platform Engineer at Cloudbeds, you will implement our company-wide data strategy across all teams and departments to deliver a best-in-class data experience to our customers and partners in over 150 countries, as well as internally within Cloudbeds. You will work closely with our Data Platform, Architecture and Platform Services, Data Products, and Infrastructure teams to advance our Data Platform and Data Services vision: processing terabytes of platform and industry data from multiple databases and origins in an automated fashion. We are in the early days of designing a data platform to accelerate all our data operations and unlock the creation of our future products, and we’d love for you to join us and expand upon endless opportunities to innovate and drive an industry-leading, comprehensive, and global data experience for travel.


    Location: US/Canada (Remote)


    What You Will Do:

    • Own the end-to-end delivery of projects related to our data platform initiative, from conception and design to development and production monitoring
    • Enable our cross-domain data and software product teams to perfect our products and expand our offering, and give engineering teams easy, secure access to data so they can deliver faster
    • Democratize access to data and automate operations on large amounts of sensitive data efficiently, securely, and reliably
    • Participate in the strategic development of methods, techniques, and evaluation criteria for data-related projects, including build-vs-buy assessments at certain stages, backed by proofs of concept, benchmarking, etc.
    • Take a consultative approach with platform and product engineering teams so that data durability, consistency, conformity, and scalability can be achieved transparently through the Data Platform and Services
    • Ship your code with our continuous integration process
    • Consult and partner with the Operations group as they evolve the maturity of our monitoring systems and processes to improve visibility and failure detection in our data platform infrastructure
    • Build and maintain high-throughput data pipelines with state-of-the-art technologies including but not limited to Kafka/Kinesis Streams, Spark, AWS Database Migration Service/Debezium, EMR, Hudi/Delta Lake, Redshift, Cassandra/DynamoDB, Airflow, or equivalents
    • Optimize and maintain current platform data pipelines while we modernize with a new platform
    • Implement logging and debugging approaches in a standardized fashion
    • Collaborate with Data Architecture, Business Intelligence, Analytics, and Infrastructure teams on a daily basis
    • Develop a framework for future extensions through standardized modern workflows


    You’ll Succeed With:

    • 8+ years of experience as a Data Platform Engineer
    • 2+ years of experience working with Amazon Web Services
    • Bachelor’s or Master’s degree in computer science or a related field, or equivalent experience
    • You like to think at scale and design, develop, and operate production data stores, messaging applications, data pipelines, and data services that meet the goals of low latency, high availability, resiliency, security, and quality
    • Development style with empathy for people and how they use your work
    • You use your technical experience to educate your peers in data platform engineering technologies, best practices, and platform thinking
    • Hands-on experience with data and platform design patterns, knowledge of their trade-offs, and awareness of anti-patterns depending on the use case at hand
    • You have know-how and/or experience in building and implementing large-scale, high-throughput, low-latency, and secure data microservices applications
    • A background in partnering and consulting with platform and product teams that are on a journey of building domain- and event-driven systems
    • Expert knowledge and experience developing high-throughput, low-latency data microservices applications
    • Strong knowledge of how to compose and implement structural data models with relational stores, such as PostgreSQL and MySQL
    • Experience with data stream processing, messaging platforms, and ecosystem frameworks such as Kafka/Kinesis, RabbitMQ, Spark Streaming, Flink, Debezium/AWS DMS, KSQL, streaming microservices, etc.
    • Hands-on knowledge of one or more NoSQL, caching, and/or search stores such as DynamoDB/Cassandra, Redis/Memcached, Elasticsearch, or Lucene
    • Hands-on knowledge with Docker, Kubernetes, Terraform, or other similar tools
    • Ability to work in an Agile Scrum environment
    • Ability to thrive in a fast-paced environment
    • Ability to work remotely and manage your own time in an international team
    • Exceptional written and verbal communication in English


    Nice-to-Have:

    • Know-how of application and dimensional model implementation
    • Hands-on knowledge of ETL technologies and techniques
    • A background in DataOps and Service ownership, and a clear understanding of bounded contexts and how they map onto microservices 
    • Experience moulding fresh environments into efficient, mature data platforms
    • You have experience working with data technologies that power analytics (e.g., Apache Sqoop, Airflow, Hadoop, Hive, Spark, Flink, or similar technologies)


    Our company culture supports flexible working schedules with an open vacation policy, personal and professional development for individual growth, and the opportunity to travel and work remotely with great people. If you think you have the skills and passion, we’ll give you the support and opportunity to grow your career. If you would like to be considered for the role, we would love to hear from you!


    Company Awards to Check Out! 

    Best Startup Employers in 2020 | Forbes

    Best Places to Work | HotelTechReport (2018, 2019, 2020)

    Deloitte’s North America Technology Fast 500 (2019)

    Inc. 500 Fastest Growing Companies (2018 & 2019) 

    Inc. Best Places to Work (2017 & 2018) 

    Start-Ups to Watch in 2018 | Forbes

    Connect MIP Award (Technology)


  • Locations
    San Diego, CA • San Francisco, CA • Seattle, WA