
Python Backend/Data Engineer

Cloud Agronomics

Boulder, CO
Full Time
Paid

    BIG DATA ENGINEER

    ABOUT US:

    Cloud Agronomics is an AgTech startup applying remote sensing and novel analytics to power the next wave of actionable, real-time farm management insights. Backed by Lightspeed Venture Partners, Cloud Agronomics is changing the way the agriculture industry makes decisions. We're looking for a talented, motivated Big Data Engineer to build the cloud-based data pipelines and storage systems that enable our company's core ML experiments.

    Right now, farmers spend millions of dollars on agronomy solutions, yet 20% of the food they plant never makes it to harvest. Nationwide, this amounts to a $440 billion annual loss and means that critical resources like land, water, and fertilizer are often overused. At Cloud Agronomics, we're using recent advances in broadband spectral imaging technology to generate insights with a precision never before seen in the Ag industry. Our airborne sensing packages collect up to 300x more data features than existing solutions and enable novel advances in ML and analytics. The insights we generate have substantial real-world impact: they help increase food production, reduce resource use, and fight climate change.

    An ideal candidate for the Big Data Engineering role has practical experience, a love of writing high-quality, production-ready code, and a passion for the problem we're trying to solve. Using a mixture of distributed cloud computing technologies, you will work with our data science and spectroscopy teams to create big data pipelines that process terabytes of high-dimensional spectral data. You'll automate the workflow from raw data collection to actionable, impactful insights for our customers.

    WHAT YOU'LL DO:

    • Leverage Azure, AWS, and platform-agnostic distributed cloud-based technologies to build production-ready big data pipelines.
    • Engineer choreographed and/or orchestrated big data pipelines to process and collate terabytes of raw hyperspectral data.
    • Help to implement new technical architectures utilizing the appropriate tools.
    • Write functional requirement documents and implementation guides.
    • Plan, coordinate, and communicate within your team, as well as cross-functionally with our data science and spectroscopy teams, to optimize existing pipelines and processes.
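For candidates unfamiliar with the "choreographed and/or orchestrated" distinction above, here is a minimal illustrative sketch in plain Python (not Cloud Agronomics code): an orchestrated pipeline has a central coordinator that invokes each step in order, while a choreographed pipeline has steps that react independently to events on a shared bus. The step names and in-memory bus are hypothetical stand-ins for real services and a broker such as Kafka or RabbitMQ.

```python
from collections import defaultdict

# Hypothetical pipeline steps; each appends a marker to the data it processes.
def ingest(data):    return data + ["ingested"]
def calibrate(data): return data + ["calibrated"]
def publish(data):   return data + ["published"]

# Orchestration: one coordinator knows and controls the whole sequence.
def orchestrated(raw):
    data = ingest(raw)
    data = calibrate(data)
    return publish(data)

# Choreography: steps subscribe to event topics; no single step knows the
# full flow, so steps can be added or replaced without a central change.
class Bus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def emit(self, topic, data):
        for handler in self.handlers[topic]:
            handler(data)

def choreographed(raw):
    bus = Bus()
    result = {}
    bus.subscribe("raw", lambda d: bus.emit("ingested", ingest(d)))
    bus.subscribe("ingested", lambda d: bus.emit("calibrated", calibrate(d)))
    bus.subscribe("calibrated", lambda d: result.setdefault("out", publish(d)))
    bus.emit("raw", raw)
    return result["out"]

print(orchestrated([]))   # ['ingested', 'calibrated', 'published']
print(choreographed([]))  # same result, decentralized coordination
```

In practice the same trade-off appears in tool choice: AWS Step Functions or Apache Airflow lean toward orchestration, while Kafka or RabbitMQ-based designs lean toward choreography.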

    ABOUT YOU:

    • You're conscientious, humble, driven, and adaptable in a startup environment, with 2+ years of industry experience and at least a BS in Computer Science, Statistics, Mathematics, Applied Mathematics, or a related technical field.
    • You have significant experience with OOP and building production-quality codebases in Python for cloud-based environments.
    • You have experience using containerization platforms to standardize, package, and deploy applications.
    • You have excellent software development skills with the ability to write, test, deploy, and maintain high-quality production code.
    • You're a team player who will work with others at Cloud Agronomics to scope MVPs and associated product deliverables.
    • You're able to think creatively and critically and thrive in a fast-paced, dynamic work environment.

    PLUSES:

    • Experience using Apache Airflow together with Kubernetes for data pipelines.
    • Experience working with remote sensing and/or geospatial data.
    • Knowledge of microservices and event-driven architectures and the relevant tools for implementing them.
    • Experience with more than one of the following: Kafka, HDF5 and Parquet data formats, Neo4j, Apache Airflow, MLflow, Scala, Serverless Framework, AWS Batch, AWS EC2, AWS S3, AWS Step Functions, AWS Lambda, AWS SQS, AWS ECR/ECS, Boto3, Azure Blob Storage, Azure Containers, Azure Functions, Azure ML service, Azure SDK, Azure Databricks, Docker, Kubernetes, PySpark, RabbitMQ, Dask.

    JOB PERKS:

    • Competitive compensation.
    • Work with cutting-edge scientific data in a modern technology stack.
    • Join a fast-paced, growing team with a mission that has substantial real-world impact.
    • Friendly + motivating company culture with benefits such as PTO, 401k, and health savings account (HSA) contributions.