Cloud Engineer (Data)

DoiT International

Austin, TX
Full Time
Paid
    As a Cloud Engineer (Data), you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on Google Cloud. You will work on data migration and transformation tasks, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues.

    In this role, you will be the engineer working with our most strategic Google Cloud customers. Together with the team, you will support customer implementations of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring, and much more.

    The Google Cloud team helps customers transform and evolve their businesses through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an entrepreneurial team in this rapidly growing business, you will help shape how businesses of all sizes use technology to connect with customers.

    RESPONSIBILITIES:

    • Act as a trusted technical advisor to customers and solve complex Big Data challenges.
    • Create and deliver best practices recommendations, tutorials, blog articles, sample code, and technical presentations adapting to different levels of key business and technical stakeholders.
    • Communicate effectively via video conferencing for meetings, technical reviews and onsite delivery activities.

    REQUIREMENTS:

    • BA/BS degree in Computer Science, Mathematics or related technical field, or equivalent practical experience.
    • Experience with data processing software (such as Hadoop, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Spark).
    • Experience in writing software in one or more languages such as Java, C++, Python, Go and/or R.
    • Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments.
    • Experience architecting and developing internet-scale, production-grade Big Data solutions in virtualized environments such as Amazon Web Services, Azure, and Google Cloud.
    • Experience working with big data, information retrieval, data mining, or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, MongoDB, SparkML, TensorFlow).