Teller- Full Time

Flexion

Hull, MA
Full Time
Paid
  • Responsibilities

     

    CLOUD DATA ENGINEER

     

    WHAT THE JOB LOOKS LIKE: 

    The Cloud Data Engineer is a specialized role that participates in designing and implementing systems on public cloud infrastructure to deliver more analytical and business value from a wide range of data sources. You will work with the team to design and develop high-performance, resilient, automated data pipelines, streams, and applications, adapting technologies for ingesting, transforming, classifying, cleansing, and exposing data, using creative design to meet objectives. Your broad experience with data management technologies will enable you to match the right technologies to the required schemas and workloads. Our focus is on the AWS and GCP platforms, with a strong serverless bias. We rely heavily on Python, PySpark, BigQuery, and related technologies, and work in an Agile, DevOps team culture. We expect you to bring the array of specialized skills noted below, and to lead by learning.

     

    LOCATION: Boston, MA – one day per week required in the Boston office.

    DURATION: Temp to hire (3-month contract, then conversion to full-time permanent)

     

    RESPONSIBILITIES:

    • Build and maintain serverless data pipelines at terabyte scale using AWS and GCP services – AWS Glue, PySpark and Python, AWS Redshift, AWS S3, AWS Lambda and Step Functions, AWS Athena, AWS DynamoDB, GCP BigQuery, GCP Cloud Composer, GCP Cloud Functions, Google Cloud Storage, and others (a representative pipeline is sketched after this list)
    • Integrate new data sources from enterprise sources and external vendors using a variety of ingestion patterns, including streams, SQL ingestion, files, and APIs
    • Maintain and provide support for the existing data pipelines using the above-noted technologies
    • Develop and enhance the data architecture of the new environment, recommending optimal schemas, storage layers, and database engines (relational, graph, columnar, and document-based) according to requirements
    • Develop real-time/near-real-time data ingestion from a range of data integration sources, including business systems, external vendors, and partner and enterprise sources
    • Provision and use machine-learning-based data wrangling tools such as Trifacta to cleanse and reshape third-party data, making it suitable for use
    • Participate in a DevOps culture by developing deployment code for applications and pipeline services
    • Develop and implement data quality rules and logic across integrated data sources.
    • Serve as internal subject matter expert and coach to train team members in the use of distributed computing frameworks and big-data services and tools, including AWS and GCP services and projects
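
    For candidates unfamiliar with this stack, the following is a minimal, illustrative PySpark sketch of the kind of pipeline work described above. The bucket paths, column names, and schema are hypothetical; a production Glue job would add bookmarks, error handling, and monitoring:

        from pyspark.sql import SparkSession, functions as F

        # Hypothetical bucket paths -- illustrative only.
        RAW_PATH = "s3://example-raw-bucket/events/"
        CURATED_PATH = "s3://example-curated-bucket/events/"

        spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

        # Ingest: read raw JSON events landed by an upstream source.
        raw = spark.read.json(RAW_PATH)

        # Cleanse: drop malformed rows, normalize types, derive a partition column.
        curated = (
            raw.dropna(subset=["event_id", "event_ts"])
               .withColumn("event_ts", F.to_timestamp("event_ts"))
               .withColumn("event_date", F.to_date("event_ts"))
               .dropDuplicates(["event_id"])
        )

        # Expose: write partitioned, Snappy-compressed Parquet for query
        # engines such as Athena or Redshift Spectrum.
        (curated.write
                .mode("overwrite")
                .option("compression", "snappy")
                .partitionBy("event_date")
                .parquet(CURATED_PATH))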

     

    REQUIRED EXPERIENCE AND SKILLS: (Experience is expected to be hands-on, and not through team exposure alone)

    • Master’s degree in Computer Science, Mathematics, Engineering, or equivalent work experience
    • Four years working with datasets with very high volumes of records or objects
    • Expert-level programming experience in PYTHON and SQL
    • Two years working with SPARK or other distributed computing frameworks (may include: HADOOP, CLOUDERA)
    • Four years with relational databases (typical examples include: POSTGRESQL, Microsoft SQL Server, MySQL, Oracle)
    • Two years with AWS services including S3, LAMBDA, REDSHIFT, ATHENA (a Lambda/Athena sketch follows this list)
    • One year working with Google Cloud Platform (GCP) services, which may include any combination of: BIGQUERY, CLOUD STORAGE, CLOUD FUNCTIONS, CLOUD COMPOSER, PUB/SUB and others (this may be via POC or academic study, though professional experience is preferred)
    • Some knowledge of AWS services: DYNAMODB, STEP FUNCTIONS
    • Experience with contemporary data file formats such as APACHE PARQUET and AVRO, preferably with compression codecs such as SNAPPY and BZIP2
    • Experience analyzing data for data quality and supporting the use of data in an enterprise setting.
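
    As a concrete, purely illustrative example of the LAMBDA and ATHENA experience listed above, a minimal serverless query step might look like the following. The database, table, and output bucket names are hypothetical, and an orchestrator such as Step Functions would typically poll for completion:

        import boto3

        athena = boto3.client("athena")

        # Hypothetical names -- illustrative only.
        DATABASE = "analytics"
        RESULTS = "s3://example-athena-results/"

        def handler(event, context):
            # Kick off an Athena scan of the curated Parquet table; a Step
            # Functions state machine (not shown) would poll
            # get_query_execution until the query completes.
            response = athena.start_query_execution(
                QueryString="SELECT event_date, COUNT(*) AS n "
                            "FROM events GROUP BY event_date",
                QueryExecutionContext={"Database": DATABASE},
                ResultConfiguration={"OutputLocation": RESULTS},
            )
            return {"query_execution_id": response["QueryExecutionId"]}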

     

    DESIRED EXPERIENCE AND SKILLS:

    • Streaming technologies (e.g.: AMAZON KINESIS, KAFKA)
    • Graph Database experience (e.g.: NEO4J, NEPTUNE)
    • Distributed SQL query engines (e.g.: ATHENA, REDSHIFT SPECTRUM, PRESTO)
    • Experience with caching and search engines (e.g.: ELASTICSEARCH, REDIS)
    • ML experience, especially with Amazon SAGEMAKER, DATAROBOT, AUTOML
    • Infrastructure-as-code (IAC) tools, including CDK, TERRAFORM, CLOUDFORMATION, CLOUD BUILD (a minimal CDK example follows)
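
    To ground the IAC item, here is a minimal AWS CDK (Python) sketch of the sort of deployment code referred to above. The stack and bucket names are hypothetical, and an equivalent could be expressed in TERRAFORM or CLOUDFORMATION:

        from aws_cdk import App, Stack, aws_s3 as s3
        from constructs import Construct

        class DataLakeStack(Stack):
            # Provisions a versioned S3 bucket like the hypothetical curated
            # bucket used in the pipeline sketch above.
            def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
                super().__init__(scope, construct_id, **kwargs)
                # Real stacks would add encryption, lifecycle rules, access
                # policies, and the Glue/Lambda resources themselves.
                s3.Bucket(self, "CuratedBucket", versioned=True)

        app = App()
        DataLakeStack(app, "data-lake")
        app.synth()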

     

    The most efficient way to reach our recruiting team is to submit your resume through the URL provided. Please also reach out through that channel if you have questions, would like more information about this job posting, or would like to know more about Flexion Inc.

     

     EQUAL EMPLOYMENT OPPORTUNITY/AFFIRMATIVE ACTION EMPLOYER

    If you require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact us at 608-478-2598 for assistance.

     

  • Qualifications

    REQUIRED SKILLS

    • Bachelor’s degree in Computer Science, Business, or Engineering.

    • Evaluating and implementing Oracle Financials, Financial Planning and Analysis, Financial Reporting, Consolidation, Intercompany, Procure to Pay, Order to Cash, and Fixed Assets for manufacturing industries; Cost module experience is a plus.

    • Defining and developing the functional scope and strategy for the design, conducting design review sessions, configuration, testing, and support for the implementation of Oracle E-Business Suite

    • Gathering requirements, writing functional specifications (RICEW), application configuration and coordinating implementation of projects and system changes for all Oracle Financial applications

    • Familiarity with integration with other related applications, such as CardConnect, Vertex, etc.

    • Managing projects using established procedures pertaining to Program Management Life Cycle (PMLC) and Software Development Life Cycle (SDLC)

    • Researching, troubleshooting, and resolving Oracle Applications product issues, working with Oracle Support (online and with technicians) and team members (business users and IT)

    • Supporting the end-to-end financial process in Oracle eBusiness Suite (EBS) R12 as well as using EBS Financials and subledger accounting

    • Designing, testing, and implementing Oracle application extensions, interfaces, and data conversions