Data Engineer – Kafka

nFolks Ltd

Dallas, TX
Full Time
Paid
  • Responsibilities

    Job Description

    Hi,

     

    DATA ENGINEER – KAFKA ROLE.

    WORK LOCATION: REMOTE

    LONG TERM

     

    RESPONSIBILITIES:

    • Function as the solution lead for building the data pipelines that support the development and enablement of Information Supply Chains within our client organizations. This could include building (1) data provisioning frameworks, (2) data integration into data warehouses, data marts, and other analytical repositories, (3) integration of analytical results into operational systems, and (4) development of data lakes and other data archival stores.
    • Leverage data integration tool components to develop efficient solutions for data management, data wrangling, data packaging, and integration. Develop the overall design and determine the division of labor across the various architectural components.
    • Deploy and customize Standard Architecture components
    • Mentor client personnel; train clients on the Integration Methodology and related supplemental solutions
    • Provide feedback and enhance intellectual property related to data management technology deployments
    • Assist in development of task plans including schedule and effort estimation

     

    SKILLS AND QUALIFICATIONS:

    • Bachelor’s Degree or foreign equivalent in Computer Science, Electrical Engineering, Mathematics, Computer Applications, Information Systems or Engineering is required
    • Experience building high-performance, scalable distributed systems
    • Experience in ETL and ELT workflow management
    • Continuous Data Movement / Streaming / Messaging:

        • Experience using Kafka as a distributed messaging system

        • Experience with the Kafka producer and consumer APIs

        • Understanding of event-based application patterns and streaming data

        • Experience with related technologies (e.g., Spark Streaming) or other message brokers such as MQ is a plus

    • Batch Data Movement – ETL (DataStage experience is preferred, but a deep knowledge of data integration concepts is more important)

    • 3+ years of data management experience

    • 3+ years' experience developing, deploying, and supporting scalable, high-performance data pipelines (leveraging distributed data-movement technologies and approaches, including but not limited to ETL and streaming ingestion and processing)

    • 2+ years' experience with the Hadoop ecosystem (HDFS/S3, Hive, Spark)

    • 3+ years' experience in software engineering, leveraging Java, Python, Scala, etc.

    • 2+ years' advanced distributed schema and SQL development skills, including partitioning for performance of ingestion and consumption patterns

    • 2+ years' experience with distributed NoSQL databases (Apache Cassandra, graph databases, document-store databases)

    • Experience in the financial services, banking, and/or insurance industries is a nice-to-have

     

    SINCERELY,

    HR MANAGER

    NFOLKS DATA SOLUTIONS LLC 

    PHONE:  425-999-4933

     EMAIL: ARUN(AT)NFOLKSDATA.COM

  • Qualifications

    Additional Information

    If interested, please send your resume to arun(at)nfolksdata.com.