Job Description
We are looking for an Azure Big Data developer responsible for the development and maintenance of a data platform for Development Data. As a member of the technical team, you will play a critical role in shaping new systems architecture and technical direction, as well as the future of existing systems and services. This role requires working closely with cross-functional teams to effectively coordinate interdependencies.
ESSENTIAL JOB FUNCTIONS:
- Production experience with large-scale SQL and NoSQL data infrastructures such as Cosmos DB, Cassandra, MongoDB, HBase, and CouchDB, and with processing frameworks such as Apache Spark.
- Application experience with SQL databases such as Azure SQL Data Warehouse, MS SQL, Oracle, PostgreSQL, etc.
- Proficient understanding of code versioning tools (such as Git, CVS, or SVN).
- Strong debugging skills with the ability to reach out and work with peers to solve complex problems
- Ability to quickly learn, adapt, and implement Open Source technologies.
- Familiarity with continuous integration (DevOps)
- Proven ability to design, implement and document high-quality code in a timely manner.
- Excellent interpersonal and communication skills, both written and oral.
EDUCATIONAL QUALIFICATIONS AND EXPERIENCE:
- Education: Bachelor's degree in Computer Science, Math, or Engineering.
- Role Specific Experience: 2+ years of experience in Big Data platform development.
CERTIFICATION REQUIREMENTS (DESIRED):
- Azure Designing and Implementing Big Data Analytics Solutions
REQUIRED SKILLS/ABILITIES:
- Experience with NoSQL databases, such as HBase, Cassandra or MongoDB.
- Proficient in designing efficient and robust ETL/ELT pipelines using Data Factory, including workflows, schedulers, and event-based triggers.
- 1+ years of experience with SQL databases (Oracle, MS SQL, PostgreSQL, etc.).
- 1+ years of hands-on experience with data lake implementations, core modernization, and data ingestion.
- 3+ years of experience with C# (Visual Studio) or core Java.
- Experience in at least one of the following programming languages: R, Scala, Python, Clojure, F#.
- 1+ years of experience in Spark systems.
- Good understanding of multi-temperature data management solutions.
- Practical knowledge of design patterns.
- In-depth knowledge of developing large distributed systems.
- Good understanding of DevOps tools and automation frameworks.
DESIRED SKILLS/ABILITIES (NOT REQUIRED BUT A PLUS):
- Experience in designing and implementing scalable, distributed systems leveraging cloud computing technologies.
- Experience with Data Integration on traditional and Hadoop environments.
- Experience with Azure Time Series Insights.
- Some knowledge of machine learning tools and libraries such as TensorFlow, Turi, H2O, Spark MLlib, and caret (R).
- Understanding of AWS data storage and integration with Azure.
- Some knowledge of graph databases.