Job Description
Designs, develops, and implements Hadoop ecosystem-based applications to support business requirements. Follows approved life-cycle methodologies, creates design documents, and performs program coding and testing. Resolves technical issues through debugging, research, and investigation.
Experience/Skills Required:
- Bachelor’s degree in Computer Science, Information Technology, or related field
- 5 years’ experience in computer programming, software development, or a related field
- 3+ years of solid Java experience
- 2+ years’ experience in the design, implementation, and support of big data solutions in Hadoop using Hive, Spark, Drill, Impala, and HBase
- Hands-on experience with Unix, Teradata, and other relational databases
- Experience with @Scale is a plus
- Strong communication and problem-solving skills
- Expert-level Java development skills, including expertise with Spark and writing Spark jobs in Scala and/or Python
- Experience migrating REST API services to GCP, AWS, or Azure, with the ability to pick up GCP quickly