overview
Required skills & experience (the 4 “must haves” to be considered)
Must Have Skills: Hive, Shell Scripting, Unix, Python, Hadoop Concepts (Sqoop, YARN, MapReduce, etc.)
Three (3) or more years of progressively complex related experience.
Strong knowledge of large-scale search applications and experience building high-volume data pipelines.
Experience building data transformation and processing solutions.
Knowledge of Hadoop architecture and HDFS commands, plus experience designing and optimizing queries against data in the HDFS environment.
What you need to know
This is a full-time role with full-time benefits included.
Green Card holders are accepted.
Job Summary
Develops large-scale data structures and pipelines to organize, collect, and standardize data that helps generate insights and address reporting needs.
Collaborates with other data teams to transform data and integrate algorithms and models into automated processes.
Uses knowledge of Hadoop architecture and HDFS commands, along with experience designing and optimizing queries, to build data pipelines.
Builds data marts and data models to support Data Science teams and other internal customers.
skills
UNIX, Shell Scripting, Hadoop, Python
years experience
3+ years
work authorizations
Green Card
schedule details
5 days/week
company
Headquartered in Nashville, we are a global provider of healthcare technology expertise and consulting services and solutions that serve both payer and provider organizations. Our organization helps bridge the critical gaps in accessible, affordable, high-quality healthcare by providing advisory consulting services, custom application development, and data solutions.