We are looking for data engineers to build and monitor high-quality infrastructure for analysing data.
What you will do:
- Write clean, scalable and testable code to be run on large Hadoop and Spark clusters
- Assemble large, complex data sets that meet functional/non-functional business requirements
- Build the infrastructure required for optimal ETL of data from a wide variety of data sources using SQL and big data technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights to analytics and data science team members
- Monitor performance and continuously improve the infrastructure
Skills you need to possess:
- Excellent analytical and problem-solving skills
- Follow industry best practices
- Technical know-how of at least one programming stack, ideally Java
- Expertise in data warehousing, relational database architectures (Oracle, SQL, DB2, Teradata), and big data storage and processing platforms (Hadoop, HBase, Hive, Spark)
- Working knowledge of cloud-based deployments on AWS, Azure or GCP
- Understanding of machine learning, NLP, information retrieval (IR) and algorithms
- Be comfortable working with Linux and Shell
- Ability to thrive in a fast-paced, quickly evolving tech start-up environment
- A minimum of 6 years of experience