Job Description
We are looking for a strong developer with solid hands-on experience in Big Data technologies who has worked end-to-end in application development.
In this role, the candidate will design and develop robust, scalable analytics processing applications. They will implement data integration pipelines that operate with maximum throughput and minimum latency. They will bring knowledge of solving big data analytics use cases and of frameworks for managing high-volume data processing and analytics.
Ideally The Candidate Will Be Responsible For
• Understanding data patterns and solving the analytics use cases
• Converting the solution into an implementation based on Spark and other big data tools
• Building data pipelines and ETL from heterogeneous sources into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc.
• Tuning and optimizing the performance of Spark data pipelines
• Managing the data pipelines and delivering enhancements to resolve technical issues
• Automating day-to-day tasks
• Providing technical support and guidance to business users
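The pipeline responsibilities above follow a common map/aggregate shape. As a minimal illustration (not part of the posting, and with hypothetical data), the following stdlib-only Python sketch shows the per-batch aggregation that a Spark Streaming job would perform at scale, e.g. via `reduceByKey`:

```python
from collections import defaultdict

# Hypothetical micro-batch of events: (user_id, bytes_transferred) pairs,
# standing in for records consumed from a source such as Kafka.
events = [
    ("u1", 120), ("u2", 300), ("u1", 80),
    ("u3", 50), ("u2", 100),
]

def transform(batch):
    """Aggregate bytes per user for one micro-batch -- the same
    map/reduce shape a Spark job expresses with reduceByKey."""
    totals = defaultdict(int)
    for user_id, nbytes in batch:
        totals[user_id] += nbytes
    return dict(totals)

print(transform(events))  # {'u1': 200, 'u2': 400, 'u3': 50}
```

In a real Spark pipeline the same logic would run distributed across partitions; the sketch only conveys the transformation shape a candidate would be expected to write.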
To excel in this role, the candidate will ideally bring:
• 3+ years of experience working in the Hadoop ecosystem and big data technologies, with 5-8 years of software development experience
• Experience reading and writing Spark transformation jobs in Java or Scala (preferably Java)
• Experience with Hadoop, Spark
• Good knowledge of the core AWS services (S3, IAM, EC2, ELB, CloudFormation)
• Good understanding of Linux and networking concepts
• Good understanding of data architecture principles, including data access patterns and data modelling
• Tertiary degree in IT or a similar subject
(ref: hirist.com)