Data Engineer
Role: Data Engineer
Compensation: 8 to 15 LPA (Based on expertise and experience)
Number of Openings: 50
Experience: 3 - 5 years
About the Role:
We are actively looking for 50 Data Engineers for a startup IT consultancy focused on Cloud, Data Engineering, Artificial Intelligence, and Strategy & Consulting. The Data Engineering team builds modern data platforms for enterprise customers, including cloud-based data infrastructure. This is a high-growth role with strong opportunities to move upstream into combined data engineering and data science positions.
Who are we?
At Univ.AI, we help professionals build great careers in all things data: data science, data engineering, data-driven platforms, and related management roles. We are also a premier international AI consultancy that places top talent on projects across our network of leading client companies. We are committed to matching the best people with the best jobs available.
Our Founding Team
• Siddharth Das - Former CEO, Reliance Jio Financial Technologies; M.S., Massachusetts Institute of Technology
• Rahul Dave - Former Faculty, Harvard University; PhD, UPenn
• Pavlos Protopapas - Director of M.S. Program in Data Science-IACS, Harvard University; PhD, UPenn
• Achuta Kadambi - Assistant Professor and Computer Vision leader, UCLA; PhD, Massachusetts Institute of Technology
• Raghu Meka - Associate Professor, UCLA; Leader in computer science theory; Researcher, Microsoft; PhD, The University of Texas at Austin
Job/Role details:
We are looking for people who can:
• Design stable, reliable, and efficient databases, ETL processes, and streaming solutions
• Build processes supporting data transformation, data structures, metadata, dependency, and workload management
• Ensure all database programs meet company and performance requirements
• Research and suggest new database products, services and protocols
• Modify databases and streaming systems according to specifications and perform tests
• Provide data management support to users
• Troubleshoot database usage issues and malfunctions
• Liaise with other developers to improve applications and establish best practices
• Gather user requirements and identify new features
• Develop technical and training manuals
• Tune, optimize, and maintain data systems for performance
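To give candidates a feel for the work, the responsibilities above center on building robust transformation steps inside ETL pipelines. A minimal sketch in plain Python is shown below (the record shape and function name are hypothetical; in practice a step like this would typically run as a PySpark job on EMR or Glue):

```python
# Hypothetical ETL transform step: normalize raw order records,
# parsing types and skipping malformed rows instead of failing the batch.
from datetime import datetime


def transform(records):
    """Parse raw string records into typed rows; drop invalid ones."""
    out = []
    for r in records:
        try:
            out.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "date": datetime.strptime(r["date"], "%Y-%m-%d").date(),
            })
        except (KeyError, ValueError):
            # A malformed row should not abort the whole batch.
            continue
    return out
```

Tolerating bad rows (rather than raising) is one common design choice in batch pipelines; the right policy depends on the customer's data-quality requirements.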
These are some mandatory skills we expect:
• Amazon Web Services (AWS) - EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
• Hands-on experience with PySpark, Spark Core, Spark SQL, Spark Streaming, MLlib, Hive, and Oozie
• Experience using Spark with Scala/Java/Python, and version control with Git/GitHub
• Data Management
• Technical expertise in data models, data systems and pipelines
• Experience in handling customer interactions
• Experience as a Database and ETL developer
You will have an extra edge if you have knowledge of the following:
• Apache NiFi
• Apache Kafka
• Oracle
• MongoDB
• Docker
• AWS certification
• Airflow