Description
The Risk Tribe is looking for a motivated data engineer to contribute to its mission of strengthening the bank by proactively delivering analytics, data and services for robust risk management. You will be part of the Data Engineering chapter, a community of highly skilled engineers with a broad toolbox that allows them to handle the technical and analytical aspects of data management, including data modelling, data access, quality assurance, data cleansing and preparation, and the maintenance and enhancement of ETL jobs.
Requirements
• An analytical mind with an eye for detail and a demonstrated history of solving complex data challenges.
• Background in data analysis and engineering, with experience working on data warehousing projects.
• Experience with data modelling techniques.
• Experience working with ETL tools such as IBM InfoSphere DataStage, Informatica, Alteryx or AWS Glue to develop data pipelines that ingest data from structured and semi-structured data sources.
• Experience with data orchestration using tools such as TWS, Airflow or similar.
• Proficient in SQL.
• Proficient in programming with Python.
• An understanding of AWS technologies such as S3, EMR, Glue, Redshift, Lambda and Kinesis is preferred.
• Educational background in computer science, mathematics or statistics.
• Articulate in English, able to communicate effectively with a global team.
Responsibilities
• Automate the ingestion of data from internal and external sources into the enterprise Data Warehouse / Data Mart.
• Gather requirements from key stakeholders and design structured tables with appropriate access levels and controls to make the data clear, understandable and accessible to data consumers.
• Create automated, orchestrated, scalable data pipelines that populate tables at the agreed frequency, enabling faster time to market and adaptation to organisational change.
• Create relevant data quality rules and documentation to enable data lineage and governance.
• Analyse existing data pipelines to improve performance and eliminate bottlenecks.
• Participate in peer reviews and squad scrum ceremonies, and be part of a continuous learning culture.