Job description
Data Architect
Location – Jaipur
Experience – 8–10 years
Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and load large volumes of data from diverse sources.
• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
• Optimize data processing and storage solutions for performance, cost-effectiveness, and scalability in cloud environments (e.g., AWS, Azure, Google Cloud Platform).
• Implement data quality checks, monitoring, and alerting mechanisms to ensure the integrity and reliability of data assets.
• Stay abreast of emerging technologies and best practices in data engineering, and recommend innovative solutions to enhance our data infrastructure.
• Mentor junior members of the team and contribute to knowledge sharing initiatives to foster a culture of continuous learning and improvement.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 8+ years of experience in data engineering roles, with a strong focus on building and optimizing data pipelines in cloud environments.
• Proficiency in Python programming and SQL, with experience in developing complex data transformations and querying large datasets.
• Hands-on experience with cloud-based data technologies such as AWS (e.g., S3, Glue, Redshift), Azure (e.g., Blob Storage, Data Factory, Databricks), or Google Cloud Platform (e.g., BigQuery, Dataflow, Dataproc).
• Solid understanding of distributed computing principles and experience with technologies such as Spark, Hadoop, or similar frameworks.
• Strong analytical and problem-solving skills, with a keen attention to detail and a commitment to delivering high-quality solutions.
• Excellent communication and collaboration skills, with the ability to work effectively in a fast-paced, cross-functional team environment.
Shaily Pareek
[email protected]