Data Engineer (Palantir Foundry and PySpark)
Place of work: Work from home
Job details
Job description, work day and responsibilities
The Data Engineer will build data pipelines and workflows that support ML, BI, analytics, and software products. This individual will work closely with data scientists, engineers, analysts, software developers, and SMEs across the business to deliver new products and services. The main objectives are to develop data pipelines and fully automated workflows. The primary platform will be Palantir Foundry.
Responsibilities:
• Develop high-quality code for the core data stack, including a data integration hub, warehouse, and pipelines.
• Build data flows for data acquisition, aggregation, and modeling, using both batch and streaming paradigms.
• Empower data scientists and data analysts to be as self-sufficient as possible by building core systems and developing reusable library code.
• Support and optimize data tools and associated cloud environments for consumption by downstream systems, data analysts, and data scientists.
• Ensure code, configuration, and other technology artifacts are delivered on the agreed schedule, and escalate any potential delays in advance.
• Collaborate with other developers as part of a Scrum team, ensuring collective team productivity.
• Participate in peer reviews and QA processes to drive higher quality.
• Ensure that all code is well documented and maintained in the source code repository.
• Strive for engineering excellence by simplifying, optimizing, and automating processes and workflows.
• Ensure their workstation and all processes and procedures follow organization standards.
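For illustration, the batch data-quality and aggregation work described above might look like the following minimal sketch. The record schema (`id`, `category`, `amount`) is hypothetical, and plain Python is used for readability; on Foundry this logic would typically live in a PySpark transform.

```python
from collections import defaultdict

def clean_and_aggregate(records):
    """Illustrative batch step: apply data-quality rules, deduplicate,
    then aggregate totals per category (hypothetical schema)."""
    seen = set()
    totals = defaultdict(float)
    for row in records:
        # Data-quality gate: skip rows missing required fields
        if row.get("id") is None or row.get("category") is None:
            continue
        # Deduplicate on the record id
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        totals[row["category"]] += row.get("amount", 0.0)
    return dict(totals)
```

For example, given two duplicate `fuel` rows, one `maintenance` row, and one row with a missing `id`, the function returns one total per category and silently drops the duplicate and the invalid row.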
Experience And Skills:
• Minimum of 8 years of professional experience as a data engineer
• Hands-on experience with Palantir Foundry is a must
• Experience with relational and dimensional database modeling (relational, Kimball, or Data Vault)
• Proven experience with aspects of the Data Pipeline (Data Sourcing, Transformations, Data Quality, etc.)
• Bachelor’s or Master’s in Computer Science, Information Systems, or an engineering field
• Preferred: experience with event-driven architectures and data streaming pub/sub technologies such as IBM MQ, Kafka, or Amazon Kinesis
• Strong capabilities in Python, SQL, and stored procedures
• Strong interpersonal, communication, problem-solving, and critical-thinking skills
• Agile/Scrum experience
• Preferred: experience in the travel, transportation, or hospitality industry, especially fleet management and vehicle maintenance
• Preferred: experience designing application data models for mobile or web applications
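As a sketch of the dimensional-modeling idea mentioned above, a Kimball-style star schema joins fact rows to dimension tables via surrogate keys. The vehicle dimension and field names below are hypothetical, and plain Python dictionaries stand in for tables purely for illustration.

```python
def star_join(facts, dim_vehicle):
    """Join fact rows to a dimension table on a surrogate key
    (hypothetical Kimball-style star schema)."""
    # Index the dimension by its surrogate key for O(1) lookups
    dim = {d["vehicle_key"]: d for d in dim_vehicle}
    # Enrich each fact row with the dimension attribute; drop facts
    # whose key has no matching dimension row (an inner join)
    return [
        {**f, "model": dim[f["vehicle_key"]]["model"]}
        for f in facts
        if f["vehicle_key"] in dim
    ]
```

In a warehouse this would be an SQL join against a conformed dimension; the snippet only shows the shape of the data relationship.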