Middle/Senior Big Data Engineer (with Palantir Foundry familiarity)

N-iX is looking for a proactive Middle/Senior Big Data Engineer to join our vibrant team!

You will play a critical role in designing, developing, and maintaining sophisticated data pipelines using Foundry tools such as Ontology, Pipeline Builder, and Code Repositories. The ideal candidate will have a robust background in cloud technologies and data architecture, along with a passion for solving complex data challenges.

Tools and skills you will use in this role: Palantir Foundry, Python, PySpark, SQL, basic TypeScript.
 

Responsibilities:

  • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
  • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
  • Develop, implement, optimize, and maintain reliable data pipelines and ETL/ELT processes that collect, process, and integrate data, ensuring timely and accurate delivery to business applications while applying data governance and security best practices to safeguard sensitive information.
  • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
  • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
  • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
  • Be eager to learn new tools and technologies.

Requirements:

  • 4+ years of experience in data engineering;
  • Strong proficiency in Python and PySpark;
  • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery);
  • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
  • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
  • Hands-on experience with database systems, both relational (e.g., PostgreSQL, MySQL) and NoSQL;
  • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
  • Understanding of data security and privacy best practices;
  • Strong mathematical, statistical, and algorithmic skills.

Published 26 March
