Senior Real-Time Data Engineer

Join a global leader in sports and lifestyle innovation, where technology drives efficiency across one of the world’s most recognizable supply chains.

We’re talking about a brand that inspires millions, shapes culture, and pushes the boundaries of technology.

The person we’re looking for will work on an Advanced Capacity Engine (ACE) within a cross-functional engineering team for a leading global brand. ACE is a one-stop hub for capacity planning across EMEA distribution centers, where fully automated, AI-powered scenarios can be tested. These scenarios determine optimal capacity consumption – such as staffing hours – to help meet operational targets across a complex and dynamic supply chain.

Ready to make your mark with a team that never stops moving forward?


Job Description

We are seeking a skilled Real-Time Data Engineer to join our team and help build robust, scalable, and efficient streaming data solutions. The ideal candidate will be experienced in real-time data processing and streaming data architectures, leveraging modern tools and technologies to ensure seamless data flow and processing.


Must-Have Qualifications:

  • Proficiency in streaming data frameworks and tools.
  • Hands-on experience with Apache Kafka or equivalent messaging systems.
  • Deep understanding of AWS Lambda functions and their use in real-time applications.
  • Expertise in Amazon Simple Queue Service (SQS) or similar queuing technologies.
  • Experience with DynamoDB or other NoSQL databases (e.g., equivalent services in Azure or GCP).
  • Strong programming skills in languages such as Python, Java, or Scala.
  • Knowledge of event-driven architectures and real-time data processing.


Nice-to-Have Qualifications:

  • Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
  • Familiarity with monitoring and logging tools for real-time applications.
  • Understanding of CI/CD pipelines and deployment strategies for streaming applications.
  • Exposure to Big Data ecosystems (e.g., Hadoop, Spark) in combination with real-time processing.
  • Excellent problem-solving skills and a team-oriented mindset.


Job Responsibilities

  • Design, develop, and maintain real-time data pipelines using streaming technologies such as Kafka.
  • Implement and manage event-driven architectures to process data in real time.
  • Develop and optimize AWS Lambda functions to ensure high-performance data processing.
  • Configure and manage queuing systems like SQS to ensure seamless data flow.
  • Work with DynamoDB or equivalent databases to support low-latency, high-throughput operations.
  • Collaborate with cross-functional teams to define data requirements and ensure efficient data integration.
  • Troubleshoot and resolve issues in real-time systems, ensuring maximum uptime and performance.
  • Stay updated on emerging technologies and recommend improvements to existing systems.
  • Ensure compliance with security and data governance policies in all streaming processes.
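For a flavor of the event-driven, queue-based work described above (illustrative only, not a requirement of the role), here is a minimal sketch using Python's standard library, with an in-memory queue standing in for Kafka or SQS; the order events and the units-per-labor-hour rate are hypothetical values chosen for the example:

```python
import queue
import threading

# In-memory stand-in for a message broker (a Kafka topic or an SQS queue).
events = queue.Queue()

def producer(n):
    """Publish n hypothetical order events to the queue."""
    for i in range(n):
        events.put({"order_id": i, "units": 10 * (i + 1)})
    events.put(None)  # sentinel: no more events

def consumer(results):
    """Process events as they arrive (event-driven, low latency)."""
    while True:
        event = events.get()
        if event is None:
            break
        # Example transformation: convert order units into staffing hours
        # (assumed rate: 100 units handled per labor hour).
        results.append(event["units"] / 100)

results = []
worker = threading.Thread(target=consumer, args=(results,))
worker.start()
producer(3)
worker.join()
print(results)  # [0.1, 0.2, 0.3]
```

In production, the same producer/consumer shape is what Kafka consumers or SQS pollers implement, with the broker providing durability, partitioning, and backpressure that an in-memory queue does not.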