Data Engineer ID47465
Important: after confirming your application on this platform, you’ll receive an email with the next step: completing your application on our internal site, LaunchPod. So keep an eye on your inbox and don’t miss this step — without it, the process can’t move forward.
Why join us
If you’re looking for a place to grow, make an impact, and work with people who care, we’d love to meet you! :)
About the role
As a Middle Data Engineer, you will play a pivotal role in evolving our patented CDI™ Platform by transforming massive streams of live data into predictive insights that safeguard global supply chains. This role offers a unique opportunity to directly influence product vision by building models and streaming architectures that address real-world disruptions. You will work in an innovative environment where your expertise in Spark and Python drives meaningful growth and delivers critical intelligence to industry leaders.
What you will do
● Become an expert on platform solutions and how they solve customer challenges within Supply Chain and related domains;
● Identify, retrieve, manipulate, relate, and exploit multiple structured and unstructured data sets from thousands of various sources, including building or generating new data sets as appropriate;
● Create methods, models, and algorithms to understand the meaning of streaming live data and translate it into insightful predictive output for customer applications and data products;
● Educate internal teams on how data science and resulting predictions can be productized for key industry verticals;
● Keep up to date on competitive solutions, products, and services.
Must haves
● 2+ years of experience in cloud-based data parsing and analysis, data manipulation and transformation, and visualization;
● Programming and scripting experience with Scala or Python;
● Experience with Apache Spark or similar frameworks;
● Working knowledge of basic SQL;
● Ability to explain technical and statistical findings to non-technical users and decision makers;
● Experience in technical consulting and conceptual solution design;
● Understanding of Hadoop and related Apache ecosystem tools for working with massive data sets;
● Bachelor’s degree;
● Upper-intermediate English level.
Nice to haves
● Experience with Java;
● Experience with Kafka or other streaming architecture frameworks;
● Domain knowledge in Supply Chain and/or transportation management and visibility technologies.
Perks and benefits
● Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps
● Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities
● A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands
● Flextime: Tailor your schedule for an optimal work-life balance, with the option to work from home or from the office — whatever makes you the happiest and most productive.
Meet our recruitment process
Asynchronous stage — an automated, self-paced track that helps us move faster and give you quicker feedback:
● Short online form to confirm basic requirements
● 30–60 minute skills assessment
● 5-minute introduction video
Synchronous stage — Live interviews
● Technical interview with our engineering team (scheduled at your convenience)
● Final interview with your future teammates
If it’s a match, you’ll get an offer!
Required skills and experience
| Skill | Minimum experience |
| SQL | 6 months |
| Scala | 6 months |
| Python | 6 months |
Required languages
| Language | Level |
| English | B2 — Upper Intermediate |