Data Engineer (up to $4000)
This is an innovative fashion intelligence platform that transforms the way global brands plan and build their collections. It combines Big Data, Artificial Intelligence, and analytics to turn massive volumes of e-commerce, social media, and runway data into actionable insights.
The system recognizes thousands of fashion attributes from images and delivers interactive dashboards and live trend feeds, helping clients optimize assortments, refine pricing strategies, and reduce overproduction.
By joining the team, you’ll contribute to building large-scale data pipelines that directly influence decision-making across the global fashion industry.
Required skills:
- Strong knowledge of Python (pandas, PySpark, SQLAlchemy).
- SQL skills (writing and optimizing complex queries, working with large datasets).
- Hands-on experience with ETL processes (Airflow, Prefect, Luigi).
- Experience with at least one major cloud provider (AWS, GCP, or Azure).
- Data Warehouse integration and optimization (Snowflake, Redshift, BigQuery).
- Understanding of Big Data concepts (Spark/Hadoop).
- Familiarity with Kafka / Kinesis or other streaming platforms.
- Docker, Git, CI/CD pipelines.
- Experience setting up monitoring (Prometheus, Grafana).
- Strong analytical mindset and the ability to translate business requirements into technical solutions.
Nice-to-have:
- Experience working with REST APIs / GraphQL.
- Cost optimization in cloud data processing/storage.
We offer:
- Opportunity to work on cutting-edge data projects in the fashion intelligence sector.
- Influence on architecture decisions and large-scale data pipelines.
- Competitive salary and flexible working hours.
- A collaborative team combining fashion domain expertise with modern technology.
- Office or remote work.
- Corporate laptop.
📩 Send us your CV — let’s build something great together!
Required languages:
English: B2 (Upper Intermediate)