Jobs Kyiv
· 188 views · 24 applications · 5d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule - you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Experience working with high-volume tables (10M+ rows); a minimal sketch follows this list.
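A minimal sketch of chunked processing for such tables, assuming a SQLAlchemy-compatible database; table and column names are hypothetical:

# Minimal sketch: aggregate a 10M+ row table in chunks so it never
# has to fit in memory at once. Table/columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host:5432/db")  # assumed DSN

totals = {}
for chunk in pd.read_sql_query(
    "SELECT user_id, amount FROM events",  # hypothetical table
    engine,
    chunksize=100_000,
):
    for user_id, amount in chunk.groupby("user_id")["amount"].sum().items():
        totals[user_id] = totals.get(user_id, 0) + amount

print(pd.Series(totals, name="total_amount").head())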
Optional skills (as a plus):
• Experience with Spark (PySpark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and Machine Learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (APIs, integration logic);
• Implement various data processing algorithms;
• Involvement in creating forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients and challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities and corporate events.
-
· 82 views · 7 applications · 11d
Data Engineer
Ukraine · Product · 2 years of experience · B2 - Upper Intermediate
Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.
At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.
About the project:
You will be part of our product team. The team is responsible for building data marts, creating JSON documents based on them, and sending them via Kafka. The new Data Platform is built in AWS.
We are looking for a motivated, result-oriented Data Engineer who can join our team in developing Data Products on our new Data Platform.
Your future responsibilities:
- Building an ETL process using AWS services (S3, Athena, AWS Glue), Airflow, PySpark, SQL, GitHub, Kafka
- Building SQL queries over data sources in PySpark
- Data processing and writing to the Data Mart Iceberg table
- Building an integration solution on the Airflow + Kafka stack
- Processing data as JSON and publishing it to Kafka (a minimal sketch follows this list)
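A minimal sketch of the Airflow-plus-Kafka step, assuming kafka-python; broker, topic, and payload are hypothetical:

# Minimal sketch: an Airflow task builds JSON rows and publishes them
# to Kafka. In the real pipeline the rows would come from the Iceberg
# data mart; everything named here is a placeholder.
import json
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from kafka import KafkaProducer

def publish_data_mart_rows():
    producer = KafkaProducer(
        bootstrap_servers="broker:9092",  # assumed broker address
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    for row in [{"customer_id": 1, "segment": "retail"}]:
        producer.send("data-mart-events", row)  # hypothetical topic
    producer.flush()

with DAG(
    dag_id="data_mart_to_kafka",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="publish", python_callable=publish_data_mart_rows)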
Your skills and experience:
- Higher education in the field of Computer Science/Engineering
- 2+ years of relevant experience in data engineering or related roles
- Knowledge of programming languages: Python, PL/SQL
- 2+ years of experience in parsing, transforming, and storing data in a Big Data environment (e.g., Hadoop, Spark)
- 1+ years of experience with AWS Lambda, Glue, Athena, and S3
- Experience with Kafka architecture, configuration and support
- Experience with database development and optimization (Oracle/PostgreSQL)
- Experience in developing Big Data pipelines
- Experience with Avro, JSON data formats
- Experience with AWS data services and infrastructure management
- Understanding the principles of working in an Agile environment
We Offer What Matters Most to You:
- Competitive Salary: We guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
- Social Package: Official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
- Comfortable Working Conditions: Possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
- Wellbeing Program: All employees have access to medical insurance from the first working day, as well as consultations with a psychologist, nutritionist, or lawyer. We also offer discount programs for sports and purchases, family days for children and adults, and in-office massages
- Learning and Development: Access to over 130 online training resources, corporate training programs in CX, Data, IT Security, Leadership, Agile, as well as a corporate library and English lessons
- Great Team: Our colleagues form a community where curiosity, talent, and innovation are welcomed. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
- Career Opportunities: We encourage advancement within the bank across different functions
- Innovations and Technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, VictoriaMetrics, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MSSQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
- Support Program for Defenders: We maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and are developing the Bankβs veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes
Why Raiffeisen Bank?
- People are our main value. We support, acknowledge, educate, and actively involve them in driving change
- One of the largest IT product teams among the countryβs banks
- Recognized as the best employer by EY, Forbes, Randstad, FranklinCovey, and Delo.UA
- One of the largest lenders to the economy and agricultural business among private banks
- The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans)
- One of the largest taxpayers in Ukraine; we paid 6.6 billion UAH in taxes in 2023
Opportunities for Everyone:
- Raiffeisen is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
- We support the principles of diversity, equality and inclusiveness
- We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
- We cooperate with students and older people, creating conditions for growth at any career stage
Want to learn more? Follow us on social media:
Facebook, Instagram, LinkedIn
-
· 178 views · 34 applications · 11d
Data Engineer
Countries of Europe or Ukraine · Product · 3 years of experience · B2 - Upper Intermediate · Ukrainian Product 🇺🇦
Headway Inc is a global tech company, revolutionizing lifelong learning by creating digital products for over 150 million users worldwide. Our mission is to help people grow. We're proud to be ranked 4th among the World's Top EdTech Companies by TIME magazine. We believe lifelong learning should be accessible, personalized, and impactful to each individual. That's how we change the world and why we bring together exceptional minds.
The core of our achievements is our team. We believe in people and shared values SELECT. That's why, together with Yuliya Savchuk, Engineering Manager of the MIT team, we're looking for a Data Engineer to join our team of superstars transforming the EdTech industry.
About the role:
With business scaling, we see the need to strengthen the team that is working on building a data analytics platform for Headway Inc. We need to ensure that every business area and our products have reliable data to drive deep insights and innovation.
Data is at the core of our company. You will build and maintain a reliable, efficient, and scalable data infrastructure that enables Headway Inc to leverage data as a strategic asset for informed decision-making, driving innovation, and achieving business goals.
What awaits you on our team:
- Have the opportunity to join the team of a global EdTech company that creates socially impactful products for the international market.
- Have the opportunity to collaborate with a large team of analysts and marketers to create solutions that have a direct and tangible impact on their work.
- You'll be able to use a wide variety of modern tools and independently decide which technologies are most appropriate to apply.
- We work in an atmosphere of freedom and responsibility.
- Your decisions and ideas will actively impact the business. You'll own the full development lifecycle, from solution design through to user feedback and iteration.
What will you do:
At MIT, the Engineering team develops data platforms and automation tools that help teams work more efficiently and make informed marketing decisions. We create solutions that allow us to analyze and utilize data for effective decision-making in marketing strategies, improving results and increasing return on investment.
- Communicate and collaborate with the analytics team, being responsible for delivering data to the analytical database for visualization.
- Create and maintain optimal and scalable pipeline architecture. Develop new pipelines and refine existing ones.
- Develop ETL/ELT processes and Data Lake architecture (a sketch of a landing step follows this list).
- Research and collect large, complex datasets.
- Identify, design, and implement internal process improvements.
- Continuously learn, develop, and utilize cutting-edge technologies.
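One possible shape of a Data Lake landing step, sketched with pyarrow; paths and columns are hypothetical:

# Minimal sketch: land events as partitioned Parquet, a typical first
# layer of a Data Lake. A production path would be an s3:// or gs:// URI.
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

events = pd.DataFrame(
    {
        "user_id": [1, 2],
        "event": ["subscribe", "open_app"],
        "event_date": ["2024-01-01", "2024-01-02"],
    }
)

pq.write_to_dataset(
    pa.Table.from_pandas(events),
    root_path="/tmp/lake/events",   # hypothetical local path
    partition_cols=["event_date"],  # one folder per day
)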
What do you need to join us:
- Experience in production development and knowledge of any programming language, including Python, Golang, Java, etc.
- Understanding of Data Lakes, Data Warehousing, OLAP/OLTP approaches, and ETL/ELT processes.
- Proficiency in SQL and experience working with databases.
- Workflow orchestration experience.
- Problem-solving skills and a passion for creating efficient, well-tested, and maintainable solutions.
- Alignment with the values of our team (SELECT).
Good to have:
- Experience with GCP Data Services and Airflow.
- Experience with CI/CD in Data Engineering.
- Knowledge of Data Governance and Security principles.
- Experience optimizing data pipeline performance.
- Experience in MarTech or AdTech platforms, like marketing campaign orchestration.
What do we offer:
- Work within an ambitious team on a socially impactful education product.
- An office with a reliable shelter, generators, satellite internet, and other amenities.
- Access to our corporate knowledge base and professional communities.
- Personal development plan.
- Partial compensation for English language learning, external training, and courses.
- Medical insurance coverage with a $70 employee contribution and full sick leave compensation.
- Company doctor and massage in the office.
- Sports activities: running, yoga, boxing, and more.
- Corporate holidays: we go on a paid week-long holiday to rest and recharge twice a year.
- Supporting initiatives that help Ukraine. Find out more about our projects here.
Working schedule:
This is a full-time position with a hybrid remote option. It means that you can decide for yourself: whether you want to work from the office, remotely, or combine these options.
Are you interested?
Send your CV!
-
· 35 views · 1 application · 25d
Senior Data Streaming Engineer
Hybrid Remote · Ukraine (Kyiv, Lviv) · 4 years of experience · B2 - Upper Intermediate
🔹Who we are!
At Levi9, we are passionate about what we do. We love our work, and together in a team, we are smarter and stronger. We are looking for skilled team players who make change happen. Are you one of these players?
🔹About the role
As a Data Streaming Engineer in the customer team, you will leverage millions of daily connections with readers and viewers across the online platforms as a competitive advantage to deliver reliable, scalable streaming solutions. You will collaborate closely with analysts, data scientists and developers across all departments throughout the entire customer organisation. You will design and build cloud-based data pipelines, both batch and streaming, and their underlying infrastructure. In short: you live up to our principle, You Build It, You Run It.
You will be working closely with a tech stack that includes Scala, Kafka, Kubernetes, Kafka Streams, and Snowflake.
🔹Responsibilities
- Deliver reliable, scalable streaming solutions
- Collaborate closely with analysts, data scientists and developers across all departments throughout the entire organisation
- Design and build cloud-based data pipelines, both batch and streaming, and their underlying infrastructure
- You Build It, You Run It.
- Build a robust real-time customer profile by aggregating readers' online behaviour, and use this profile to recommend other articles on the customers' online platforms (a Python sketch of the aggregation idea follows this list).
- Co-develop and cooperate on streaming architectures from inception and design, through deployment, operation and refinement to meet the needs of millions of real-time interactions.
- Closely collaborate with business stakeholders, data scientists and analysts in our daily work, data engineering guild and communities of practice.
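The team's stack centres on Scala and Kafka Streams; purely as an illustration of the profile-aggregation idea, a Python sketch with kafka-python and hypothetical topics:

# Minimal sketch: consume click events and fold them into in-memory
# customer profiles. Topic, broker, and event shape are hypothetical.
import json
from collections import defaultdict

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "article-clicks",                 # hypothetical topic
    bootstrap_servers="broker:9092",  # assumed broker address
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

profiles = defaultdict(lambda: defaultdict(int))  # user -> category -> count

for message in consumer:
    event = message.value  # e.g. {"user_id": "u1", "category": "sports"}
    profiles[event["user_id"]][event["category"]] += 1
    # A real system would persist profiles and recommend articles from
    # each user's most-read categories.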
🔹Requirements
- Experience implementing highly available and scalable big data solutions
- In-depth knowledge of at least one cloud provider, preferably AWS
- Proficiency in languages such as Scala, Python, or shell scripting, specifically in the context of streaming data workflows
- Extensive experience with streaming technologies, so you can challenge the existing setup.
- Experience with Infrastructure as Code and CI/CD pipelines
- Full understanding of modern software engineering best practices
- Experience with Domain-driven design
- DevOps mindset
- You see the value in a team and enjoy working together with others, also with techniques like pair programming
- You either have an AWS certification or are willing to achieve AWS certification within 6 months (minimum: AWS Certified Associate)
- We welcome candidates living in Ukraine or Europe who are willing and able to travel for business trips to Belgium and the Netherlands.
🔹Interview stages
- HR interview
- Technical interview in English
- Test assignment
- Final interview
🔹9 reasons to join us:
- Today we're working with the technology of tomorrow.
- We don't wait for a change. We are the change.
- We're experts in creating experts (Levi9 academy, Lead9 program for leaders).
- No micromanagement. We are free birds with a clear understanding of what high performance is!
- Learning at Levi9 never stops (unlimited Udemy for Business, meetups, English & German courses, professional trainings).
- Here you can train your body and mind.
- We've gathered the best locations - comfortable, cosy and pet-friendly offices in Kyiv (5 minutes from Olimpiyska metro station) and Lviv (overlooking the Stryiskyi Park) with regular offline internal events
- We have a master's degree in work-life balance.
- We are actively supporting Ukraine with constant donations and volunteering
🔹Simple step to get this job
Click the APPLY NOW button and leave your contacts!
-
· 60 views · 2 applications · 24d
Data Engineer (NLP-Focused)
Hybrid Remote · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate
We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.
What you will do
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
- Implement NLP/LLM-specific data processing: text cleaning and normalization, such as filtering toxic content, de-duplication, de-noising, and detecting and removing personal data (a minimal sketch follows this list).
- Form task-specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
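A minimal sketch of the cleaning and de-duplication step, using only the Python standard library; the PII pattern below is a simplified stand-in:

# Minimal sketch: normalize text, mask a simple PII pattern, and drop
# exact duplicates. Real pipelines use far richer filters.
import hashlib
import re
import unicodedata

PHONE_RE = re.compile(r"\+?\d[\d\s\-()]{8,}\d")  # crude phone-number stand-in

def clean(text: str) -> str:
    text = unicodedata.normalize("NFC", text)  # canonical Unicode form
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return PHONE_RE.sub("[PHONE]", text)       # mask personal data

def deduplicate(docs: list[str]) -> list[str]:
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hello   world", "Hello world", "Call me at +380 44 123 45 67"]
print(deduplicate([clean(d) for d in docs]))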
Qualifications and experience needed
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelorβs or Masterβs degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our projectβs focus. Understanding of FineWeb2 or a similar processing pipeline approach.
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
A plus would be
- Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
- Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents (a small collection sketch follows this list).
- CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
- Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
- Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
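As a flavour of the collection side, a tiny sketch with requests and Beautiful Soup against a placeholder URL:

# Minimal sketch: fetch a page, extract paragraph text, and apply
# simple rate limiting. URL and selector are placeholders.
import time

import requests
from bs4 import BeautifulSoup

def fetch_paragraphs(url: str) -> list[str]:
    response = requests.get(
        url, timeout=10, headers={"User-Agent": "data-pipeline-bot"}
    )
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [p.get_text(" ", strip=True) for p in soup.find_all("p")]

for url in ["https://example.com/articles/1"]:  # placeholder URL
    print(fetch_paragraphs(url)[:3])
    time.sleep(1)  # be polite between requests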
What we offer
- Office or remote - it's up to you. You can work from anywhere, and we will arrange your workplace.
- Remote onboarding.
- Performance bonuses.
- We train employees with the opportunity to learn through the company's library, internal resources, and programs from partners.
- Health and life insurance.
- Wellbeing program and corporate psychologist.
- Reimbursement of expenses for Kyivstar mobile communication.
-
· 15 views · 1 application · 20d
PHP Developer / Data Engineer
Hybrid Remote · Poland, Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · Ukrainian Product 🇺🇦
Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.
Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.
Requirements:
- Design and develop scalable backend services using PHP 7/8.
- Strong understanding of OOP concepts, design patterns, and clean code principles.
- Extensive experience in MySQL, with expertise in database design, query optimization, and indexing.
- Experience working with NoSQL databases (e.g., Redis).
- Proven experience working on high-load projects.
- Understanding of ETL processes and data integration.
- Experience working with ClickHouse.
- Strong experience with API development.
- Strong knowledge of Symfony 6+ and Yii2.
- Experience with RabbitMQ.
Nice to Have:
- AWS services
- Payment API (Stripe, SolidGate etc.)
- Docker, GitLab CI
- Python
Responsibilities:
- Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
- API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
- Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
- Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy (a small validation sketch follows this list).
- Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.
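The role centres on PHP; as a language-neutral illustration of row-level validation, a short Python sketch with hypothetical fields:

# Minimal sketch: validate incoming rows before loading, collecting
# errors instead of failing the whole batch. Field names are hypothetical.
def validate_row(row: dict) -> list[str]:
    errors = []
    if not row.get("email") or "@" not in row["email"]:
        errors.append("invalid email")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

rows = [
    {"email": "user@example.com", "amount": 10.5},
    {"email": "broken", "amount": -3},
]
valid = [r for r in rows if not validate_row(r)]
rejected = [(r, validate_row(r)) for r in rows if validate_row(r)]
print(len(valid), "valid;", rejected)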
What we offer:
For personal growth:
- A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
- An educational allowance to ensure that your skills stay sharp;
- English and German classes to strengthen your capabilities and widen your knowledge.
For comfort:
- A great environment where you'll work with true professionals and amazing colleagues whom you'll quickly call friends;
- The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.
For health:
- Medical insurance;
- Twenty-one days of paid sick leave per year;
- Healthy fruit snacks full of vitamins to keep you energized
For leisure:
- Twenty-one days of paid vacation per year;
- Fun times at our frequent team-building activities.
-
· 63 views · 2 applications · 4d
Senior Data (Analytics) Engineer
Ukraine · 4 years of experience · B2 - Upper Intermediate
About the project:
Our customer is the European online car market with over 30 million monthly users and a market presence in 18 countries. The company is now merging with a similar company in Canada and needs support through this transition. As a Data & Analytics Engineer, you will play a pivotal role in shaping the future of online car markets and enhancing the user experience for millions of car buyers and sellers.
Requirements:
- 5+ years of experience in Data Engineering or Analytics Engineering roles
- Strong experience building and maintaining pipelines in BigQuery, Athena, Glue, and Airflow
- Advanced SQL skills and experience designing dimensional models (star/snowflake)
- Experience with AWS Cloud
- Solid Python skills, especially for data processing and workflow orchestration
- Familiarity with data quality tools like Great Expectations
- Understanding of data governance, privacy, and security principles
- Experience working with large datasets and optimizing performance
- Proactive problem solver who enjoys building scalable, reliable solutions
- English: Upper-Intermediate or higher, plus great communication skills
Responsibilities:
- Collaborate with analysts, engineers, and stakeholders to understand data needs and deliver solutions
- Build and maintain robust data pipelines that deliver clean and timely data
- Organize and transform raw data into well-structured, scalable models
- Ensure data quality and consistency through validation frameworks like Great Expectations (a sketch follows this list)
- Work with cloud-based tools like Athena and Glue to manage datasets across different domains
- Help set and enforce data governance, security, and privacy standards
- Continuously improve the performance and reliability of data workflows
- Support the integration of modern cloud tools into the broader data platform
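One way such a check can look, sketched with the classic Great Expectations Pandas API; the columns are hypothetical:

# Minimal sketch: wrap a DataFrame and declare expectations; failures
# surface as structured results instead of silent bad data.
import great_expectations as ge
import pandas as pd

listings = pd.DataFrame(
    {"listing_id": [1, 2, 3], "price_eur": [9500.0, 12000.0, None]}
)

df = ge.from_pandas(listings)
df.expect_column_values_to_not_be_null("price_eur")
df.expect_column_values_to_be_between("price_eur", min_value=0, max_value=500_000)

result = df.validate()
print(result.success)  # False here: one listing has no price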
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
-
· 55 views · 1 application · 18d
Senior Data Engineer
Hybrid Remote · Ukraine (Kyiv, Lviv) · Product · 3 years of experience · A2 - Elementary
Solidgate is a payment processing and orchestration platform that helps thousands of businesses to accept payments online. We develop cutting-edge fintech solutions to facilitate seamless payment processing for merchants across 150+ countries, spanning Europe to LATAM, the USA to Asia. We are proud to be a part of the history of every company we work with - our infrastructure gives a quick scale to new markets and maximizes revenue.
Key facts:
- Offices in Ukraine, Poland, and Cyprus
- 250+ team members
- 200+ clients went global (Ukraine, US, EU)
- Visa and Mastercard Principal Membership
- EMI license in the EU
Solidgate is part of Endeavor - a global community of the world's most impactful entrepreneurs. We're proud to be the first payment orchestrator from Europe to join, and to share our expertise within a network of outstanding global companies.
Here, we're building a strong engineering culture: designing architectures trusted by global leaders. Our engineers don't just maintain systems - they create them. We believe the payments world is shaped by people who think big, act responsibly, and approach challenges with curiosity and drive. That's exactly the kind of teammate we want on our team.
We're now looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.
You'll work side-by-side with the Senior Engineering Manager of the Platform stream and a team of four data enthusiasts to build the architecture that will become the foundation for all our data products.
Explore our technology stack ➡️ https://solidgate-tech.github.io/
What you'll own:
✔ Build the Data Platform from scratch (architecture, design, implementation, scaling)
✔ Implement a Data Lake approach and Layered Architecture (bronze → silver data layers); a minimal sketch follows this list
✔ Integrate streaming processing into data engineering practices
✔ Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability
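A minimal sketch of a bronze-to-silver hop, using awswrangler; bucket and column names are hypothetical:

# Minimal sketch: read raw (bronze) payment events, clean and conform
# them, and write the silver layer. Bucket and columns are hypothetical.
import awswrangler as wr

bronze = wr.s3.read_parquet("s3://example-lake/bronze/payments/")

silver = (
    bronze.drop_duplicates(subset=["payment_id"])  # de-duplicate events
    .dropna(subset=["amount", "currency"])         # drop broken rows
    .assign(amount=lambda df: df["amount"].astype("float64"))
)

wr.s3.to_parquet(
    df=silver,
    path="s3://example-lake/silver/payments/",
    dataset=True,
    partition_cols=["currency"],
)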
What you need to join us:
✔ 3+ years of commercial experience as a Data Engineer
✔ Strong hands-on experience building data solutions in Python
✔ Confident SQL skills
✔ Experience with Airflow or similar tools
✔ Experience building and running a DWH (BigQuery / Snowflake / Redshift)
✔ Expertise in streaming stacks (Kafka / AWS Kinesis)
✔ Experience with AWS infrastructure: S3, Glue, Athena
✔ High attention to detail
✔ Proactive, self-driven mindset
✔ Continuous-learning mentality
✔ Strong delivery focus and ownership in a changing environment
Nice to have:
✔ Background as an analyst or Python developer
✔ Experience with DBT, Grafana, Docker, LakeHouse approaches
Competitive corporate benefits:
- more than 30 days off during the year (20 working days of vacation + days off for national holidays)
- health insurance and corporate doctor
- free snacks, breakfasts, and lunches in the office
- full coverage of professional training (courses, conferences, certifications)
- yearly performance review
- sports compensation
- competitive salary
- Apple equipment
📩 Ready to become a part of the team? Then cast aside all doubts and click "apply".
-
· 80 views · 11 applications · 13d
Senior Data Engineer
Countries of Europe or Ukraine · Product · 4 years of experience · A2 - Elementary
Our Mission and Vision
At Solidgate, our mission is clear: to empower outstanding entrepreneurs to build exceptional internet companies. We exist to fuel the builders - the ones shaping the digital economy - with the financial infrastructure they deserve. We're on an ambitious journey to become the #1 payments orchestration platform in the world.
Solidgate is part of Endeavor - a global community of the world's most impactful entrepreneurs. We're proud to be the first payment orchestrator from Europe to join, and to share our expertise within a network of outstanding global companies.
As our processing volume is skyrocketing, the number of engineering teams is growing too - we're already at 14. This gives our Data Engineering function a whole new scale of challenges: not just building data-driven solutions, but creating products and infrastructure that empower other teams to build them autonomously.
That's why we're launching the Data Platform direction and looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.
You can check out the overall tech stack of the product here: https://solidgate-tech.github.io/
What you'll own:
✔ Build the Data Platform from scratch (architecture, design, implementation, scaling)
✔ Implement a Data Lake approach and Layered Architecture (bronze → silver data layers)
✔ Integrate streaming processing into data engineering practices
✔ Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability
What you need to join us:
✔ 3+ years of commercial experience as a Data Engineer
✔ Strong hands-on experience building data solutions in Python
✔ Confident SQL skills
✔ Experience with Airflow or similar tools
✔ Experience building and running a DWH (BigQuery / Snowflake / Redshift)
✔ Expertise in streaming stacks (Kafka / AWS Kinesis)
✔ Experience with AWS infrastructure: S3, Glue, Athena
✔ High attention to detail
✔ Proactive, self-driven mindset
✔ Continuous-learning mentality
✔ Strong delivery focus and ownership in a changing environment
Nice to have:
✔ Background as an analyst or Python developer
✔ Experience with DBT, Grafana, Docker, LakeHouse approaches
Why Join Solidgate?
High-impact role. You're not inheriting a perfect system - you're building one.
Great product. We've built a fintech powerhouse that scales fast. Solidgate isn't just an orchestration player - it's the financial infrastructure for modern Internet businesses. From subscriptions to chargeback management, fraud prevention, and indirect tax - we've got it covered.
Massive growth opportunity. Solidgate is scaling rapidly - this role will be a career-defining move.
Top-tier tech team. Work alongside our driving force - a proven, results-driven engineering team that delivers. We're also early adopters of cutting-edge fraud and chargeback prevention technologies from the Schemes.
Modern engineering culture. TBDs, code reviews, solid testing practices, metrics, alerts, and fully automated CI/CD.
Competitive corporate benefits:
- more than 30 days off during the year (20 working days of vacation + days off for national holidays)
- health insurance and corporate doctor
- free snacks, breakfasts, and lunches in the office
- full coverage of professional training (courses, conferences, certifications)
- yearly performance review
- sports compensation
- competitive salary
- Apple equipment
📩 Ready to become a part of the team? Then cast aside all doubts and click "apply".
-
· 20 views · 0 applications · 12d
Middle/Senior Data Engineer (IRC274051)
Hybrid Remote · Ukraine (Vinnytsia, Ivano-Frankivsk, Kyiv + 7 more cities) · 3 years of experience · B2 - Upper Intermediate
Job Description
- 3+ years of intermediate to advanced SQL
- 3+ years of Python development (intermediate level is fine: Pandas, Numpy, boto3, seaborn, requests, unittest)
- Experience building ETLs, preferably in Python
- Experience with data tools (ex.: Airflow, Grafana, AWS Glue, AWS Athena)
- Excellent understanding of database design
- Cloud experience (AWS S3, Lambda, or alternatives)
- Agile SDLC knowledge
- Detail-oriented
- Data-focused
- Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
- An ability and interest in working in a fast-paced and rapidly changing environment
- Be self-driven and show ability to deliver on ambiguous projects with incomplete or dirty data
Would be a plus:
- Understanding of basic SVOD store purchase workflows
- Background in supporting data scientists in conducting data analysis / modelling to support business decision making
- Experience in supervising subordinate staff
Job Responsibilities
- Data analysis, auditing, statistical analysis
- ETL buildouts for data reconciliation
- Creation of automatically-running audit tools
- Interactive log auditing to look for potential data problems
- Help in troubleshooting customer support team cases
- Troubleshooting and analyzing subscriber reporting issues:
Answer management questions related to subscriber count trends
App purchase workflow issues
Audit/reconcile store subscriptions vs. the user DB (a minimal reconciliation sketch follows)
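A minimal sketch of that reconciliation in Pandas, with hypothetical data standing in for the real store and user DB extracts:

# Minimal sketch: compare store-reported subscriptions against the
# user DB and surface mismatches for auditing.
import pandas as pd

store = pd.DataFrame({"user_id": [1, 2, 3], "status": ["active"] * 3})
userdb = pd.DataFrame({"user_id": [1, 2], "status": ["active", "active"]})

merged = store.merge(
    userdb, on="user_id", how="outer", suffixes=("_store", "_db"), indicator=True
)
mismatches = merged[merged["_merge"] != "both"]
print(mismatches)  # rows present in only one system -> audit candidates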
Department/Project Description
The customer is one of the biggest companies in the home entertainment consumer electronics market, striving to provide clients with high-quality products and services.
This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer's SVOD portfolio.
-
· 72 views · 8 applications · 6d
Data Engineer
Hybrid Remote · Ukraine (Kyiv, Lutsk) · Product · 1 year of experience · B1 - Intermediate · Ukrainian Product 🇺🇦
Jooble is a global technology company. Our main product, jooble.org, is an international job search website in 67 countries that aggregates thousands of job openings from various sources on a single page. We are ranked among the TOP-10 most visited websites in the Jobs and Employment segment worldwide. Since 2006, we've grown from a small startup founded by two students into a major player in the online recruitment market with 300+ professionals. Where others see challenges, we create opportunities.
What You'll Be Doing
- Design & Build Pipelines: Develop and maintain robust, scalable ETL/ELT pipelines, moving data from diverse sources into our data warehouse.
- Ensure Data Quality & Observability: Implement a comprehensive data observability strategy, including automated quality checks, monitoring, and lineage tracking to ensure data is accurate and trustworthy (a small sketch follows this list).
- Optimize & Automate: Write clean, efficient code to automate data processing and continuously optimize our data storage strategies and query performance.
- Govern & Document: Contribute to our data governance practices and maintain clear documentation for data processes, models, and architecture in our data catalog.
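One possible shape of an automated freshness check, sketched with psycopg2 and a hypothetical table:

# Minimal sketch: fail loudly when a table stops receiving fresh rows.
# DSN and table name are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=dwh user=etl host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT COUNT(*) FROM job_postings "  # hypothetical table
        "WHERE loaded_at > NOW() - INTERVAL '1 day'"
    )
    fresh_rows = cur.fetchone()[0]

if fresh_rows == 0:
    # A real pipeline would page on-call or fail the orchestrator task.
    raise RuntimeError("job_postings received no rows in the last 24h")
print(f"freshness check passed: {fresh_rows} new rows")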
What We're Looking For
Core Requirements
- Experience: 1-3 years of hands-on experience in a data engineering role.
- Ukrainian proficiency level: Upper Intermediate and higher (spoken and written).
- Core Languages: Strong proficiency in SQL (including complex queries and optimization) and Python for data processing.
- Databases: Practical experience with relational databases, specifically PostgreSQL and MSSQL.
- ETL/ELT: Proven experience designing and building pipelines using modern data orchestrators like Airflow or Dagster.
- Data Modeling: A solid understanding of data warehousing concepts and data modeling techniques (e.g., dimensional modeling).
Bonus Points (Strongly Desired)
- Streaming Data: Hands-on experience with streaming technologies like Kafka, Debezium, or message queues like RabbitMQ.
- Specialized Databases: Experience with MPP databases (Greenplum/CloudberryDB) or columnar stores (ClickHouse).
- Modern Data Stack: Familiarity with tools like dbt, Docker.
- Basic knowledge of a cloud platform like AWS, GCP, or Azure.
- A demonstrable interest in the fields of AI and Machine Learning.
Our Tech Stack Includes
- Observability & BI: DataHub, Grafana, Metabase
- Languages: Python, SQL
- Databases: PostgreSQL, MSSQL, ClickHouse, Greenplum/CloudberryDB
- Orchestration: Airflow, Dagster
- Streaming & Messaging: Kafka, Debezium, RabbitMQ
Why You'll Love Working at Jooble
Flexible Work Environment
We offer a hybrid format in Kyiv and remote options worldwide. Start your 8-hour workday between 8:00 and 10:00 AM Kyiv time, ensuring collaboration across our team in 20+ countries. We provide all the equipment you need for productivity and comfort, whether remotely or in the office.
Growth and Development
We invest in your future with an individual education budget covering soft and hard skills. Career opportunities and regular performance reviews support your growth from entry-level to leadership roles.
Healthcare and Well-being
We offer health insurance after three months, plus financial support for medical expenses abroad. Our mental health benefits include access to psychological consultations and 50% reimbursement for therapy sessions.
Time Off
Enjoy 24 vacation days, 20 paid sick days, 4 extra sick days without a medical certificate, and 6 recharge days. Take the time you need and return refreshed!
Our culture
We embrace a product mindset, continuously innovating and improving our services to meet the needs of our users. We cultivate a workplace that values support, respect, honesty, and the free exchange of ideas. Experience an environment where "stronger together" is more than just a phrase - it's how we operate, fostering creativity and growth.
Supporting Ukraine
Since the beginning of the war, Jooble has been actively supporting and organizing fundraisers to aid our country. Many of our colleagues are bravely serving on the front lines or volunteering, and we couldn't be prouder of their dedication and efforts. We are committed to supporting our nation in any way we can.
Ready to Make an Impact? If you're passionate about this opportunity and want to join our team, please send us your CV. Our recruiter will be in touch with you soon.
-
· 36 views · 2 applications · 5d
Data Engineer
Office Work · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · MilTech 🪖
Key Responsibilities
- Design, develop, and maintain scalable data models to support analytics and reporting needs
- Build, monitor, and optimize ETL/ELT pipelines using best practices in data transformation and automation
- Collaborate with BI and analytics teams on data requirements
- Ensure data integrity and consistency via automated data tests
- Proactively suggest data improvements and reporting ideas
Required Qualifications
- 3+ years of experience in analytics engineering, data engineering, or a related field
- Advanced proficiency in SQL, with experience in writing efficient data modeling queries
- Hands-on experience with modern data transformation frameworks (e.g. dbt, Dataform, or similar)
- Strong understanding of data warehousing principles and data architecture best practices
- Familiarity with ETL/ELT methodologies and workflow orchestration tools
- Experience working with cloud-based data warehouses and databases (Snowflake, PostgreSQL, Redshift, or similar)
- Knowledge of BI tools (Power BI, Tableau, Looker, or similar)
- Basic programming skills in Python or another scripting language for automation
- Solid understanding of data governance, lineage, and security best practices
- Experience with Git-based version control and CI/CD workflows for data transformations
Preferred Qualifications
- Deep understanding of data warehouse concepts and database maintenance
- Background in business intelligence, analytics, or software engineering
- Self-motivated and proactive, with the ability to independently uncover and solve problems
-
· 66 views · 1 application · 5d
Junior Database Engineer
Hybrid Remote · Ukraine (Kyiv) · Product · 1 year of experience · B1 - Intermediate
As a Junior Database Engineer, you will be responsible for maintaining and optimizing modern database systems. Your role will include backup management, replication monitoring, query optimization, and close collaboration with developers and DevOps engineers. This is an excellent opportunity for someone with a strong theoretical foundation in databases who wants to gain practical experience in real-world, high-performance environments.
Key Responsibilities
- Configure, monitor, and test backups; perform recovery checks.
- Monitor database replication and troubleshoot basic replication errors.
- Collect and analyze slow query statistics; participate in query optimization.
- Monitor database performance and apply necessary adjustments.
- Install and configure components of database architecture.
- Collaborate with developers and DevOps engineers to solve cross-team tasks.
- Participate in testing and deployment of new solutions.
- Write and debug scripts in Bash or Python to automate operations (a small backup-check sketch follows this list).
- Contribute to technical documentation.
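A minimal sketch of one such automation, assuming pg_dump is on PATH; the path and database name are hypothetical:

# Minimal sketch: run a PostgreSQL dump and fail loudly if it is empty,
# a cheap first-line backup check before real restore testing.
import os
import subprocess
from datetime import date

backup_path = f"/backups/mydb-{date.today()}.sql"  # hypothetical path

result = subprocess.run(
    ["pg_dump", "--dbname=mydb", f"--file={backup_path}"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    raise RuntimeError(f"pg_dump failed: {result.stderr.strip()}")
if os.path.getsize(backup_path) == 0:
    raise RuntimeError("backup file is empty - investigate before trusting it")
print(f"backup written to {backup_path}")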
Requirements
- Understanding of modern DBMS architecture (PostgreSQL, MySQL, MongoDB, etc.).
- Knowledge of relational data models and normalization principles.
- Understanding of ACID transaction properties.
- Experience installing and configuring at least one DBMS.
- Skills in writing SQL queries.
- Familiarity with monitoring systems (Prometheus, Grafana, PMM, etc.).
- Experience with Linux (Ubuntu/Debian).
- Ability to write simple automation scripts (Shell or Python).
- Strong sense of responsibility and attention to detail.
Nice-to-Have
- Technical degree or final-year student (IT, Cybersecurity, Mathematics, Informatics, etc.).
- Experience with high-load projects.
- Familiarity with Docker.
- Knowledge of replication (master-replica setups, PostgreSQL WAL, MySQL GTID, MongoDB replica sets).
- Understanding of indexing and its impact on performance.
- Familiarity with cloud database services (AWS RDS, Azure Database, GCP Cloud SQL).
What We Offer
- Competitive salary based on experience and skills.
- Flexible working schedule (remote/hybrid).
- 17 paid vacation days and 14 paid sick days.
- Mentorship and clear career growth path towards Senior Database Engineer.
- Access to courses, certifications, and conferences.
- Collaborative team and knowledge-sharing environment.
- International projects with modern tech stack.
· 22 views · 1 application · 4d
Senior Data Engineer (Python, Fast API)
Hybrid Remote · Ukraine (Kyiv, Lviv) · 5 years of experience · B2 - Upper Intermediate
Who we are!
At Levi9, we are passionate about what we do. We love our work and together in a team, we are smarter and stronger. We are looking for skilled team players who make change happen. Are you one of these players?
About the project
Our client is a leading media company in Western Europe, delivering high-quality content across various platforms, including newspapers, magazines, radio, TV, websites, apps, and podcasts. Their brands reach millions of people daily, shaping the media landscape with independent and trusted journalism.
About the job
You'll be working on a personalisation engine that serves all customers of our client, across all media offerings. The team is cross-functional, with data engineers, ML engineers, and data scientists.
Responsibilities
- Maintain and extend our recommendation back-end.
- Support operational excellence through practices like code review and pair programming.
The entire team is responsible for the operations of our services. This includes actively monitoring different applications and their infrastructure, as well as intervening to solve operational problems whenever they arise.
Your key skills:
- analyze and troubleshoot technical issues
- communicate about technical and functional requirements with people outside of the team
Required qualifications:
- a positive and constructive mindset, giving feedback accordingly
- high standards for the quality of the work you deliver
- a degree in computer science, software engineering, a related field, or relevant prior experience
- 5+ years of experience across the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- a can-do, growth-oriented mindset and clear communication
- affinity with data analysis
- a natural interest in digital media products
The candidate should have:
- Experience in building microservices in Python and supporting large-scale applications
- Experience building APIs with FastAPI (see the sketch after this list)
- Experience in developing applications in a Kubernetes environment
- Developing batch jobs in Apache Spark (pyspark or Scala)
- Developing streaming applications for Apache Kafka in Python (experience with Kafka is a big plus)
- Working with CI/CD pipelines
- Writing Infrastructure as Code with Terraform
- AWS certification at the Associate level or higher, or willingness to obtain it
- Nice to have: machine learning knowledge
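For flavor, a minimal sketch of the kind of FastAPI endpoint such a recommendation back-end might expose; the route, response model, and stubbed ranking are all hypothetical, not the client's actual API.
```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="recs-api")

class Recommendation(BaseModel):
    article_id: str
    score: float

@app.get("/users/{user_id}/recommendations", response_model=list[Recommendation])
def get_recommendations(user_id: str, limit: int = 10) -> list[Recommendation]:
    """Return top-N recommendations for a user (stubbed ranking)."""
    # In a real service this would call a model or feature store.
    return [Recommendation(article_id=f"a{i}", score=1.0 / (i + 1)) for i in range(limit)]
```
Saved as, say, recs.py, this runs locally with `uvicorn recs:app --reload`.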
9 reasons to join us:
- Today we're working with the technology of tomorrow.
- We don't wait for a change. We are the change.
- We're experts in creating experts (Levi9 academy, Lead9 program for leaders).
- No micromanagement. We are free birds with a clear understanding of what high performance is!
- Learning at Levi9 never stops (unlimited Udemy for Business, meetups, English & German courses, professional trainings).
- Here you can train your body and mind.
- We've gathered the best locations: comfortable, cozy, and pet-friendly offices in Kyiv (5 minutes from Olimpiyska metro station) and Lviv, with regular offline internal events.
- We have a master's degree in work-life balance.
- We are actively supporting Ukraine with constant donations and volunteering.
Simple step to get this job
Click the APPLY NOW button and leave your contacts!
· 26 views · 0 applications · 3d
Big Data Engineer
Ukraine · 3 years of experience · B2 - Upper Intermediate
We are looking for a Middle Big Data Engineer to join one of the largest and strongest Data Units in Ukraine.
With more than 220 experts and over 30 ongoing projects across the EU and US, our Data Unit contributes to industries ranging from agriculture to satellite communications and fintech. We work with cutting-edge technologies, handle massive data volumes, and provide our engineers with opportunities to grow from mentoring roles to becoming solution architects.
Join our ambitious Data team, where business expertise, scientific approach, and advanced engineering meet to unlock the full potential of data in decision-making.
About the Client
Our client is a US-based global leader in in-flight Internet and entertainment services, serving 23 commercial airline partners and nearly 3,600 aircraft worldwide. They also provide connectivity solutions for maritime and government sectors and are one of the world's largest satellite capacity providers.
For over six years, N-iX has been supporting the client across Business Intelligence, Data Analysis, Data Science, and Big Data domains. We are now expanding the team with a Big Data Engineer who will help enhance complex data management and analytics solutions.
Role Overview
As a Big Data Engineer, you will work closely with the client's Data Science team, supporting the end-to-end lifecycle of data-driven solutions, from designing and building data pipelines to deploying ML models into production. You'll play a key role in ensuring high-quality data for model training and inference, as well as contributing to scalable architecture design.
Responsibilities:
- Design, develop, and maintain data pipelines and large-scale processing solutions (see the PySpark sketch after this list).
- Build and support environments (tables, clusters) for data operations.
- Work with AWS SageMaker to deploy ML models into production.
- Collaborate with Data Scientists to prepare and validate datasets.
- Implement and support data validation frameworks (e.g., Great Expectations).
- Migrate PySpark code into optimized dbt SQL queries.
- Contribute to solution architecture and ensure scalability of workflows.
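To give a sense of the pipeline work, here is a minimal PySpark sketch that aggregates raw events into a daily mart; the S3 paths and column names are hypothetical.
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-usage").getOrCreate()

# Hypothetical input: raw connectivity events landed in S3.
events = spark.read.parquet("s3://example-bucket/raw/events/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "aircraft_id")
    .agg(F.count("*").alias("event_count"))
)

# Hypothetical output consumed by downstream dbt models.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_usage/"
)
```
Migrating such logic to dbt, as the responsibilities mention, typically means re-expressing the groupBy/agg step as a SQL model.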
Requirements:
- Strong programming skills in Python (Pandas, PySpark).
- Proficiency in SQL for data modeling and transformations (dbt knowledge is a plus).
- Experience with the AWS ecosystem (Lambda, EMR, S3, DynamoDB, etc.).
- Solid understanding of data pipeline orchestration.
Nice to have:
- Experience with Airflow for workflow automation (a minimal DAG sketch follows this list).
- Knowledge of Docker for containerized deployments.
- Familiarity with data validation frameworks (Great Expectations).
- Hands-on experience with Snowflake or other cloud data warehouses.
- Exposure to ML data preparation.
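And, since orchestration comes up in both lists, a minimal Airflow DAG sketch, assuming Airflow 2.4+; the DAG id, task names, and placeholder callables are hypothetical.
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_dataset():
    # Placeholder: in practice this might run Great Expectations checks.
    print("validating dataset...")

def publish_mart():
    # Placeholder: in practice this might write the mart table.
    print("publishing mart...")

with DAG(
    dag_id="daily_usage_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    validate = PythonOperator(task_id="validate_dataset", python_callable=validate_dataset)
    publish = PythonOperator(task_id="publish_mart", python_callable=publish_mart)
    validate >> publish  # publish only runs after validation succeeds
```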
We offer*:
- Flexible working format: remote, office-based, or a mix of both
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers