Jobs Kyiv

  • 188 views · 24 applications · 5d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, MongoDB, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+ rows).


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and machine learning algorithms.

     

    Key responsibilities (a minimal ETL sketch follows this list):
    • Create ETL pipelines and data management solutions (API, integration logic);
    • Implement various data processing algorithms;
    • Involvement in the creation of forecasting, recommendation, and classification models.

     

    We offer:
    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 82 views · 7 applications · 11d

    Data Engineer

    Ukraine · Product · 2 years of experience · B2 - Upper Intermediate

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.

    At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses, because we are #Together_with_Ukraine.

    About the project:

    You will be part of our product team, which is responsible for building data marts, creating JSON documents based on them, and sending them via Kafka. The new Data Platform is built in AWS.

    We are looking for a motivated and results-oriented data engineer to join our team in developing Data Products on our new Data Platform.

    Your future responsibilities (a minimal PySpark sketch follows this list):

    • Building ETL processes using AWS services (S3, Athena, AWS Glue), Airflow, PySpark, SQL, GitHub, and Kafka
    • Building SQL queries over data sources in PySpark
    • Processing data and writing it to Data Mart Iceberg tables
    • Building an integration solution on the Airflow + Kafka stack
    • Processing data into JSON and publishing it to Kafka
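    For illustration only (not from the posting): a minimal PySpark sketch of the batch pattern the list describes: read a curated mart from S3, serialize rows to JSON, and publish them to Kafka. The bucket, topic, and broker names are hypothetical, and the job assumes the Spark-Kafka connector package is available on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import struct, to_json

    spark = SparkSession.builder.appName("mart-to-kafka").getOrCreate()

    # Read a curated data mart from S3 (path is hypothetical).
    mart = spark.read.parquet("s3://example-bucket/marts/customer_daily/")

    # Serialize each row to a JSON document, as the posting describes.
    payload = mart.select(to_json(struct(*mart.columns)).alias("value"))

    # Publish the batch to Kafka (broker and topic names are hypothetical).
    (payload.write
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("topic", "customer-daily-marts")
        .save())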

    Your skills and experience:

    • Higher education in the field of Computer Science/Engineering
    • 2+ years of relevant experience in data engineering or related roles
    • Knowledge of programming languages: Python, PL/SQL
    • 2+ years of experience in parsing, transforming and storing data in a Big Data environment (e.g., Hadoop, Spark)
    • 1+ years of experience with AWS Lambda, Glue, Athena, and S3
    • Experience with Kafka architecture, configuration and support
    • Experience with database development and optimization (Oracle/PostgreSQL)
    • Experience in developing Big Data pipelines
    • Experience with Avro, JSON data formats
    • Experience with AWS data services and infrastructure management
    • Understanding the principles of working in an Agile environment

    We Offer What Matters Most to You:

    • Competitive Salary: We guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social Package: Official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable Working Conditions: Possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing Program: All employees have access to medical insurance from the first working day, as well as consultations with a psychologist, nutritionist, or lawyer. We also offer discount programs for sports and purchases, family days for children and adults, and in-office massages
    • Learning and Development: Access to over 130 online training resources, corporate training programs in CX, Data, IT Security, Leadership, Agile, as well as a corporate library and English lessons
    • Great Team: Our colleagues form a community where curiosity, talent, and innovation are welcomed. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career Opportunities: We encourage advancement within the bank across different functions
    • Innovations and Technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, VictoriaMetrics, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: SQL-Oracle, PgSQL, MsSQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support Program for Defenders: We maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and are developing the Bank's veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • People are our main value. We support, acknowledge, educate, and actively involve them in driving change
    • One of the largest IT product teams among the country’s banks
    • Recognized as the best employer by EY, Forbes, Randstad, FranklinCovey, and Delo.UA
    • One of the largest lenders to the economy and agricultural business among private banks
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans)
    • One of the largest taxpayers in Ukraine; we paid 6.6 billion UAH in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? Follow us on social media: Facebook, Instagram, LinkedIn


  • 178 views · 34 applications · 11d

    Data Engineer

    Countries of Europe or Ukraine · Product · 3 years of experience · B2 - Upper Intermediate · Ukrainian Product 🇺🇦

    Headway Inc is a global tech company, revolutionizing lifelong learning by creating digital products for over 150 million users worldwide. Our mission is to help people grow. We're proud to be ranked 4th among the World's Top EdTech Companies by TIME magazine. We believe lifelong learning should be accessible, personalized, and impactful to each individual. That's how we change the world and why we bring together exceptional minds.
     

    The core of our achievements is our team. We believe in people and our shared values (SELECT). That's why, together with Yuliya Savchuk, Engineering Manager of the MIT team, we're looking for a Data Engineer to join our team of superstars transforming the EdTech industry.
     

    About the role:

    With business scaling, we see the need to strengthen the team that is working on building a data analytics platform for Headway Inc. We need to ensure that every business area and our products have reliable data to drive deep insights and innovation.

    Data is at the core of our company. You will build and maintain a reliable, efficient, and scalable data infrastructure that enables Headway Inc to leverage data as a strategic asset for informed decision-making, driving innovation, and achieving business goals.
     

    What awaits you on our team:

    • The opportunity to join the team of a global EdTech company that creates socially impactful products for the international market.
    • The opportunity to collaborate with a large team of analysts and marketers to create solutions that have a direct and tangible impact on their work.
    • A wide variety of modern tools, with the independence to decide which technologies are most appropriate to apply.
    • An atmosphere of freedom and responsibility.
    • Decisions and ideas that actively impact the business: you'll own the full development lifecycle, from solution design through to user feedback and iteration.
       

    What will you do:

    At MIT, the Engineering team develops data platforms and automation tools that help teams work more efficiently and make informed marketing decisions. We create solutions that allow us to analyze and utilize data for effective decision-making in marketing strategies, improving results and increasing return on investment. (A minimal orchestration sketch follows the list below.)

    • Communicate and collaborate with the analytics team, being responsible for delivering data to the analytical database for visualization.
    • Create and maintain optimal and scalable pipeline architecture. Develop new pipelines and refine existing ones.
    • Develop ETL/ELT processes and Data Lake architecture.
    • Research and collect large, complex datasets.
    • Identify, design, and implement internal process improvements.
    • Continuously learn, develop, and utilize cutting-edge technologies.
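    A minimal orchestration sketch, for illustration only (Airflow appears in this posting's good-to-have list, and the sketch assumes Airflow 2.4+): a daily two-task pipeline. The DAG id, task names, and helper functions are hypothetical stubs.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        # Stand-in for pulling raw events from a source system.
        print("extracting raw events")


    def load():
        # Stand-in for writing transformed rows to the analytical database.
        print("loading to warehouse")


    with DAG(
        dag_id="marketing_daily_etl",   # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",              # `schedule` requires Airflow 2.4+
        catchup=False,
    ) as dag:
        PythonOperator(task_id="extract", python_callable=extract) >> \
            PythonOperator(task_id="load", python_callable=load)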
       

    What do you need to join us:

    • Experience in production development and knowledge of any programming language, including Python, Golang, Java, etc.
    • Understanding of Data Lakes, Data Warehousing, OLAP/OLTP approaches, and ETL/ELT processes.
    • Proficiency in SQL and experience working with databases.
    • Workflow orchestration experience.
    • Problem-solving skills and a passion for creating efficient, well-tested, and maintainable solutions.
    • Alignment with the values of our team (SELECT).
       

    Good to have:

    • Experience with GCP Data Services and Airflow.
    • Experience with CI/CD in Data Engineering.
    • Knowledge of Data Governance and Security principles.
    • Experience optimizing data pipeline performance.
    • Experience in MarTech or AdTech platforms, like marketing campaign orchestration.
       

    What do we offer:

    • Work within an ambitious team on a socially impactful education product.
    • An office with a reliable shelter, generators, satellite internet, and other amenities.
    • Access to our corporate knowledge base and professional communities.
    • Personal development plan.
    • Partial compensation for English language learning, external training, and courses.
    • Medical insurance coverage with a $70 employee contribution and full sick leave compensation.
    • Company doctor and massage in the office.
    • Sports activities: running, yoga, boxing, and more.
    • Corporate holidays: we go on a week-long paid holiday twice a year to rest and recharge.
    • Supporting initiatives that help Ukraine. Find out more about our projects here.
       

    Working schedule:

    This is a full-time position with a hybrid remote option: you decide for yourself whether to work from the office, remotely, or combine the two.
     

    Are you interested?

    Send your CV!

     

  • 35 views · 1 application · 25d

    Senior Data Streaming Engineer

    Hybrid Remote · Ukraine (Kyiv, Lviv) · 4 years of experience · B2 - Upper Intermediate

    🔹 Who we are!

    At Levi9, we are passionate about what we do. We love our work, and together in a team, we are smarter and stronger. We are looking for skilled team players who make change happen. Are you one of these players?

     

    🔹 About the role

    As a Data Streaming Engineer in the customer team, you will leverage millions of daily connections with readers and viewers across the online platforms as a competitive advantage to deliver reliable, scalable streaming solutions. You will collaborate closely with analysts, data scientists and developers across all departments throughout the entire customer organisation. You will design and build cloud-based data pipelines, both batch and streaming, and their underlying infrastructure. In short: you live up to our principle, You Build It, You Run It. 
    You will be working closely with a tech stack that includes Scala, Kafka, Kubernetes, Kafka Streams, and Snowflake.

     

    🔹 Responsibilities

    • Deliver reliable, scalable streaming solutions
    • Collaborate closely with analysts, data scientists, and developers across all departments throughout the entire organisation
    • Design and build cloud-based data pipelines, both batch and streaming, and their underlying infrastructure
    • You Build It, You Run It
    • Build a robust real-time customer profile by aggregating customers' online behaviour, and allow this profile to be used to recommend other articles on the customers' online platforms (a minimal sketch of the aggregation idea follows this list)
    • Co-develop and cooperate on streaming architectures from inception and design, through deployment, operation, and refinement, to meet the needs of millions of real-time interactions
    • Closely collaborate with business stakeholders, data scientists, and analysts in our daily work, data engineering guild, and communities of practice
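    The team's stack is Scala and Kafka Streams; to keep this document's examples in one language, here is a Python sketch of the profile-aggregation idea using the kafka-python package. The topic, broker, and event field names are hypothetical, and the in-memory counters are a toy stand-in for a real state store.

    import json
    from collections import Counter, defaultdict

    from kafka import KafkaConsumer  # pip install kafka-python

    # Consume page-view events and keep a per-user counter of article
    # categories -- a toy stand-in for a real-time customer profile.
    consumer = KafkaConsumer(
        "page-views",                        # hypothetical topic
        bootstrap_servers="broker:9092",     # hypothetical broker
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    profiles: dict[str, Counter] = defaultdict(Counter)

    for record in consumer:
        view = record.value                  # e.g. {"user": "u1", "category": "sports"}
        profiles[view["user"]][view["category"]] += 1
        # A recommender could now rank articles by the user's top categories:
        # profiles["u1"].most_common(3)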

     

    🔹 Requirements

    • Experience implementing highly available and scalable big data solutions
    • In-depth knowledge of at least one cloud provider, preferably AWS 
    • Proficiency in languages such as Scala, Python, or shell scripting, specifically in the context of streaming data workflows 
    • Extensive experience with streaming technologies, so you can challenge the existing setup.
    • Experience with Infrastructure as Code and CI/CD pipelines 
    • Full understanding of modern software engineering best practices 
    • Experience with Domain-driven design 
    • DevOps mindset 
    • You see the value in a team and enjoy working together with others, also with techniques like pair programming 
    • You either have an AWS certification or are willing to achieve AWS certification within 6 months (minimum: AWS Certified Associate)
    • We welcome candidates living in Ukraine or Europe who are willing and able to travel for business trips to Belgium and the Netherlands.
       

    🔹 Interview stages

    • HR interview
    • Technical interview in English
    • Test assignment
    • Final interview

     

    🔹 9 reasons to join us:

    1. Today we're working with the technology of tomorrow.
    2. We don't wait for change. We are the change.
    3. We're experts in creating experts (Levi9 academy, Lead9 program for leaders).
    4. No micromanagement. We are free birds with a clear understanding of what high performance is!
    5. Learning at Levi9 never stops (unlimited Udemy for Business, meetups, English & German courses, professional trainings).
    6. Here you can train your body and mind.
    7. We've gathered the best locations: comfortable, cosy, and pet-friendly offices in Kyiv (5 minutes from Olimpiyska metro station) and Lviv (overlooking Stryiskyi Park) with regular offline internal events.
    8. We have a master's degree in work-life balance.
    9. We are actively supporting Ukraine with constant donations and volunteering.

     

    🔹 Simple step to get this job

    Click the APPLY NOW button and leave your contacts!

  • 60 views · 2 applications · 24d

    Data Engineer (NLP-Focused)

    Hybrid Remote · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate

    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.

     

    What you will do

    • Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    • Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
    • Implement NLP/LLM-specific data processing: text cleaning and normalization, such as filtering toxic content, de-duplication, de-noising, and detection and deletion of personal data (a minimal cleaning sketch follows this list).
    • Form specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.
    • Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    • Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    • Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    • Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    • Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    • Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
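    A standard-library-only sketch of the cleaning and exact-deduplication step described above, for illustration. The normalization rules and PII patterns are simplified stand-ins; production pipelines such as FineWeb2 also apply fuzzy (MinHash) deduplication and learned toxicity filters.

    import hashlib
    import re
    import unicodedata

    EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
    PHONE = re.compile(r"\+?\d[\d\s()-]{8,}\d")

    def clean(text: str) -> str:
        """Normalize unicode, mask simple PII, collapse whitespace."""
        text = unicodedata.normalize("NFC", text)
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return re.sub(r"\s+", " ", text).strip()

    def dedupe(docs):
        """Drop exact duplicates by content hash."""
        seen = set()
        for doc in docs:
            h = hashlib.sha256(doc.encode("utf-8")).hexdigest()
            if h not in seen:
                seen.add(h)
                yield doc

    corpus = ["Привіт!  Пиши на test@example.com", "Привіт! Пиши на test@example.com"]
    cleaned = list(dedupe(clean(d) for d in corpus))  # -> one document, email masked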

     

    Qualifications and experience needed

    • Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    • NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, can be an advantage given our project's focus. Understanding of FineWeb2 or a similar processing-pipeline approach.
    • Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    • Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    • Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
    • Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    • Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    • Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

     

    A plus would be

    • Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    • Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
    • CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    • Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    • Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.

     

    What we offer

    • Office or remote: it's up to you. You can work from anywhere, and we will arrange your workplace.
    • Remote onboarding.
    • Performance bonuses.
    • We train employees with the opportunity to learn through the company's library, internal resources, and programs from partners.
    • Health and life insurance.  
    • Wellbeing program and corporate psychologist.  
    • Reimbursement of expenses for Kyivstar mobile communication.  
  • 15 views · 1 application · 20d

    PHP developer/ Data Engineer

    Hybrid Remote · Poland, Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · Ukrainian Product 🇺🇦

    Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
    Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.

     

    Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.

     

    Requirements:

    • Design and develop scalable backend services using PHP 7/8.
    • Strong understanding of OOP concepts, design patterns, and clean code principles.
    • Extensive experience with MySQL, including database design, query optimization, and indexing.
    • Experience working with NoSQL databases (e.g., Redis).
    • Proven experience working on high-load projects.
    • Understanding of ETL processes and data integration.
    • Experience working with ClickHouse.
    • Strong experience with API development.
    • Strong knowledge of Symfony 6+ and Yii2.
    • Experience with RabbitMQ.

     

    Nice to Have:

    • AWS services
    • Payment API (Stripe, SolidGate etc.)
    • Docker, GitLab CI
    • Python

     

    Responsibilities:

    • Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
    • API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
    • Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
    • Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
    • Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.

     

    What we offer:

    For personal growth:

    • A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
    • An educational allowance to ensure that your skills stay sharp;
    • English and German classes to strengthen your capabilities and widen your knowledge.

    For comfort:

    • A great environment where you'll work with true professionals and amazing colleagues whom you'll quickly call friends;
    • The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.

    For health:

    • Medical insurance;
    • Twenty-one days of paid sick leave per year;
    • Healthy fruit snacks full of vitamins to keep you energized

    For leisure:

    • Twenty-one days of paid vacation per year;
    • Fun times at our frequent team-building activities.
  • 63 views · 2 applications · 4d

    Senior Data (Analytics) Engineer

    Ukraine · 4 years of experience · B2 - Upper Intermediate

    About the project:

    Our customer is a European online car market with over 30 million monthly users and a presence in 18 countries. The company is now merging with a similar company in Canada and needs support along the way. As a Data & Analytics Engineer, you will play a pivotal role in shaping the future of online car markets and enhancing the experience for millions of car buyers and sellers.

     

    Requirements:

    • 5+ years of experience in Data Engineering or Analytics Engineering roles
    • Strong experience building and maintaining pipelines in BigQuery, Athena, Glue, and Airflow
    • Advanced SQL skills and experience designing dimensional models (star/snowflake)
    • Experience with AWS Cloud
    • Solid Python skills, especially for data processing and workflow orchestration
    • Familiarity with data quality tools like Great Expectations
    • Understanding of data governance, privacy, and security principles
    • Experience working with large datasets and optimizing performance
    • Proactive problem solver who enjoys building scalable, reliable solutions
    • English: Upper-Intermediate or higher, plus great communication skills

       

    Responsibilities:

    • Collaborate with analysts, engineers, and stakeholders to understand data needs and deliver solutions
    • Build and maintain robust data pipelines that deliver clean and timely data
    • Organize and transform raw data into well-structured, scalable models
    • Ensure data quality and consistency through validation frameworks like Great Expectations (a minimal hand-rolled validation sketch follows this list)
    • Work with cloud-based tools like Athena and Glue to manage datasets across different domains
    • Help set and enforce data governance, security, and privacy standards
    • Continuously improve the performance and reliability of data workflows
    • Support the integration of modern cloud tools into the broader data platform
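    The posting names Great Expectations; rather than assume its API, here is a hand-rolled miniature of the same validation idea, with a hypothetical listings table and columns. This is a sketch, not the framework itself.

    import pandas as pd

    # Declarative checks in the spirit of a validation framework;
    # the listings table and its columns are hypothetical.
    CHECKS = [
        ("price is positive",        lambda df: (df["price"] > 0).all()),
        ("vin is unique",            lambda df: df["vin"].is_unique),
        ("country is never missing", lambda df: df["country"].notna().all()),
    ]

    def validate(df: pd.DataFrame) -> list[str]:
        """Return the names of failed checks (empty list means clean data)."""
        return [name for name, check in CHECKS if not check(df)]

    listings = pd.DataFrame(
        {"vin": ["A1", "B2"], "price": [9500, 12900], "country": ["DE", "CA"]}
    )
    assert validate(listings) == []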

     

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 55 views · 1 application · 18d

    Senior Data Engineer

    Hybrid Remote · Ukraine (Kyiv, Lviv) · Product · 3 years of experience · A2 - Elementary

    Solidgate is a payment processing and orchestration platform that helps thousands of businesses accept payments online. We develop cutting-edge fintech solutions to facilitate seamless payment processing for merchants across 150+ countries, spanning Europe to LATAM, the USA to Asia. We are proud to be a part of the history of every company we work with: our infrastructure enables quick scaling to new markets and maximizes revenue.
     

    Key facts:

    • Offices in Ukraine, Poland, and Cyprus
    • 250+ team members
    • 200+ clients went global (Ukraine, US, EU)
    • Visa and Mastercard Principal Membership
    • EMI license in the EU
       

    Solidgate is part of Endeavor, a global community of the world's most impactful entrepreneurs. We're proud to be the first payment orchestrator from Europe to join, and to share our expertise within a network of outstanding global companies.
     

    Here, we're building a strong engineering culture: designing architectures trusted by global leaders. Our engineers don't just maintain systems; they create them. We believe the payments world is shaped by people who think big, act responsibly, and approach challenges with curiosity and drive. That's exactly the kind of teammate we want on our team.
     

    We're now looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.
     

    You'll work side by side with the Senior Engineering Manager of the Platform stream and a team of four data enthusiasts to build the architecture that will become the foundation for all our data products.

    Explore our technology stack ➡️ https://solidgate-tech.github.io/
     

    What you'll own (a minimal layered-storage sketch follows this list):
    — Build the Data Platform from scratch (architecture, design, implementation, scaling)
    — Implement a Data Lake approach and Layered Architecture (bronze → silver data layers)
    — Integrate streaming processing into data engineering practices
    — Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability
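    Purely illustrative, with hypothetical S3 paths: in a layered lake, bronze keeps raw events exactly as they arrived, while silver holds the same data typed, deduplicated, and stored columnar. Reading s3:// paths with pandas assumes the s3fs and pyarrow packages are installed.

    import pandas as pd

    BRONZE = "s3://example-lake/bronze/payments/2024-06-01.jsonl"    # raw, append-only
    SILVER = "s3://example-lake/silver/payments/2024-06-01.parquet"  # typed, clean

    # Bronze -> silver: parse, type, deduplicate, store columnar.
    raw = pd.read_json(BRONZE, lines=True)              # needs s3fs installed
    clean = (
        raw.astype({"amount_cents": "int64", "currency": "string"})
           .drop_duplicates(subset=["payment_id"])      # makes re-runs idempotent
           .assign(created_at=lambda df: pd.to_datetime(df["created_at"], utc=True))
    )
    clean.to_parquet(SILVER, index=False)               # needs pyarrow installed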
     

    What you need to join us:
    — 3+ years of commercial experience as a Data Engineer
    — Strong hands-on experience building data solutions in Python
    — Confident SQL skills
    — Experience with Airflow or similar tools
    — Experience building and running a DWH (BigQuery / Snowflake / Redshift)
    — Expertise in streaming stacks (Kafka / AWS Kinesis)
    — Experience with AWS infrastructure: S3, Glue, Athena
    — High attention to detail
    — Proactive, self-driven mindset
    — Continuous-learning mentality
    — Strong delivery focus and ownership in a changing environment
     

    Nice to have:
    — Background as an analyst or Python developer
    — Experience with DBT, Grafana, Docker, LakeHouse approaches
     

    Competitive corporate benefits:

    • more than 30 days off during the year (20 working days of vacation + days off for national holidays)
    • health insurance and corporate doctor
    • free snacks, breakfasts, and lunches in the office
    • full coverage of professional training (courses, conferences, certifications)
    • yearly performance review 
    • sports compensation
    • competitive salary
    • Apple equipment
       

    📩 Ready to become a part of the team? Then cast aside all doubts and click "apply".

  • 80 views · 11 applications · 13d

    Senior Data Engineer

    Countries of Europe or Ukraine · Product · 4 years of experience · A2 - Elementary

    Our Mission and Vision
    At Solidgate, our mission is clear: to empower outstanding entrepreneurs to build exceptional internet companies. We exist to fuel the builders, the ones shaping the digital economy, with the financial infrastructure they deserve. We're on an ambitious journey to become the #1 payments orchestration platform in the world.
     

    Solidgate is part of Endeavor, a global community of the world's most impactful entrepreneurs. We're proud to be the first payment orchestrator from Europe to join, and to share our expertise within a network of outstanding global companies.
     

    As our processing volume is skyrocketing, the number of engineering teams is growing too; we're already at 14. This gives our Data Engineering function a whole new scale of challenges: not just building data-driven solutions, but creating products and infrastructure that empower other teams to build them autonomously.

    That's why we're launching the Data Platform direction and looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.

    You can check out the overall tech stack of the product here: https://solidgate-tech.github.io/

     

    What you'll own:

    — Build the Data Platform from scratch (architecture, design, implementation, scaling)
    — Implement a Data Lake approach and Layered Architecture (bronze → silver data layers)
    — Integrate streaming processing into data engineering practices
    — Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability

     

    What you need to join us:

    — 3+ years of commercial experience as a Data Engineer
    — Strong hands-on experience building data solutions in Python
    — Confident SQL skills
    — Experience with Airflow or similar tools
    — Experience building and running a DWH (BigQuery / Snowflake / Redshift)
    — Expertise in streaming stacks (Kafka / AWS Kinesis)
    — Experience with AWS infrastructure: S3, Glue, Athena
    — High attention to detail
    — Proactive, self-driven mindset
    — Continuous-learning mentality
    — Strong delivery focus and ownership in a changing environment

     

    Nice to have:

    — Background as an analyst or Python developer
    — Experience with DBT, Grafana, Docker, LakeHouse approaches
     

    Why Join Solidgate?
    High-impact role. You're not inheriting a perfect system; you're building one.
    Great product. We've built a fintech powerhouse that scales fast. Solidgate isn't just an orchestration player; it's the financial infrastructure for modern Internet businesses. From subscriptions to chargeback management, fraud prevention, and indirect tax, we've got it covered.
    Massive growth opportunity. Solidgate is scaling rapidly; this role will be a career-defining move.
    Top-tier tech team. Work alongside our driving force: a proven, results-driven engineering team that delivers. We're also early adopters of cutting-edge fraud and chargeback prevention technologies from the Schemes.
    Modern engineering culture. TBDs, code reviews, solid testing practices, metrics, alerts, and fully automated CI/CD.

    Competitive corporate benefits:

    • more than 30 days off during the year (20 working days of vacation + days off for national holidays)
    • health insurance and corporate doctor
    • free snacks, breakfasts, and lunches in the office
    • full coverage of professional training (courses, conferences, certifications)
    • yearly performance review 
    • sports compensation
    • competitive salary
    • Apple equipment

       

    📩 Ready to become a part of the team? Then cast aside all doubts and click "apply".

  • 20 views · 0 applications · 12d

    Middle/Senior Data Engineer (IRC274051)

    Hybrid Remote · Ukraine (Vinnytsia, Ivano-Frankivsk, Kyiv + 7 more cities) · 3 years of experience · B2 - Upper Intermediate

    Job Description

    - 3+ years of intermediate to advanced SQL
    - 3+ years of Python development (intermediate level is fine: Pandas, NumPy, boto3, seaborn, requests, unittest)
    - Experience building ETLs, preferably in Python
    - Experience with data tools (e.g., Airflow, Grafana, AWS Glue, AWS Athena)
    - Excellent understanding of database design
    - Cloud experience (AWS S3, Lambda, or alternatives)
    - Agile SDLC knowledge
    - Detail-oriented
    - Data-focused
    - Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
    - An ability and interest in working in a fast-paced and rapidly changing environment
    - Self-driven, with the ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:
    - Understanding of basic SVOD store purchase workflows
    - Background in supporting data scientists with data analysis/modelling to support business decision-making
    - Experience in supervising subordinate staff

     

    Job Responsibilities

    - Data analysis, auditing, statistical analysis
    - ETL buildouts for data reconciliation (a minimal reconciliation sketch follows this list)
    - Creation of automatically running audit tools
    - Interactive log auditing to look for potential data problems
    - Help in troubleshooting customer support team cases
    - Troubleshooting and analyzing subscriber reporting issues:
      - Answer management questions related to subscriber count trends
      - App purchase workflow issues
      - Audit/reconcile store subscriptions vs. the user database
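    Purely illustrative, with hypothetical column names: a minimal pandas reconciliation of store subscriptions against a user database, of the kind the last bullet describes.

    import pandas as pd

    # Hypothetical extracts: one from the app store, one from the user database.
    store = pd.DataFrame({"user_id": [1, 2, 3], "store_status": ["active"] * 3})
    userdb = pd.DataFrame({"user_id": [2, 3, 4], "db_status": ["active"] * 3})

    # A full outer join keeps rows that exist on only one side.
    recon = store.merge(userdb, on="user_id", how="outer", indicator=True)

    # Mismatches: subscriptions present in exactly one system.
    only_store = recon[recon["_merge"] == "left_only"]   # in store, not in userdb
    only_db = recon[recon["_merge"] == "right_only"]     # in userdb, not in store
    print(f"{len(only_store)} missing from userdb, {len(only_db)} missing from store")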

    Department/Project Description

    The customer is one of the biggest companies in the market of home entertainment consumer electronics devices and strives to provide its clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the customer's SVOD portfolio.

  • 72 views · 8 applications · 6d

    Data Engineer

    Hybrid Remote · Ukraine (Kyiv, Lutsk) · Product · 1 year of experience · B1 - Intermediate · Ukrainian Product 🇺🇦

    Jooble is a global technology company. Our main product, jooble.org, is an international job search website in 67 countries that aggregates thousands of job openings from various sources on a single page. We are ranked among the TOP-10 most visited websites in the Jobs and Employment segment worldwide. Since 2006, we've grown from a small startup founded by two students into a major player in the online recruitment market with 300+ professionals. Where others see challenges, we create opportunities.

    What You'll Be Doing

    • Design & Build Pipelines: Design, develop, and maintain robust and scalable ETL/ELT pipelines, moving data from diverse sources into our data warehouse (a minimal orchestrator sketch follows this list).
    • Ensure Data Quality & Observability: Implement a comprehensive data observability strategy, including automated quality checks, monitoring, and lineage tracking to ensure data is accurate and trustworthy.
    • Optimize & Automate: Write clean, efficient code to automate data processing and continuously optimize our data storage strategies and query performance.
    • Govern & Document: Contribute to our data governance practices and maintain clear documentation for data processes, models, and architecture in our data catalog.
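    The posting names Airflow or Dagster as orchestrators; as an illustration in the latter, a minimal Dagster asset sketch with hypothetical names. In Dagster, a software-defined asset is roughly one table a pipeline produces, and dependencies are wired through argument names.

    import pandas as pd
    from dagster import Definitions, asset

    @asset
    def raw_jobs() -> pd.DataFrame:
        # Extract: stand-in for reading from a source database or API.
        return pd.DataFrame({"title": ["Data Engineer"], "country": ["UA"]})

    @asset
    def jobs_by_country(raw_jobs: pd.DataFrame) -> pd.DataFrame:
        # Transform: Dagster wires the dependency via the argument name.
        return raw_jobs.groupby("country", as_index=False).size()

    defs = Definitions(assets=[raw_jobs, jobs_by_country])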
       

    What We're Looking For: Core Requirements

    • Experience: 1-3 years of hands-on experience in a data engineering role.
    • Ukrainian proficiency: Upper Intermediate or higher (spoken and written).
    • Core Languages: Strong proficiency in SQL (including complex queries and optimization) and Python for data processing.
    • Databases: Practical experience with relational databases, specifically PostgreSQL and MSSQL.
    • ETL/ELT: Proven experience designing and building pipelines using modern data orchestrators like Airflow or Dagster.
    • Data Modeling: A solid understanding of data warehousing concepts and data modeling techniques (e.g., dimensional modeling).

    Bonus Points (Strongly Desired)
     

    • Streaming Data: Hands-on experience with streaming technologies like Kafka, Debezium, or message queues like RabbitMQ.
    • Specialized Databases: Experience with MPP databases (Greenplum/CloudberryDB) or columnar stores (ClickHouse).
    • Modern Data Stack: Familiarity with tools like dbt, Docker.
    • Basic knowledge of a cloud platform like AWS, GCP, or Azure.
    • A demonstrable interest in the fields of AI and Machine Learning.

    Our Tech Stack Includes

    • Observability & BI: DataHub, Grafana, Metabase
    • Languages: Python, SQL
    • Databases: PostgreSQL, MSSQL, ClickHouse, Greenplum/CloudberryDB
    • Orchestration: Airflow, Dagster
    • Streaming & Messaging: Kafka, Debezium, RabbitMQ

     

    Why You'll Love Working at Jooble

    Flexible Work Environment
    We offer a hybrid format in Kyiv and remote options worldwide. Start your 8-hour workday between 8:00 and 10:00 AM Kyiv time, ensuring collaboration across our team in 20+ countries. We provide all the equipment you need for productivity and comfort, whether remotely or in the office.

    Growth and Development
    We invest in your future with an individual education budget covering soft and hard skills. Career opportunities and regular performance reviews support your growth from entry-level to leadership roles.

    Healthcare and Well-being
    We offer health insurance after three months, plus financial support for medical expenses abroad. Our mental health benefits include access to psychological consultations and 50% reimbursement for therapy sessions.

    Time Off
    Enjoy 24 vacation days, 20 paid sick days, 4 extra sick days without a medical certificate, and 6 recharge days. Take the time you need and return refreshed!

    Our culture

    We embrace a product mindset, continuously innovating and improving our services to meet the needs of our users. We cultivate a workplace that values support, respect, honesty, and the free exchange of ideas. Experience an environment where "stronger together" is more than just a phrase; it's how we operate, fostering creativity and growth.

    Supporting Ukraine

    Since the beginning of the war, Jooble has been actively supporting Ukraine and organizing fundraisers to aid our country. Many of our colleagues are bravely serving on the front lines or volunteering, and we couldn't be prouder of their dedication and efforts. We are committed to supporting our nation in any way we can.

    Ready to Make an Impact? If you're passionate about this opportunity and want to join our team, please send us your CV. Our recruiter will be in touch with you soon.

  • 36 views · 2 applications · 5d

    Data Engineer

    Office Work · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · MilTech 🪖

    Key Responsibilities

    • Design, develop, and maintain scalable data models to support analytics and reporting needs
    • Build, monitor, and optimize ETL/ELT pipelines using best practices in data transformation and automation
    • Collaborate with BI and analytics teams on data requirements
    • Ensure data integrity and consistency via automated data tests (a small example follows this list)
    • Proactively suggest data improvements and reporting ideas
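    As a hedged illustration of what such automated data tests can look like in plain Python (the table and column names below are hypothetical, not from this role):

    import pandas as pd

    def check_orders(df: pd.DataFrame) -> None:
        # Uniqueness: the primary key must not repeat.
        assert df["order_id"].is_unique, "duplicate order_id values"
        # Completeness: required columns must not contain nulls.
        assert df["amount"].notna().all(), "null amounts found"
        # Consistency: amounts must be non-negative.
        assert (df["amount"] >= 0).all(), "negative amounts found"

    check_orders(pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 0.0, 5.5]}))

    In practice the same checks are often expressed declaratively, for example as dbt tests.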

     

    Required Qualifications

    • 3+ years of experience in analytics engineering, data engineering, or a related field
    • Advanced proficiency in SQL, with experience in writing efficient data modeling queries
    • Hands-on experience with modern data transformation frameworks (e.g. dbt, Dataform, or similar)
    • Strong understanding of data warehousing principles and data architecture best practices
    • Familiarity with ETL/ELT methodologies and workflow orchestration tools
    • Experience working with cloud-based data warehouses and databases (Snowflake, PostgreSQL, Redshift, or similar)
    • Knowledge of BI tools (Power BI, Tableau, Looker, or similar)
    • Basic programming skills in Python or another scripting language for automation
    • Solid understanding of data governance, lineage, and security best practices
    • Experience with Git-based version control and CI/CD workflows for data transformations

     

    Preferred Qualifications

    • Deep understanding of data warehouse concepts and database maintenance
    • Background in business intelligence, analytics, or software engineering
    • Self-motivated and proactive, with the ability to independently uncover and solve problems
  • Β· 66 views Β· 1 application Β· 5d

    Junior Database Engineer

    Hybrid Remote Β· Ukraine (Kyiv) Β· Product Β· 1 year of experience Β· B1 - Intermediate

    As a Junior Database Engineer, you will be responsible for maintaining and optimizing modern database systems. Your role will include backup management, replication monitoring, query optimization, and close collaboration with developers and DevOps engineers. This is an excellent opportunity for someone with a strong theoretical foundation in databases who wants to gain practical experience in real-world, high-performance environments.

     

    Key Responsibilities

    • Configure, monitor, and test backups; perform recovery checks.
    • Monitor database replication and troubleshoot basic replication errors.
    • Collect and analyze slow query statistics; participate in query optimization.
    • Monitor database performance and apply necessary adjustments.
    • Install and configure components of database architecture.
    • Collaborate with developers and DevOps engineers to solve cross-team tasks.
    • Participate in testing and deployment of new solutions.
    • Write and debug scripts in Bash or Python to automate operations (a small example follows this list).
    • Contribute to technical documentation.
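    As a rough sketch of the kind of automation involved, the following Python script reports the slowest PostgreSQL statements; it assumes the pg_stat_statements extension is enabled, and the connection string is a placeholder:

    import psycopg2

    # Top-5 slowest statements by mean execution time (PostgreSQL 13+;
    # older versions expose mean_time instead of mean_exec_time).
    QUERY = """
        SELECT query, calls, mean_exec_time
        FROM pg_stat_statements
        ORDER BY mean_exec_time DESC
        LIMIT 5;
    """

    with psycopg2.connect("dbname=appdb user=monitor") as conn:  # placeholder DSN
        with conn.cursor() as cur:
            cur.execute(QUERY)
            for query, calls, mean_ms in cur.fetchall():
                print(f"{mean_ms:8.1f} ms  x{calls}  {query[:80]}")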

     

    Requirements

    • Understanding of modern DBMS architecture (PostgreSQL, MySQL, MongoDB, etc.).
    • Knowledge of relational data models and normalization principles.
    • Understanding of ACID transaction properties.
    • Experience installing and configuring at least one DBMS.
    • Skills in writing SQL queries.
    • Familiarity with monitoring systems (Prometheus, Grafana, PMM, etc.).
    • Experience with Linux (Ubuntu/Debian).
    • Ability to write simple automation scripts (Shell or Python).
    • Strong sense of responsibility and attention to detail.

     

    Nice-to-Have

    • Technical degree or final-year student (IT, Cybersecurity, Mathematics, Informatics, etc.).
    • Experience with high-load projects.
    • Familiarity with Docker.
    • Knowledge of replication (Master-Replica, WAL, GTID, MongoDB replica sets).
    • Understanding of indexing and its impact on performance.
    • Familiarity with cloud database services (AWS RDS, Azure Database, GCP Cloud SQL).

     

    What We Offer

    • Competitive salary based on experience and skills.
    • Flexible working schedule (remote/hybrid).
    • 17 paid vacation days and 14 paid sick days.
    • Mentorship and clear career growth path towards Senior Database Engineer.
    • Access to courses, certifications, and conferences.
    • Collaborative team and knowledge-sharing environment.
    • International projects with modern tech stack.
  • Β· 22 views Β· 1 application Β· 4d

    Senior Data Engineer (Python, Fast API)

    Hybrid Remote Β· Ukraine (Kyiv, Lviv) Β· 5 years of experience Β· B2 - Upper Intermediate

    πŸ”ΉWho we are!

    At Levi9, we are passionate about what we do. We love our work and together in a team, we are smarter and stronger. We are looking for skilled team players who make change happen. Are you one of these players?

     

    πŸ”ΉAbout the project

    Our client is a leading media company in Western Europe, delivering high-quality content across various platforms, including newspapers, magazines, radio, TV, websites, apps, and podcasts. Their brands reach millions of people daily, shaping the media landscape with independent and trusted journalism.
     

    πŸ”ΉAbout the job

    You’ll be working on a personalisation engine that serves all customers of our client, across all media offerings. The team is cross-functional, with data engineers, ML engineers, and data scientists.

     

    πŸ”ΉResponsibilities

    • Maintain and extend our recommendation back-end.
    • Support operational excellence through practices like code review and pair programming.
    • Share responsibility, with the whole team, for operating our services: actively monitor the applications and their infrastructure, and intervene to resolve operational problems whenever they arise.

       

    πŸ”ΉYour key skills:

    • analyzing and troubleshooting technical issues
    • communicating technical and functional requirements with people outside the team

       

    πŸ”ΉRequired qualifications:

    • a positive, constructive mindset and the ability to give feedback accordingly
    • high standards for the quality of the work you deliver
    • a degree in computer science, software engineering, a related field, or relevant prior experience
    • 5+ years of experience across the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
    • a can-do, growth-oriented mentality and clear communication
    • affinity with data analysis
    • a natural interest in digital media products

     

    πŸ”ΉThe candidate should have:

    • Experience in building microservices in Python and supporting large-scale applications
    • Experience building APIs with FastAPI (a minimal sketch follows this list)
    • Experience in developing applications in a Kubernetes environment
    • Developing batch jobs in Apache Spark (pyspark or Scala)
    • Developing streaming applications for Apache Kafka in Python (experience with Kafka is a big plus)
    • Working with CI/CD pipelines
    • Writing Infrastructure as Code with Terraform
    • AWS certification at the Associate level or higher, or willingness to obtain it
    • Nice to have: machine learning knowledge
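    For flavor, a minimal FastAPI endpoint in the spirit of a recommendation back-end might look like this; the route and response shape are illustrative assumptions, not the client's actual API:

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/recommendations/{user_id}")
    def get_recommendations(user_id: str, limit: int = 10) -> dict:
        # A real service would query a model or feature store here;
        # an empty item list keeps the sketch self-contained.
        return {"user_id": user_id, "items": [], "limit": limit}

    # Run locally with: uvicorn main:app --reload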

     

    πŸ”Ή9 reasons to join us:

    1. Today we're working with the technology of tomorrow.
    2. We don't wait for a change. We are the change.
    3. We're experts in creating experts (Levi9 academy, Lead9 program for leaders).
    4. No micromanagement. We are free birds with a clear understanding of what high performance is!
    5. Learning in Levi9 never stops (unlimited Udemy for Business, meetups, English & German courses, professional trainings).
    6. Here you can train your body and mind.
    7. We've gathered the best locations - comfortable, cozy, and pet-friendly offices in Kyiv (5 minutes from Olimpiyska metro station) and Lviv, with regular offline internal events.
    8. We have a master's degree in work-life balance.
    9. We are actively supporting Ukraine with constant donations and volunteering.

     

    πŸ”ΉSimple step to get this job

    Click the APPLY NOW button and leave your contacts!

  • Β· 26 views Β· 0 applications Β· 3d

    Big Data Engineer

    Ukraine Β· 3 years of experience Β· B2 - Upper Intermediate

    We are looking for a Middle Big Data Engineer to join one of the largest and strongest Data Units in Ukraine.

    With 220+ experts and over 30 ongoing projects across the EU and US, our Data Unit contributes to industries ranging from agriculture to satellite communications and fintech. We work with cutting-edge technologies, handle massive data volumes, and provide our engineers with opportunities to grow from mentoring roles to becoming solution architects.

    Join our ambitious Data team, where business expertise, scientific approach, and advanced engineering meet to unlock the full potential of data in decision-making.


    About the Client

    Our client is a US-based global leader in in-flight Internet and entertainment services, serving 23 commercial airline partners and nearly 3,600 aircraft worldwide. They also provide connectivity solutions for maritime and government sectors and are one of the world’s largest satellite capacity providers.

    For over six years, N-iX has been supporting the client across Business Intelligence, Data Analysis, Data Science, and Big Data domains. We are now expanding the team with a Big Data Engineer who will help enhance complex data management and analytics solutions.


    Role Overview

    As a Big Data Engineer, you will work closely with the client’s Data Science team, supporting the end-to-end lifecycle of data-driven solutions β€” from designing and building data pipelines to deploying ML models into production. You’ll play a key role in ensuring high-quality data for model training and inference, as well as contributing to scalable architecture design.


    Responsibilities:

    • Design, develop, and maintain data pipelines and large-scale processing solutions.
    • Build and support environments (tables, clusters) for data operations.
    • Work with AWS SageMaker to deploy ML models into production.
    • Collaborate with Data Scientists to prepare and validate datasets.
    • Implement and support data validation frameworks (e.g., Great Expectations).
    • Migrate PySpark code into optimized dbt SQL models (a small sketch follows this list).
    • Contribute to solution architecture and ensure scalability of workflows.
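    As a hedged sketch of the migration work described above, consider a small PySpark aggregation and a roughly equivalent dbt SQL model; the paths, tables, and columns are hypothetical, not the client's actual schema:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily_usage").getOrCreate()

    # Hypothetical source table of in-flight connectivity sessions.
    sessions = spark.read.parquet("s3://example-bucket/sessions/")

    daily = (
        sessions
        .groupBy("flight_id", F.to_date("started_at").alias("day"))
        .agg(F.count("*").alias("sessions"),
             F.sum("bytes_used").alias("bytes_used"))
    )

    # Roughly equivalent dbt model (SQL):
    #   SELECT flight_id, CAST(started_at AS DATE) AS day,
    #          COUNT(*) AS sessions, SUM(bytes_used) AS bytes_used
    #   FROM {{ ref('sessions') }}
    #   GROUP BY 1, 2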


    Requirements:

    • Strong programming skills in Python (Pandas, PySpark).
    • Proficiency in SQL for data modeling and transformations (dbt knowledge is a plus).
    • Experience with the AWS ecosystem (Lambda, EMR, S3, DynamoDB, etc.).
    • Solid understanding of data pipeline orchestration.


    Nice to have:

    • Experience with Airflow for workflow automation.
    • Knowledge of Docker for containerized deployments.
    • Familiarity with data validation frameworks (Great Expectations).
    • Hands-on experience with Snowflake or other cloud data warehouses.
    • Exposure to ML data preparation.

       

    We offer*:

    • Flexible working format - remote, office-based, or a mix of both
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers
