Jobs: Data Engineer

  • 570 views · 55 applications · 7d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+ rows).


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of data science and machine learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, integration logic; a minimal pipeline sketch follows this list);
    • Implement various data processing algorithms;
    • Participate in creating forecasting, recommendation, and classification models.
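    For illustration, a minimal sketch (not from the posting) of the kind of pipeline step this role describes: streaming a 10M+ row table through pandas in chunks so it never has to fit in memory. The connection string, table, and column names are hypothetical.

```python
import pandas as pd
import sqlalchemy as sa

# Hypothetical DSN and schema, for illustration only.
engine = sa.create_engine("postgresql://user:password@host:5432/analytics")

# Stream the large table in 500k-row chunks instead of loading it whole.
chunks = pd.read_sql_query(
    "SELECT user_id, amount, created_at FROM events",
    engine,
    parse_dates=["created_at"],
    chunksize=500_000,
)

# Aggregate each chunk, then combine the partial results.
partials = [
    chunk.groupby(chunk["created_at"].dt.date)["amount"].sum()
    for chunk in chunks
]
daily_totals = pd.concat(partials).groupby(level=0).sum()
print(daily_totals.head())
```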

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;

    • Building interesting projects from scratch using new technologies;

    • Personal and professional development opportunities;

    • Competitive salary fixed in USD;

    • Paid vacation and sick leaves;

    • Flexible work schedule;

    • Friendly working environment with minimal hierarchy;

    • Team building activities, corporate events.

  • 109 views · 21 applications · 10d

    Data Engineer

    Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate

    Lead the development and scaling of our scientific knowledge graph: ingesting, structuring, and enriching massive datasets from research literature and global data sources into meaningful, AI-ready insights.

     

    Requirements: 

    - Strong experience with knowledge graph design and implementation (Neo4j, RDFLib, GraphQL, etc.; a minimal RDFLib sketch follows this list).

    - Advanced Python for data engineering, ETL, and entity processing (Spark/Dask/Polars). 

    - Proven track record with large dataset ingestion (tens of millions of records). 

    - Familiarity with life-science or biomedical data (ontologies, research metadata, entity linking). 

    - Experience with Airflow/Dagster/dbt, and data APIs (OpenAlex, ORCID, PubMed). 

    - Strong sense of ownership, precision, and a delivery mindset.

    Nice to Have:

    - Domain knowledge in life sciences, biomedical research, or related data models. 

    - Experience integrating vector/semantic embeddings (Pinecone, FAISS, Weaviate).
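    As a rough illustration of the knowledge-graph work described above, a minimal RDFLib sketch (RDFLib being one of the libraries named in the requirements); the namespace, identifiers, and predicates are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace for illustration.
EX = Namespace("http://example.org/science/")

g = Graph()
g.bind("ex", EX)

# A paper, its author, and an entity link into an ontology term.
paper = EX["paper/10.1234-abcd"]
author = EX["author/0000-0002-1825-0097"]

g.add((paper, RDF.type, EX.Paper))
g.add((paper, EX.title, Literal("A study of protein folding")))
g.add((paper, EX.hasAuthor, author))
g.add((author, EX.name, Literal("Jane Doe")))
g.add((paper, EX.mentions, EX["ontology/GO_0006457"]))  # linked entity

print(g.serialize(format="turtle"))
```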

     

    We offer:

    • Attractive financial package

    • Challenging projects

    • Professional & career growth

    • Great atmosphere in a friendly small team

  • 84 views · 2 applications · 25d

    Data Engineer

    Ukraine · Product · 2 years of experience · B2 - Upper Intermediate

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.

    At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.

    Your responsibilities:

    • Develop and maintain scalable ETL/ELT processes for data collection, transformation, and loading
    • Design and implement robust data pipelines for real-time and batch data processing
    • Ensure data quality, consistency, and availability for analytical and operational systems
    • Optimize query performance and database architecture
    • Automate the deployment and monitoring of data infrastructure components
    • Work closely with analytics, development, and business teams to implement data-driven solutions

    Preferred qualifications:

    • 2+ years of relevant experience in data engineering
    • Solid commercial experience with Python and Groovy
    • Deep knowledge of Apache NiFi and hands-on experience in building and administering complex data flows
    • Proficient in PostgreSQL, understanding of architecture, experience in query optimization and data schema design
    • Experience with Apache Kafka, building real-time data pipelines (a minimal consumer sketch follows this list)
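    A minimal sketch of the kind of real-time pipeline the Kafka requirement points at, using the kafka-python client; the topic, broker address, and message shape are hypothetical.

```python
import json

from kafka import KafkaConsumer

# Hypothetical topic and broker, for illustration only.
consumer = KafkaConsumer(
    "card-transactions",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Toy transformation step: route large transactions to a review queue.
    if event.get("amount_uah", 0) > 100_000:
        print(f"flag for review: {event['tx_id']}")
```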

    Will be a plus:

    • Experience with Apache Airflow, workflow organization, monitoring and automation
    • Working with Apache Spark frameworks for distributed big data processing
    • Experience with AWS Athena and S3, interactive query services
    • Experience with Apache Iceberg, understanding of modern table formats for data lakes
    • Experience with Terraform, practice using the Infrastructure as Code (IaC) approach
    • Experience with Kubernetes, containerization and service orchestration

    We offer what matters most to you:

    • Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
    • Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile; corporate library and English lessons
    • Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career opportunities: we encourage advancement within the bank across functions
    • Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MSSQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bank's veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • Our main value is people, and we support and recognize them, educate them and involve them in changes. Join Raif's team because for us YOU matter!
    • One of the largest lenders to the economy and agricultural business among private banks
    • Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, СМІЛИВІ)
    • One of the largest IT product teams among the country's banks
    • One of the largest taxpayers in Ukraine; 6.6 billion UAH were paid in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? Follow us on social media:

    Facebook, Instagram, LinkedIn


  • 53 views · 15 applications · 11d

    Senior Data Engineer (PySpark / Data Infrastructure)

    Full Remote · Worldwide · Product · 5 years of experience · C1 - Advanced

    Senior Data Engineer (PySpark / Data Infrastructure)

    We're hiring a Senior Data Engineer to help lead the next phase of our data platform's growth.

    At Forecasa, we provide enriched real estate transaction data and analytics to private lenders and investors. Our platform processes large volumes of public data, standardizes and enriches it, and delivers actionable insights that drive lending decisions.

    We recently completed a migration from a legacy SQL-based ETL stack (PostgreSQL/dbt) to PySpark, and we're now looking for a senior engineer to take ownership of the new pipeline, maintain and optimize it, and develop new data-driven features to support our customers and internal analytics.

    What You'll Do

    • Own and maintain our PySpark-based data pipeline, ensuring stability, performance, and scalability (a minimal sketch follows this list).
    • Design and build new data ingestion, transformation, and validation workflows.
    • Optimize and monitor data jobs using Airflow, Kubernetes, and S3.
    • Collaborate with data analysts, product owners, and leadership to define data needs and deliver clean, high-quality data.
    • Support and mentor junior engineers working on scrapers, validation tools, and quality monitoring dashboards.
    • Contribute to the evolution of our data infrastructure and architectural decisions.
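    To make the day-to-day concrete, a minimal sketch of a PySpark batch step in the spirit of the pipeline described above; the S3 paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions-curation").getOrCreate()

# Hypothetical S3 locations and schema, for illustration only.
raw = spark.read.parquet("s3a://example-bucket/raw/transactions/")

curated = (
    raw.dropDuplicates(["recording_id"])                          # de-duplicate records
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount") > 0)                               # basic validation
       .withColumn("recorded_date", F.to_date("recorded_at"))
)

(curated.write
        .mode("overwrite")
        .partitionBy("state")
        .parquet("s3a://example-bucket/curated/transactions/"))
```

    Partitioning the curated output by a query-relevant column (here, state) keeps downstream reads cheap.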

    Our Tech Stack

    Python · PySpark · PostgreSQL · dbt · Airflow · S3 · Kubernetes · GitLab · Grafana

    What We're Looking For

    • 5+ years of experience in data engineering or backend systems with large-scale data processing.
    • Strong experience with PySpark, including building scalable data pipelines and working with large datasets.
    • Solid command of SQL, data modeling, and performance tuning (especially in PostgreSQL).
    • Experience working with orchestration tools like Airflow, and containers via Docker/Kubernetes.
    • Familiarity with cloud storage (preferably S3) and modern CI/CD workflows.
    • Ability to work independently and communicate clearly in a remote, async-first environment.

    Bonus Points

    • Background in real estate or financial data
    • Experience with data quality frameworks or observability tools (e.g., Great Expectations, Grafana, Prometheus)
    • Experience optimizing PySpark jobs for performance and cost-efficiency
  • 12 views · 1 application · 6d

    Presale engineer

    Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary

    Requirements:

    • Knowledge of the core functionality of virtualization platforms;
    • Experience implementing and migrating workloads in virtualized environments;
    • Experience in complex IT solutions and Hybrid Cloud solution projects;
    • Good understanding of IT-infrastructure services is a plus;
    • Strong troubleshooting skills for complex environments in case of failure;
    • At least basic knowledge of networking and information security is an advantage;
    • Hyper-V, Proxmox, or VMware experience would be an advantage;
    • Experience in the area of services outsourcing (as customer and/or provider) is an advantage;
    • 2+ years of work experience in a similar position;
    • Scripting and programming experience in PowerShell/Bash is an advantage;
    • Strong team communication skills, both verbal and written;
    • Experience in writing and preparing technical documentation;
    • English skills: intermediate level is the minimum and is mandatory for communication with global teams;
    • Industry certification focused on the relevant solution area.

    Areas of responsibility include:

    • Participating in deployment and IT-infrastructure migration projects and Hybrid Cloud solution projects; client support;
    • Consulting on migrating IT workloads in complex infrastructures;
    • Presales support (articulating service value in the sales process; up-sell and cross-sell capability);
    • Project documentation: technical concepts;
    • Education and development in the professional area, including necessary certifications.
  • 52 views · 2 applications · 27d

    Senior Market Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    We are looking for a skilled and experienced Software Engineer to join our team, building high-performance real-time data pipelines to process financial market data, including security prices for various asset classes such as equities, options, futures, and more. You will play a key role in designing, developing, and optimizing data pipelines that handle large volumes of data with low latency and high throughput, ensuring that our systems can process market data in real time and batch modes.

     

    Key Responsibilities:

    • Architect, develop, and enhance market data systems
    • Contribute to the software development lifecycle in a collaborative team environment, including design, implementation, testing, and support
    • Design highly efficient, scalable, mission-critical systems
    • Maintain good software quality and test coverage
    • Participate in code reviews
    • Troubleshoot incidents and reported bugs

     

    Requirements:

    • Bachelor's or advanced degree in Computer Science or Electrical Engineering
    • Proficiency in the following programming languages: Java, Python, or Go
    • Prior experience working with equities or futures market data, such as CME data or US equities options, is a must
    • Experience in engineering and supporting market data feed handlers
    • Technically fluent (Python, SQL, JSON, ITCH, FIX, CSV); comfortable discussing pipelines and validation specs
    • Prior experience working on tick data storage, such as KDB+ or ClickHouse (a minimal tick-aggregation sketch follows this list)
    • Familiarity with time series analysis
    • Good understanding of the Unix/Linux programming environment
    • Expertise with SQL and relational databases
    • Excellent problem-solving and communication skills
    • Self-starter and works well in a fast-paced environment
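    As a small illustration of tick-data handling, a sketch that aggregates raw trades into one-minute OHLCV bars with pandas; the input columns (ts, price, size) are hypothetical.

```python
import pandas as pd

# Hypothetical tick stream: timestamp, trade price, trade size.
ticks = pd.DataFrame(
    {
        "ts": pd.to_datetime(
            ["2024-01-02 09:30:00.10", "2024-01-02 09:30:12.52",
             "2024-01-02 09:30:48.07", "2024-01-02 09:31:03.33"]
        ),
        "price": [412.10, 412.25, 412.05, 412.40],
        "size": [100, 250, 50, 300],
    }
).set_index("ts")

# One-minute OHLC bars plus traded volume.
bars = ticks["price"].resample("1min").ohlc()
bars["volume"] = ticks["size"].resample("1min").sum()
print(bars)
```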
  • 88 views · 4 applications · 24d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate MilTech 🪖

    Who We Are
     

    OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.

    Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.

    We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.

    Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
     

    Who we're looking for

    OpenMinds is seeking a skilled and curious Data Engineer who's excited to design and build data systems that power meaningful insight. You'll work closely with a passionate team of behavioral scientists and ML engineers on creating a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
     

    In the position you will:

    • Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management
    • Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
    • Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
    • Stay up-to-date with trends in data engineering and apply modern tools and practices
    • Define and implement best practices for data processing, storage, and governance
    • Translate complex requirements into efficient data workflows that support threat detection and response
       

    We are a perfect match if you have:

    • 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
    • Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
    • Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
    • Ability to write, test and deploy production-ready code
    • Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent
    • Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker); a minimal BigQuery sketch follows this list
    • Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
    • Experience in DevOps (Docker/K8s, IaC, CI/CD) and MLOps
    • Fluent in English with excellent communication and cross-functional collaboration skills
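    A rough sketch of the GCP data-ecosystem familiarity mentioned above, using the google-cloud-bigquery client; the project, dataset, table, and query are hypothetical.

```python
from google.cloud import bigquery

# Credentials are picked up from the environment (GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client(project="example-project")  # hypothetical project

# Hypothetical table of collected posts; aggregate mentions per day.
query = """
    SELECT DATE(published_at) AS day, COUNT(*) AS mentions
    FROM `example-project.narratives.posts`
    WHERE narrative_id = @narrative
    GROUP BY day
    ORDER BY day
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("narrative", "STRING", "n-42")]
    ),
)
for row in job.result():
    print(row.day, row.mentions)
```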
       

    We offer:

    • Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
    • Competitive market salary
    • Opportunity to present your work on tier 1 conferences, panels, and briefings behind closed doors
    • Work face-to-face with world-leading experts in their fields, who are our partners and friends
    • Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
    • Unlimited vacation and leave policies
    • Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
    • A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
  • 24 views · 5 applications · 24d

    Senior ML/GenAI Engineer

    Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    Senior ML Engineer 

    Full-time / Remote 

     

    About Us

    ExpoPlatform is a UK-based company founded in 2013, delivering advanced technology for online, hybrid, and in-person events across 30+ countries. Our platform provides end-to-end solutions for event organizers, including registration, attendee management, event websites, and networking tools.

     

    Role Responsibilities:

    • Develop AI agents, tools for AI agents, and APIs as a service
    • Prepare development and deployment documentation
    • Participate in R&D activities of Data Science team

     

    Required Skills & Experience:

    • 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
    • 5+ years of experience in software development in Python
    • Hands-on experience with LLM, RAG, and AI agent development
    • Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
    • Hands-on experience using AI tools for software development to increase efficiency and code quality, including AI-assisted code review
    • Knowledge of SQL, NoSQL, and vector databases
    • Understanding of embedding vectors and semantic search (a minimal sketch follows this list)
    • Proficiency in Git (Bitbucket) and Docker
    • Upper-Intermediate (B2+) or higher level of English
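    To illustrate the embedding-vectors-and-semantic-search item, a minimal NumPy sketch over toy vectors; in practice the vectors would come from an embedding model or a vector database, both of which are assumed here.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" for three documents and a query.
docs = {
    "pricing page": np.array([0.9, 0.1, 0.0, 0.2]),
    "refund policy": np.array([0.1, 0.8, 0.3, 0.0]),
    "api reference": np.array([0.0, 0.2, 0.9, 0.4]),
}
query = np.array([0.1, 0.7, 0.4, 0.1])

# Semantic search: rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]), reverse=True)
print(ranked[0])  # -> "refund policy"
```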

     

    Would Be a Plus:

    • Hands-on experience with SLM and LLM fine-tuning
    • Education in Data Science, Computer Science, Applied Math or similar
    • AWS certifications (AWS Certified ML or equivalent)
    • Experience with TypeSense
    • Experience with speech recognition, speech-to-text ML models

     

    What We Offer:

    • Career growth with an international team.
    • Competitive salary and financial stability.
    • Flexible working hours (Mon-Fri, 8 hours).
    • Free English courses and a budget for education


     

  • 35 views · 3 applications · 26d

    Senior Data Engineer at Payments AI Team

    Hybrid Remote · Ukraine · Product · 3 years of experience · B2 - Upper Intermediate

    Job Description

    As a Senior Data Engineer on the Wix Payments AI Team, you'll play a crucial role in the design and integration of emerging AI solutions into the Payments product. You'll have significant responsibilities which include:

    • Developing & maintaining infrastructure for both generative AI and classical data science applications.
    • Researching emerging AI technology stacks and methodologies to identify optimal solutions.
    • Monitoring data pipeline performance and troubleshooting issues.
    • Leading & driving the entire lifecycle of a typical team project: ideation → map business constraints, research and evaluate alternative solutions → design & implement a proof-of-concept in collaboration with various stakeholders across the organization, including data engineers, analysts, data scientists and product managers.

     

    Qualifications

    • Proficient in Trino SQL (with the ability to craft complex queries) and highly skilled in Python, with expertise in Python frameworks (e.g., Streamlit, Airflow, Pyless, etc.); a minimal Trino-plus-Streamlit sketch follows this list.
    • Ability to design, prototype, code, test and deploy production-ready systems.
    • Experience with a versatile range of infrastructure, server and frontend tech stacks.
    • Experience implementing and integrating GenAI models, particularly LLMs, into production systems. 
    • Experience with AI agentic technologies (e.g. MCP, A2A, ADK) - an advantage.
    • An independent and quick learner.
    • Passion for product and technical leadership.
    • Business-oriented thinking and skills: data privacy and system security awareness, understanding of business objectives and how to measure their key performance indicators (KPIs), derive and prioritize actionable tasks from complex business problems, business impact guided decision making. 
    • Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances.
    • Fluent in English with strong communication abilities
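    For flavor, a minimal sketch pairing the Trino Python client with Streamlit, two of the tools named above; the coordinator host, catalog, schema, and table are hypothetical.

```python
import pandas as pd
import streamlit as st
import trino

# Hypothetical Trino coordinator and schema, for illustration only.
conn = trino.dbapi.connect(
    host="trino.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="payments",
)

st.title("Daily payment volume")

# Complex queries would go here; this one just aggregates a toy table.
df = pd.read_sql(
    "SELECT DATE(created_at) AS day, SUM(amount_usd) AS volume "
    "FROM transactions GROUP BY 1 ORDER BY 1",
    conn,
)
st.line_chart(df.set_index("day")["volume"])
```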

     

    About the Team

    We're the Wix Payments team.

    We provide Wix users with the best way to collect payments from their customers and manage their Wix income online, in person, and on-the-go. We're passionate about crafting the best experience for our users, and empowering any business on Wix to realize its full financial potential. We have developed our own custom payment processing solution that blends many integrations into one clean and intuitive user interface. We also build innovative products that help our users manage their cash and grow their business. The Payments AI team is instrumental in promoting AI-based capabilities within the payments domain and is responsible for ensuring the company is always at the forefront of the AI revolution.

     

  • 53 views · 9 applications · 6d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    We are seeking a talented and experienced Data Engineer to join our professional services team of 50+ engineers on a full-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience in cloud platforms such as AWS and Google Cloud. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.


    About us: 

    Opsfleet is a boutique services company that specializes in cloud infrastructure, data, AI, and human-behavior analytics to help organizations make smarter decisions and boost performance.

    Our experts provide end-to-end solutions, from data engineering and advanced analytics to DevOps, ensuring scalable, secure, and AI-ready platforms that turn insights into action.

     

    Role Overview

    As a Data Engineer at Opsfleet, you will lead the entire data lifecycle: gathering and translating business requirements, ingesting and integrating diverse data sources, and designing, building, and orchestrating robust ETL/ELT pipelines with built-in quality checks, governance, and observability. You'll partner with data scientists to prepare, deploy, and monitor ML/AI models in production, and work closely with analysts and stakeholders to transform raw data into actionable insights and scalable intelligence.

     

    What You’ll Do

    * E2E Solution Delivery: Lead the full spectrum of data projects, from requirements gathering and data ingestion to modeling, validation, and production deployment.

    * Data Modeling: Develop and maintain robust logical and physical data models, such as star and snowflake schemas (a minimal sketch follows this list), to support analytics, reporting, and scalable data architectures.

    * Data Analysis & BI: Transform complex datasets into clear, actionable insights; develop dashboards and reports that drive operational efficiency and revenue growth.

    * ML Engineering: Implement and manage model-serving pipelines using the cloud's MLOps toolchain, ensuring reliability and monitoring in production.

    * Collaboration & Research: Partner with cross-functional teams to prototype solutions, identify new opportunities, and drive continuous improvement.
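    To ground the star-schema mention, a minimal SQLAlchemy sketch of one fact table with two dimensions; the table and column names are hypothetical.

```python
from sqlalchemy import Column, Date, ForeignKey, Integer, Numeric, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DimCustomer(Base):
    __tablename__ = "dim_customer"
    customer_key = Column(Integer, primary_key=True)
    name = Column(String)
    segment = Column(String)

class DimDate(Base):
    __tablename__ = "dim_date"
    date_key = Column(Integer, primary_key=True)
    full_date = Column(Date)

class FactSales(Base):
    __tablename__ = "fact_sales"  # fact table referencing the dimensions
    sale_id = Column(Integer, primary_key=True)
    customer_key = Column(Integer, ForeignKey("dim_customer.customer_key"))
    date_key = Column(Integer, ForeignKey("dim_date.date_key"))
    amount = Column(Numeric(18, 2))

# Create the schema in a throwaway in-memory database for demonstration.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
```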

     

    What We’re Looking For

    Experience: 4+ years in a data-focused role (Data Engineer, BI Developer, or similar)

    Technical Skills: Proficient in SQL and Python for data manipulation, cleaning, transformation, and ETL workflows. Strong understanding of statistical methods and data modeling concepts.

    Soft Skills: Excellent problem-solving ability, critical thinking, and attention to detail. Outstanding written and verbal communication.

    Education: BSc or higher in Mathematics, Statistics, Engineering, Computer Science, Life Science, or a related quantitative discipline.

     

    Nice to Have

    Cloud & Data Warehousing: Hands-on experience with cloud platforms (GCP, AWS, or others) and modern data warehouses such as BigQuery and Snowflake.


     

  • 23 views · 1 application · 12d

    Infrastructure Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced

    We are looking for a Senior Infrastructure Engineer to manage and improve our IT systems and cloud environments. You'll work closely with DevOps and security teams to ensure system availability and reliability.

     

    Details:
    Experience: 5 years 
    Schedule: Full time, remote
    Start: ASAP
    English: Fluent
    Employment: B2B Contract

     

    Responsibilities:

    • Design, deploy, and manage infrastructure environments
    • Automate deployments using Terraform, Ansible, etc.
    • Monitor and improve system performance and availability
    • Implement disaster recovery plans
    • Support troubleshooting across environments

     

    Requirements:

    • Strong Linux administration background
    • Experience with AWS, GCP, or Azure
    • Proficiency with containerization tools (Docker, Kubernetes)
    • Infrastructure as Code (IaC) using Terraform or similar
    • Scripting skills in Python, Bash, etc.
  • 36 views · 1 application · 27d

    Data Quality Engineer

    Office Work · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate MilTech 🪖

    We're building a large-scale data analytics ecosystem powered by Microsoft Azure and Power BI. Our team integrates, transforms, and visualizes data from multiple sources to support critical business decisions. Data quality is one of our top priorities, and we're seeking an engineer who can help us enhance the reliability, transparency, and manageability of our data landscape.

    Your responsibilities: 

    • Develop and maintain data quality monitoring frameworks within the Azure ecosystem (Data Factory, Data Lake, Databricks). 
    • Design and implement data quality checks, including validation, profiling, cleansing, and standardization (a minimal sketch follows this list).
    • Detect data anomalies and design alerting systems (rules, thresholds, automation). 
    • Collaborate with Data Engineers, Analysts, and Business stakeholders to define data quality criteria and expectations. 
    • Ensure high data accuracy and integrity for Power BI reports and dashboards. 
    • Document data validation processes and recommend improvements to data sources. 
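    As a rough illustration of such checks, a small pandas sketch that profiles a batch for completeness, uniqueness, and validity and reports violations; the column names and thresholds are hypothetical.

```python
import pandas as pd

def run_quality_checks(batch: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality violations."""
    violations = []

    # Completeness: no more than 1% missing customer IDs.
    null_rate = batch["customer_id"].isna().mean()
    if null_rate > 0.01:
        violations.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")

    # Uniqueness: order_id must not repeat within the batch.
    dupes = batch["order_id"].duplicated().sum()
    if dupes:
        violations.append(f"{dupes} duplicate order_id values")

    # Validity: amounts must be non-negative.
    if (batch["amount"] < 0).any():
        violations.append("negative values found in amount")

    return violations

batch = pd.DataFrame(
    {"customer_id": [1, 2, None], "order_id": [10, 10, 11], "amount": [5.0, -1.0, 9.9]}
)
for problem in run_quality_checks(batch):
    print("ALERT:", problem)
```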

    Requirements: 

    • 3+ years of experience in a Data Quality, Data Engineering, or BI Engineering role. 
    • Hands-on experience with Microsoft Azure services (Data Factory, SQL Database, Data Lake). 
    • Advanced SQL skills (complex queries, optimization, data validation). 
    • Familiarity with Power BI or similar BI tools. 
    • Understanding of DWH principles and ETL/ELT pipelines. 
    • Experience with data quality frameworks and metrics (completeness, consistency, timeliness). 
    • Knowledge of Data Governance, Master Data Management, and Data Lineage concepts. 

    Would be a plus: 

    • Experience with Databricks or Apache Spark. 
    • DAX and Power Query (M) knowledge. 
    • Familiarity with DataOps or DevOps principles in a data environment. 
    • Experience in creating automated data quality dashboards in Power BI. 

     

  • 30 views · 1 application · 5d

    Data Engineer

    Hybrid Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate

    We are looking for a Data Engineer to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.

     

    What you will do

    • Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    • Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
    • Implement NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering toxic content, de-duplication, de-noising, and detecting and removing personal data.
    • Form SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.
    • Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    • Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles (a minimal DAG sketch follows this list).
    • Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    • Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    • Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    • Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
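    To ground the orchestration bullet above, a minimal Airflow DAG sketch wiring three of the stages this posting describes; the task bodies are stubs, and the dag_id and schedule are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def crawl_sources():      # stub: fetch new raw text
    ...

def clean_corpus():       # stub: de-duplicate, filter, strip PII
    ...

def publish_dataset():    # stub: write curated shards to storage
    ...

with DAG(
    dag_id="ukrainian_corpus_pipeline",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    crawl = PythonOperator(task_id="crawl_sources", python_callable=crawl_sources)
    clean = PythonOperator(task_id="clean_corpus", python_callable=clean_corpus)
    publish = PythonOperator(task_id="publish_dataset", python_callable=publish_dataset)

    crawl >> clean >> publish
```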

     

    Qualifications and experience needed

    • Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    • NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project's focus. Understanding of FineWeb2 or a similar processing pipeline approach.
    • Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    • Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    • Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
    • Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    • Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    • Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

     

    A plus would be

    • Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    • Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents (a minimal scraping sketch follows this list).
    • CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    • Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    • Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
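    A minimal sketch of the scraping-and-cleaning step named above, using requests and Beautiful Soup; the URL and filtering rule are hypothetical, and a production crawler would add rate limiting, retries, and robots.txt handling.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical source page, for illustration only.
url = "https://example.com/news/archive"
resp = requests.get(url, headers={"User-Agent": "corpus-bot/0.1"}, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Drop navigation/script noise, then collect paragraph text.
for tag in soup(["script", "style", "nav", "footer"]):
    tag.decompose()

paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
clean_text = "\n".join(p for p in paragraphs if len(p.split()) > 3)
print(clean_text[:500])
```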

     

    What we offer

    • Office or remote: it's up to you. You can work from anywhere, and we will arrange your workplace.
    • Remote onboarding.
    • Performance bonuses.
    • We train employees with the opportunity to learn through the company's library, internal resources, and programs from partners.
    • Health and life insurance.  
    • Wellbeing program and corporate psychologist.  
    • Reimbursement of expenses for Kyivstar mobile communication.  
  • 8 views · 0 applications · 18d

    IT Infrastructure Administrator

    Office Work · Ukraine (Dnipro) · Product · 1 year of experience

    Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.

    Key responsibilities:

    • Administration of Active Directory
    • Managing group policies
    • Managing services via PowerShell
    • Administration of the VMware platform
    • Administration of Azure Active Directory
    • Administration of Exchange 2016/2019 mail servers
    • Administration of Exchange Online
    • Administration of VMware Horizon View

    Required professional knowledge and skills:

    • Experience in writing automation scripts (PowerShell, Python, etc.)
    • Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
    • Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
    • Experience with Veeam Backup & Replication, VMware vSphere (vCenter, DRS, vMotion, HA), and VMware Horizon View
    • Windows Server 2019/2025 (installation, configuration, and adaptation)
    • Diagnostics and troubleshooting
    • Working with anti-spam systems
    • Managing mail transport systems (Exim) and monitoring systems (Zabbix)

    We offer:

    • Interesting projects and tasks
    • Competitive salary (discussed during the interview)
    • Convenient work schedule: Mon–Fri, 9:00–18:00; partial remote work possible
    • Official employment, paid vacation, and sick leave
    • Probation period: 2 months
    • Professional growth and training (internal training, reimbursement for external training programs)
    • Discounts on Biosphere Corporation products
    • Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)

    Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).

    Learn more about Biosphere Corporation, our strategy, mission, and values at:
    http://biosphere-corp.com/
    https://www.facebook.com/biosphere.corporation/

    Join our team of professionals!

    By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
    If your application is successful, we will contact you within 1-2 business days.

  • 23 views · 1 application · 27d

    PHP developer/ Data Engineer

    Hybrid Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B1 - Intermediate Ukrainian Product 🇺🇦

    Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
    Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return.

     

    Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.

     

    Requirements:

    • Ability to design and develop scalable backend services using PHP 7/8;
    • Strong understanding of OOP concepts, design patterns, and clean code principles;
    • Extensive experience in MySQL, with expertise in database design, query optimization, and indexing;
    • Experience working with NoSQL databases (e.g., Redis);
    • Proven experience working on high-load projects;
    • Understanding of ETL processes and data integration;
    • Experience working with ClickHouse (a minimal sketch follows this list);
    • Strong experience with API development;
    • Strong knowledge of Symfony 6+ and Yii2;
    • Experience with RabbitMQ.
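    Since the other examples in this document use Python (listed in this posting as a nice-to-have), a minimal sketch of loading and querying events in ClickHouse via the clickhouse-driver package; the table and columns are hypothetical.

```python
from datetime import datetime

from clickhouse_driver import Client

client = Client(host="localhost")  # hypothetical ClickHouse host

# Hypothetical events table, for illustration only.
client.execute(
    """
    CREATE TABLE IF NOT EXISTS events (
        ts DateTime,
        user_id UInt64,
        name String
    ) ENGINE = MergeTree ORDER BY ts
    """
)

# Batch insert: clickhouse-driver sends rows for a VALUES clause.
rows = [(datetime.utcnow(), 1, "export_photo"), (datetime.utcnow(), 2, "apply_preset")]
client.execute("INSERT INTO events (ts, user_id, name) VALUES", rows)

# Simple aggregation query.
for name, count in client.execute("SELECT name, count() FROM events GROUP BY name"):
    print(name, count)
```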

     

    Nice to Have:

    • AWS services
    • Payment API (Stripe, SolidGate etc.)
    • Docker, GitLab CI
    • Python

     

    Responsibilities:

    • Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
    • API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
    • Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
    • Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
    • Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.

     

    What we offer:

    For personal growth:

    • A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
    • An educational allowance to ensure that your skills stay sharp;
    • English and German classes to strengthen your capabilities and widen your knowledge.

    For comfort:

    • A great environment where you'll work with true professionals and amazing colleagues whom you'll call friends quickly;
    • The choice of working remotely or in our office space located on Podil, equipped with everything you might need for productive and comfortable work.

    For health:

    • Medical insurance;
    • Twenty-one days of paid sick leave per year;
    • Healthy fruit snacks full of vitamins to keep you energized

    For leisure:

    • Twenty-one days of paid vacation per year;
    • Fun times at our frequent team-building activities.