Jobs

  • Β· 12 views Β· 0 applications Β· 6h

    Data Engineer

    Hybrid Remote Β· Slovakia Β· 4 years of experience Β· Upper-Intermediate

    Now is an amazing time to join our company as we continue to empower innovators to change the world. We provide top-tier technology consulting, R&D, design, and software development services across the USA, UK, and EU markets. And this is where you come in!

    We are looking for a skilled Data Engineer to join our team.

    About the Project

    We’re launching a Snowflake Proof of Concept (PoC) for a leading football organization in Germany. The project aims to demonstrate how structured and well-managed data can support strategic decision-making in the sports domain.

    Key Responsibilities

    • Define data scope and identify data sources
    • Design and build the data architecture
    • Implement ETL pipelines into a data lake (see the sketch after this list)
    • Ensure data quality and consistency
    • Collaborate with stakeholders to define analytics needs
    • Deliver data visualizations using Power BI
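
    For illustration only, a minimal sketch of one such ETL step using the Snowflake Python connector; every credential, stage, and table name below is an invented placeholder, not a project detail.

        import snowflake.connector

        # Assumption: placeholder credentials; the client supplies the real
        # account, warehouse, and stage. RAW.MATCH_EVENTS is assumed to hold
        # a single VARIANT column named "raw".
        conn = snowflake.connector.connect(
            account="example_account",
            user="etl_user",
            password="***",
            warehouse="POC_WH",
            database="FOOTBALL_POC",
        )
        cur = conn.cursor()
        # Land staged JSON files (e.g., match event feeds) in the raw layer.
        cur.execute(
            "COPY INTO RAW.MATCH_EVENTS FROM @RAW.EVENT_STAGE "
            "FILE_FORMAT = (TYPE = 'JSON')"
        )
        # Flatten the raw VARIANT column into a curated table Power BI can read.
        cur.execute("""
            CREATE OR REPLACE TABLE CURATED.MATCH_EVENTS AS
            SELECT
                raw:match_id::STRING             AS match_id,
                raw:event_type::STRING           AS event_type,
                TRY_TO_TIMESTAMP(raw:ts::STRING) AS event_ts
            FROM RAW.MATCH_EVENTS
        """)
        cur.close()
        conn.close()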

    Required Skills

    • Strong experience with Snowflake, ETL pipelines, and data lakes
    • Power BI proficiency
    • Knowledge of data architecture and modeling
    • Data quality assurance expertise
    • Solid communication in English (B2+)

    Nice to Have

    • Familiarity with GDPR
    • Experience in sports or media-related data projects
    • Experience with short-term PoCs and agile delivery

    What We Offer

    • Contract for the PoC phase with potential long-term involvement
    • All cloud resources and licenses provided by the client
    • Hybrid/onsite work in Bratislava
    • Opportunity to join a meaningful data-driven sports project with European visibility

    πŸ“¬ Interested? Send us your CV and hourly rate (EUR).

    We’re prioritizing candidates based in Bratislava or elsewhere in Europe.

    Interview Process:

    1️⃣ Internal technical interview
    2️⃣ Interview with the client

  • Β· 7 views Β· 0 applications Β· 7h

    Senior Data Engineer

    Full Remote Β· Ukraine Β· 4 years of experience Β· Upper-Intermediate

    Project Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance.


    Job Description

    - Strong experience designing, building, and maintaining data pipelines using Azure Data Factory (ADF) for data ingestion and processing, and leveraging Databricks for data transformation and analytical workloads
    - Strong experience with the Databricks Auto Loader, ingesting from Cosmos DB/blob storage (see the sketch after this list)
    - Design, create, and maintain data pipelines that leverage Delta tables for efficient data storage and processing within a Databricks environment
    - Experience in provisioning Databricks/Synapse/ADF workspaces
    - Experience with RDBMS, such as PostgreSQL or MySQL, as well as NoSQL
    - Strong experience with Azure Data Factory (ADF)
    - Preferred: experience with analytical tools (Splunk, Kibana) and the ability to pick up, work with, and explore new analytical tools
    - Data modeling and schema design
    - Proven understanding of Azure and demonstrable implementation experience with it
    - Excellent interpersonal and teamwork skills
    - Strong problem-solving, troubleshooting, and analysis skills
    - Good knowledge of Agile Scrum
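
    For illustration of the Auto Loader bullet above, a minimal PySpark sketch that streams newly arrived JSON blobs into a Delta table; all paths and table names are hypothetical, and `spark` is the session a Databricks workspace provides.

        # Assumption: JSON files land in a mounted container; names are invented.
        df = (
            spark.readStream.format("cloudFiles")            # Auto Loader source
            .option("cloudFiles.format", "json")
            .option("cloudFiles.schemaLocation", "/mnt/schemas/claims")
            .load("/mnt/raw/claims/")
        )

        (
            df.writeStream.format("delta")
            .option("checkpointLocation", "/mnt/checkpoints/claims")
            .trigger(availableNow=True)                      # incremental batch-style run
            .toTable("bronze.claims")
        )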


    Job Responsibilities

    - Design and implement key components of the system.
    - Take ownership of features and lead design decisions.
    - Peer-review code and provide constructive feedback.
    - Take part in defining technical strategies and best practices for the team.
    - Assist with backlog refinement and estimation at the story level.
    - Identify and resolve bottlenecks in the development process (such as performance bottlenecks).
    - Solve complex tasks without supervision.

  • Β· 83 views Β· 14 applications Β· 8h

    Data Support / Junior Data Engineer to $600

    Full Remote Β· Ukraine Β· Product Β· 0.5 years of experience

    Hi! We have an open Data Support / Junior Data Engineer vacancy on a pro-Ukrainian project.
    Company: product IT, under NDA. The company was founded in 2022 and has investment backing.
    The project is already live and currently needs support only, so this can be a good start for a beginner: a chance to get commercial experience on the CV and settle into the role.

    πŸ“Œ What we expect from the specialist:
    • 6+ months of commercial experience with Python
    • Knowledge of Linux, Pandas, and ETL processes
    • Comfortable working in VS Code
    • Experience with DuckDB
    • Elasticsearch would be a plus
    • English is not required.

    Typical tasks (a short list from the client):
    1️⃣ Processing structured and unstructured data.
    2️⃣ Data aggregation and transformation (see the sketch after this list).
    3️⃣ Optimizing ETL processes.
    4️⃣ Extending functionality and ongoing maintenance.
    5️⃣ Managing the Elasticsearch ecosystem (optional).
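
    As a rough illustration of task 2️⃣, a minimal sketch with DuckDB and Pandas, the tools named above; the file, table, and column names are invented for the example.

        import duckdb

        con = duckdb.connect("support.duckdb")
        # Load a raw CSV dump into a table (hypothetical file name).
        con.execute(
            "CREATE OR REPLACE TABLE records AS "
            "SELECT * FROM read_csv_auto('dump.csv')"
        )
        # Aggregate and pull the result into a Pandas DataFrame for inspection.
        df = con.execute(
            "SELECT source, count(*) AS n FROM records GROUP BY source"
        ).df()
        print(df.head())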

    Task description from the team (a quote from the developer):
    "This is a developer-support position. No programming is required. You will need to load databases and configure search over them through a web interface. Overall, the tool already works; it only needs to be scaled."

    Terms:
    • Salary ~$600
    • Flexible schedule, no time trackers
    • A small team and reasonable tasks (which makes it easier to settle in, with no crunches, racing, or stress)
    • The opportunity to influence the development of our country, strengthening and defending it with your own mind.

    Selection stages:
    1. Screening call: 20-30 minutes.
    2. A small test task (optional).
    3. Interview with the developer (the only one on the project) and the PM: 30-60 minutes.
    4. Final / offer: a short conversation with the project lead, 15-20 minutes.

  • Β· 28 views Β· 2 applications Β· 12h

    Senior Data Engineer

    Full Remote Β· Countries of Europe or Ukraine Β· 5 years of experience Β· Upper-Intermediate

    We are building a greenfield MVP for a healthcare analytics platform focused on patient-level insights from large-scale pharmacy claims data. The platform is being developed by a newly formed client company that already has customer interest, and it will be used for real-time patient analytics at the point of service.

     

    All data is ingested via Snowflake Share from external vendors (no ingestion layer needed) and processed through a typical ETL pipeline to create a final patient-level dataset (~300M rows). This normalized output will be loaded into a PostgreSQL database (or comparable RDBMS; final tooling to be confirmed) and served via a low-latency REST API.

     

    Key pipeline stages include:

    • Standardization (cleansing, mapping, enrichment using BIN/PCN lookups)
    • Projection and extrapolation using a simple classification model or proximity search
    • Summarization to per-patient records

       

    The data is updated weekly (a batch-based system). We are not building the ML model, but we must integrate with it and support its output. The system will initially serve two core API endpoints (sketched after this list):

    1. Given patient info, return plan/coverage info with confidence score
    2. Given patient info, return medical history
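
    For illustration only, a minimal sketch of how those two endpoints might look, assuming the summarized patient-level tables already sit in PostgreSQL; the framework choice (FastAPI), connection details, and all table/column names are invented assumptions, not the confirmed design.

        from fastapi import FastAPI
        import psycopg2

        app = FastAPI()
        conn = psycopg2.connect("dbname=analytics user=api")  # placeholder DSN

        @app.get("/patients/{patient_id}/coverage")
        def coverage(patient_id: str):
            # Endpoint 1: plan/coverage info with a confidence score.
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT plan_id, confidence FROM patient_coverage "
                    "WHERE patient_id = %s",
                    (patient_id,),
                )
                row = cur.fetchone()
            return {"plan_id": row[0], "confidence": row[1]} if row else {}

        @app.get("/patients/{patient_id}/history")
        def history(patient_id: str):
            # Endpoint 2: the patient's medical history.
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT claim_date, ndc FROM patient_history "
                    "WHERE patient_id = %s",
                    (patient_id,),
                )
                return [{"claim_date": str(d), "ndc": n} for d, n in cur.fetchall()]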

       

    You will be part of a lean, senior-level engineering team and expected to own key parts of the ETL and data modeling effort.


    Key Responsibilities

    • Build performant and scalable ETL pipelines in Snowflake, transforming wide raw claims datasets into normalized outputs
    • Apply cleansing, mapping, and enrichment logic (e.g., payer enrichment via BIN/PCN lookups)
    • Collaborate on projection/extrapolation workflows, integrating with classification models or rules-based engines
    • Load processed outputs into PostgreSQL to power real-time REST API endpoints
    • Tune Snowflake queries for cost-efficiency and speed, and optimize workloads for batch processing (~weekly cadence)
    • Work closely with the API engineer to ensure alignment between data schema and API needs
    • Ensure data privacy, compliance, and PHI de-identification coordination (with Datavant)
    • Contribute to architectural decisions and the implementation roadmap in a fast-moving MVP cycle

       

    Requirements

    • 5+ years in data engineering or data platform development roles
    • Advanced SQL skills and experience working with wide, high-volume datasets (e.g., 100M+ rows)
    • Experience with Snowflake or readiness to quickly ramp up on it, including performance tuning and familiarity with native features (streams, tasks, stages)
    • Proficiency in Python for scripting, orchestration, and integration
    • Experience working with batch pipelines and familiarity with data warehousing best practices
    • Solid understanding of ETL design patterns and ability to work independently in a small, fast-paced team
    • Awareness of data compliance standards (HIPAA, PHI de-identification workflows)

       

    Preferred Qualifications

    • Experience with Snowpark (Python) or other in-Snowflake processing tools
    • Familiarity with payer enrichment workflows or healthcare claims data
    • Previous use of classification models, vector similarity, or proximity-based data inference
    • Hands-on experience with AWS EC2 and S3, and with integrating cloud resources with Snowflake
    • Exposure to PostgreSQL and API integration for analytic workloads
  • Β· 3 views Β· 0 applications Β· 20h

    On-Site Data Center Engineer (Hyper-V and Infrastructure Upgrades)

    Full Remote Β· Countries of Europe or Ukraine Β· 5 years of experience Β· Intermediate

    Requirements:

    • 4+ years of hands-on experience managing on-premise data center infrastructure, including server hardware setup, troubleshooting, and virtualization (Hyper-V preferred)
    • Install, configure, and maintain physical servers for Hyper-V virtualization environments.
    • Experience with server hardware setup, including RAID and remote management tool configuration, BIOS/UEFI settings, and hardware diagnostics.
    • Troubleshoot and resolve hardware and network issues
    • Knowledge of critical data center infrastructure (Power configurations, HVAC, Cabling)
    • Demonstrated proficiency with software applications such as the Microsoft Office 365 suite and G Suite applications
    • Strong troubleshooting methodology and attention to detail
    • Working knowledge of HPE/Dell server platforms and Juniper or Arista networking equipment.
    • Ability to work on-site in London to support ongoing infrastructure upgrades.
    • Strong troubleshooting skills and ability to assist with real-time problem resolution.

    Nice-to-Have:

    • Experience with large-scale data center setups or expansions.
    • Strong understanding of performing maintenance on mission-critical power infrastructure in a live environment
    • Familiarity with networking and server provisioning.
    • Previous experience coordinating with remote teams to ensure smooth project execution.
    • Experience using Data Center Infrastructure Management (DCIM) tools to manage data center infrastructure
    • Experience managing vendors while working on data center build projects

    Key Responsibilities:

    • Assist with Hyper-V installations and configuration within the data center.
    • Set up and configure on-premise networks for server infrastructures.
    • Work closely with the remote engineering team to facilitate a smooth and efficient upgrade process.
    • Document infrastructure setups and procedures.
    • Provide on-site support to minimize travel requirements for the core team.
    • Identify and resolve any issues that arise during installations and upgrades.

    About the Project:

    This project focuses on optimizing the power infrastructure within a London-based data center while deploying Hyper-V installations. The goal is to leverage all remaining power resources efficiently, ensuring a seamless and accelerated implementation. Having an on-site contractor will reduce the need for frequent travel, speeding up the project timeline and ensuring smooth execution.

  • Β· 21 views Β· 1 application Β· 1d

    Senior Data Engineer

    Full Remote Β· Poland Β· Product Β· 5 years of experience Β· Upper-Intermediate

    Project

    Toshiba is the global market share leader in retail store technology. As retail’s first choice for integrated in-store solutions, Toshiba’s innovative technology enhances customer engagement, transforms the in-store experience, and accelerates the digital transformation of the retail industry. Today, Toshiba is in a position to define the dominant practices of retail automation and advance the future of retail.

    The product is aimed at comprehensive retail chain automation and covers all work processes of large retail chain operators. The product covers retail store management, warehouse management, payment systems integration, logistics management, hardware/software store automation, etc.
    The product is already adopted by the market, and the biggest US and global retail operators are among the clients.

     

    Technology Stack

    Azure Databricks, Apache Spark (PySpark), Delta Lake, ADF, Synapse, Python, SQL, Power BI, MongoDB/CosmosDB, PostgreSQL, Terraform, Jenkins

     

    What you will do

    We are looking for an experienced Azure Databricks Engineer to join our team and contribute to building and optimizing large-scale data solutions. You will be responsible for working with Azure Databricks and Power BI, writing efficient Python and SQL scripts, optimizing data workflows to ensure performance and scalability, and building meaningful reports.

     

    Must-have skills

    • Bachelor’s or Master’s degree in Data Science, Computer Science, or a related field.
    • 3+ years of experience as a Data Engineer or in a similar role.
    • Proven experience in data analysis, data warehousing, and data reporting.
    • Proven experience with Azure Databricks (Python, PyTorch) and Azure infrastructure
    • Experience with Business Intelligence tools like Power BI.
    • Proficiency in querying languages like SQL.
    • Strong problem-solving skills and attention to detail.
    • Proven ability to translate business requirements into technical solutions.

     

    Nice-to-have skills

    • Knowledge and experience in e-commerce/retail
  • Β· 23 views Β· 9 applications Β· 1d

    Principal Data Engineer

    Full Remote Β· Europe except Ukraine Β· 5 years of experience Β· Upper-Intermediate

    ➀ Tech stack: Python, Java, MS SQL, Oracle, PostgreSQL, MySQL, Scala, Kafka, Data

     

    ➀ Domain expertise:

    ➀ Location: Europe

    ➀ Expected date: ASAP

    ➀ Duration: Long-term

    ➀ Expiration: ASAP

     

    ➀ Description:

    Required Skills:

    - 5+ years in data engineering (Python, Scala, or Java).

    - Strong expertise in Apache Kafka for stream processing.

    - Experience with databases like PostgreSQL, MySQL, MSSQL, Oracle.

    - Familiarity with cloud platforms (AWS, Azure, GCP) and on-prem solutions.

    - Leadership skills to guide and mentor a team.

     

    Must-have: ClickHouse expertise and iGaming industry experience
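
    For illustration, a minimal sketch of querying ClickHouse from Python with the clickhouse-driver package; the host, table, and columns are invented, and in practice such a table would typically be fed from a Kafka stream.

        from clickhouse_driver import Client

        client = Client(host="clickhouse.internal")  # hypothetical host
        # Hourly bet counts over the last day (invented schema).
        rows = client.execute(
            """
            SELECT toStartOfHour(event_time) AS hour, count() AS bets
            FROM events
            WHERE event_type = 'bet'
            GROUP BY hour
            ORDER BY hour DESC
            LIMIT 24
            """
        )
        for hour, bets in rows:
            print(hour, bets)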

  • Β· 19 views Β· 0 applications Β· 1d

    Senior Python Data Engineer (only Ukraine)

    Ukraine Β· Product Β· 6 years of experience Β· Upper-Intermediate

    The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.

     

    Requirements:

     

    • At least 5 years of experience with Python
    • At least 3 years of experience processing structured terabyte-scale data (structured datasets of several hundred gigabytes at a time).
    • Solid experience with SQL and NoSQL stores (ideally the GCP storages Firestore, BigQuery, Bigtable, and/or Redis, Kafka), including advanced DML skills.
    • Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc.); see the sketch after this list.
    • Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark).
    • Experience in automated test creation (TDD).
    • Fluent spoken English.
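
    As a rough illustration of the OLAP bullet above, a minimal BigQuery sketch in Python; the dataset, table, and columns are invented, and credentials are assumed to come from the environment.

        from google.cloud import bigquery

        client = bigquery.Client()
        # Weekly per-customer event counts over an invented events table.
        query = """
            SELECT customer_id, COUNT(*) AS events
            FROM `project.analytics.events`
            WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
            GROUP BY customer_id
            ORDER BY events DESC
            LIMIT 100
        """
        for row in client.query(query).result():
            print(row.customer_id, row.events)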

       

    Advantages:

     

    • Being unafraid of mathematical algorithms (part of our team’s responsibility is developing ML models for data analysis; although ML knowledge is not required for this position, it would be awesome if you felt some passion for algorithms).
    • Experience in any OOP language.
    • Experience in DevOps (Familiarity with Docker and Kubernetes).
    • Experience with GCP services would be a plus.
    • Experience with IaC would be a plus.
    • Experience in Scala.

     

    What we offer:

    • 20 working days’ vacation; 
    • 10 paid sick leaves;
    • public holidays;
    • equipment;
    • accountant helps with documents;
    • many cool team activities.

     

    Apply now and start a new page of your fast career growth with us!

  • Β· 25 views Β· 2 applications Β· 1d

    Data engineer (relocation to Berlin)

    Office Work Β· Germany Β· 5 years of experience Β· Upper-Intermediate

    At TechBiz Global, we provide recruitment services to the TOP clients in our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

     

    About the Data Solution Team

    As a Data Engineer, you will join our Data Solution Team, which drives our data-driven innovation. The team is pivotal to powering our business processes and enhancing customer experiences through effective data utilization. Our focus areas include:
     

    ● Developing integrations between systems.

    ● Analyzing customer data to derive actionable insights.

    ● Improving customer experience by leveraging statistical and machine learning models.

    Our tech stack includes:

    ● Cloud & Infrastructure: AWS (S3, EKS, Quicksight, and monitoring tools).

    ● Data Engineering & Analytics: Apache Spark (Scala and PySpark on Databricks), Apache Kafka (Confluent Cloud).

    ● Infrastructure as Code: Terraform.

    ● Development & Collaboration: BitBucket, Jira.

    ● Integration Tools & APIs: Segment.io, Blueshift, Zendesk, Google Maps API, and other external systems

     

    Job requirements

    As a Data Engineer, you will:

    ● Design, build, and own near-real-time and batch data processing workflows.

    ● Develop efficient, low-latency data pipelines and systems (see the sketch after this list).

    ● Maintain high data quality while ensuring GDPR compliance.

    ● Analyze customer data and extract insights to drive business decisions.

    ● Collaborate with Product, Backend, Marketing, and other teams to deliver impactful features.

    ● Help data scientists deliver ML/AI solutions.
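
    For illustration only, a minimal PySpark sketch of a near-real-time pipeline of the kind described above, reading a Kafka topic and writing to storage; the broker, topic, and paths are invented (the job assumes the spark-sql-kafka package on the classpath).

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import col

        spark = SparkSession.builder.appName("events-ingest").getOrCreate()

        events = (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
            .option("subscribe", "customer-events")            # hypothetical topic
            .load()
            .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
        )

        (
            events.writeStream.format("parquet")
            .option("path", "s3://bucket/events/")
            .option("checkpointLocation", "s3://bucket/checkpoints/events/")
            .start()
        )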

     

    Requirements:

    ● 5+ years of experience as a Data Engineer, with expertise in Apache Spark using Python and Scala.

    ● 3+ years of experience with Apache Kafka.

    ● Management experience or Tech Lead experience

    ● Strong proficiency in SQL.

    ● Experience with CI/CD processes and platforms.

    ● Hands-on experience with cloud technologies such as AWS, GCP or Azure.

    ● Familiarity with Terraform.

    ● Comfortable working in an agile environment.

    ● Excellent problem-solving and self-learning skills, with the ability to operate both independently and as part of a team.

     

    Nice to have:

    ● Hands-on experience with Databricks.

    ● Experience with document databases, particularly Amazon DocumentDB.

    ● Familiarity with handling high-risk data.

    ● Exposure to BI tools such as AWS Quicksight or Redash.

    ● Work experience in a Software B2C company, especially in the FinTech industry.

     

    What we offer:

    Our goal is to set up a great working environment. Become part of the process and:

    ● Shape the future of our organization as part of the international founding team.

    ● Take on responsibility from day one.

    ● Benefit from various coaching and training opportunities, including a Sports Subscription, German classes, and a €1000 yearly self-development budget.

    ● Work in a hybrid working model from our comfortable Berlin office

    ● Enjoy a modern workplace in the heart of Berlin with drinks, fresh fruit, a kicker table, and ping pong

  • Β· 16 views Β· 0 applications Β· 1d

    DataOps Team Lead to $6500

    Poland Β· Product Β· 3 years of experience Β· Upper-Intermediate

    Only Krakow!!!

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product:

    Bigabid is on a mission to fuel mobile app growth through AI-powered prediction and precision targeting. With over 50TB of data processed daily, 4M+ requests per second, and 1B+ unique users reached weekly, they’re scaling like the biggest names in tech – but with startup agility.

    At Bigabid, they’re building intelligent systems that predict which mobile apps users will love – and connect them at the perfect moment. Their real-time user modeling engine continuously ingests and analyzes massive streams of behavioral data to power highly targeted ad campaigns with exceptional performance. As a result, they are seeing remarkable growth and close to zero customer churn. To support their hyper-growth and continue propelling the growth of some of the biggest names in the mobile industry, they offer a wide range of opportunities for different skill levels and experiences. 

     

    About the Role:

    We’re now looking for a DataOps Team Lead to own the stability, observability, and quality of the data pipelines and processes. This is a 50/50 hands-on and leadership role, perfect for someone who thrives at the intersection of data engineering, operational excellence, and cross-functional collaboration.

    You’ll be part of the mission-critical DataOps layer, ensuring data health and reliability across a product that directly influences business outcomes. You’ll support the engine that empowers some of the world’s top app developers to grow smarter and faster.

     

    Key Responsibilities: 

    • Lead a DataOps team of 2 (and growing), with ownership over Bigabid’s core data quality and observability processes
    • Build, maintain, and monitor robust data pipelines and workflows (Python + Airflow); see the sketch after this list
    • Act as the go-to person for identifying and resolving data issues affecting production systems
    • Coordinate with multiple teams: Data Engineering, Product, BI, Operations, Backend, and occasionally Data Science
    • Own projects such as metadata store development, anomaly detection systems, and scalable data quality frameworks
    • Balance strategic project leadership with hands-on scripting, debugging, and optimizations
    • Promote a culture of quality, reliability, and clear communication in a fast-moving, high-volume environment.
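
    A minimal sketch of the kind of Airflow workflow this role would own: a scheduled data-quality check that fails loudly on a mismatch. The DAG id, schedule, and check logic are illustrative assumptions (Airflow 2.4+ API; older versions use schedule_interval).

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def check_row_counts():
            # Assumption: a real check would compare source vs. target counts;
            # raising here fails the task and surfaces the incident.
            source_count, target_count = 100, 100  # placeholders
            assert source_count == target_count, "row count mismatch"

        with DAG(
            dag_id="data_quality_checks",
            start_date=datetime(2024, 1, 1),
            schedule="@hourly",
            catchup=False,
        ) as dag:
            PythonOperator(task_id="check_row_counts", python_callable=check_row_counts)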

     

    Required Competence and Skills:

    • 3+ years of experience in data engineering/data operations, with at least 1 year of team or project leadership
    • Proficient in Python for scripting and automation (clean, logical code – not full-stack development)
    • Strong experience with Airflow (hands-on, not through abstraction layers)
    • Solid understanding of SQL and NoSQL querying, schema design, and cost-efficient querying (e.g., Presto, document DBs)
    • Exposure to tools like Spark, AWS, or similar is a big plus
    • Comfortable managing incident escalation, prioritizing urgent fixes, and guiding teams toward solutions
    • Analytical, communicative, and excited to work with smart, mission-driven people

    Nice-to-Have Skills:

    • Previous experience as a NOC or DevOps engineer 
    • Familiarity with PySpark.
  • Β· 31 views Β· 0 applications Β· 1d

    Middle Data Support Engineer (Python, SQL)

    Ukraine Β· 3 years of experience Β· Upper-Intermediate

    N-iX is looking for a Middle Data Support Engineer to join our team. Our customer is the leading school transportation solutions provider in North America. Every day, the company completes 5 million student journeys, moving more passengers than all U.S. airlines combined, and delivers reliable, quality services, including full-service transportation and management, special-needs transportation, route optimization and scheduling, maintenance, and charter services for 1,100 school district contracts.

     

    Responsibilities:

    • Provide support in production and non-production environments (Azure cloud);
    • Install, configure, and provide day-to-day support after implementation, including off-hours as needed;
    • Troubleshoot defects and errors, and resolve problems as they arise;
    • Plan, test, and implement server upgrades, maintenance fixes, and vendor-supplied patches;
    • Help resolve incidents;
    • Monitor ETL jobs;
    • Perform small enhancements (Azure/SQL).

       

    Requirements:

    • Proven knowledge of and 3+ years of experience with Python;
    • Proficiency in RDBMS systems (MS SQL experience is a plus);
    • Experience with Azure cloud services;
    • Understanding of Azure Data Lake / Storage Accounts;
    • Experience creating and managing data pipelines in Azure Data Factory;
    • Upper-Intermediate/Advanced English level.

       

    Nice to have:

    • Experience with administration of Windows Server 2012 and higher;
    • Experience with AWS, Snowflake, Power BI;
    • Experience with technical support;
    • Experience in .Net.

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • Β· 22 views Β· 1 application Β· 1d

    Senior Data Engineer

    Full Remote Β· Poland Β· 5 years of experience Β· Upper-Intermediate

    As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.

     

    Key Responsibilities:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources (see the sketch after this list).
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
    • Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
    • Contribute to the development of best practices and standards for data engineering processes.
    • Stay updated on emerging technologies and trends in the data engineering landscape.
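
    For illustration, a minimal batch ETL sketch as it might run on EMR or Glue: read raw data, clean it, and write partitioned Parquet. All paths and columns are invented for the example.

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import col, to_date

        spark = SparkSession.builder.appName("daily-etl").getOrCreate()

        orders = spark.read.json("s3://raw-bucket/orders/")  # hypothetical path
        clean = (
            orders.dropDuplicates(["order_id"])
            .withColumn("order_date", to_date(col("created_at")))
            .filter(col("status").isNotNull())
        )
        (
            clean.write.mode("overwrite")
            .partitionBy("order_date")
            .parquet("s3://curated-bucket/orders/")
        )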

     

    Required Skills and Qualifications:

    • Bachelor's degree in Computer Science or a related field.
    • Minimum of 5 years of experience in data engineering or architecture roles, including tech lead responsibilities.
    • Proficiency in Python and PySpark for ETL development and data processing.
    • At least 2 years of experience with the AWS cloud.
    • Extensive experience with cloud-based data platforms, particularly EMR.
    • Solid working knowledge of Spark.
    • Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
    • Leadership experience, with a proven track record of leading data engineering teams.

     

    Benefits

     

    • 20 days of paid vacation, 5 sick days
    • National holidays observed
    • Company-provided laptop


  • Β· 72 views Β· 3 applications Β· 1d

    Middle Data Engineer (Prom.ua)

    Full Remote Β· Ukraine Β· Product Β· 2 years of experience Ukrainian Product πŸ‡ΊπŸ‡¦

    Prom.ua is Ukraine's largest marketplace, selling over 200 million products from tens of thousands of entrepreneurs across the whole country.

     

    On Prom.ua:

    • every buyer can find everything they need at the best price: from a toothbrush to a cultivator for the garden.
    • every entrepreneur can sell products in the marketplace catalog, on a site built on the Prom platform, and in the "Prom покупки" mobile app.

       

    Prom.ua in numbers:

    • 4.8 million people visit the marketplace every day
    • more than 60 thousand companies work on the marketplace
    • the catalog holds 200 million products

     

    About the team:

    Data Analytics, Data Engineers, Product Analytics

     

    What we use in our work:

    • A Data Lakehouse with 200+ TB of data; we store data in HDFS and S3 and use the Apache Iceberg format for tabular data.
    • 30-40 people work with the warehouse directly, and hundreds consume its data (counting internal consumers alone).
    • We process data with Spark and Trino; orchestration runs in Airflow.
    • We mostly deploy tools/services to Kubernetes, sometimes to OpenStack.
    • We use GitLab as the code repository and for CI/CD.
    • We use Open Metadata for the catalog, documentation, and data monitoring, and Material for MkDocs for documentation on tools/services.

       

    What matters for this role:

    • A high level of proficiency in SQL and Python.
    • Deep experience with code-based orchestration tools, preferably Airflow, Prefect, or Dagster. In the case of Airflow, for example, understanding what XCom, Pool, Hook, Sensor, Operator, TaskGroup, and so on are.
    • Experience writing complex, idempotent pipelines for processing large volumes of data (see the sketch after this list).
    • Deep experience with at least two of the following data processing tools: (Spark/Databricks/Snowpark), (Trino/Presto/Athena), (Kafka/Kinesis/Flink), (Snowflake/BigQuery/Redshift).
    • Understanding the difference between a Data Warehouse, a Data Lake, and a Data Lakehouse.
    • Understanding how table formats work; ideally, experience with Iceberg, Delta Lake, or Hudi.
    • Knowledge of data organization and modeling principles: Medallion, Kimball, Inmon.
    • Experience with Docker, Kubernetes, and GitLab CI/CD.
    • Thinking systemically and broadly, with future prospects in mind, focusing on the goal rather than the task.
    • Aiming for high-quality, long-term results; in most cases quality matters more than speed.
    • Being open to change, as the product changes very actively.
    • Understanding that communication with technical and non-technical specialists is part of the job.
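
    As a rough illustration of the idempotent-pipeline point, a minimal PySpark sketch writing to an Apache Iceberg table, matching the stack above; the catalog, table, and path are invented, and the Iceberg catalog is assumed to be configured in the Spark session.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("orders-load").getOrCreate()

        # Hypothetical staging path for a single processing date.
        df = spark.read.parquet("s3a://staging/orders/dt=2024-01-01/")
        # Overwrite only the partitions present in df, so reruns are idempotent.
        df.writeTo("lakehouse.marts.orders").overwritePartitions()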

     

    Possible tasks:

    • Supporting and developing the Data Lakehouse for the Prom+ projects, which includes:
      • Designing the warehouse and filling it with the data the project teams need.
      • Writing pipelines for processing/updating data in the warehouse.
      • Refactoring and modifying existing pipelines.
      • Writing tests and monitoring data quality.
      • Maintaining the warehouse documentation.
    • Supporting part of the data infrastructure (Airflow, Trino, Open Metadata): configuration, upgrades, and deployment.
    • Helping and advising the Data Analytics and Data Science teams on retrieving and processing data.
    • Code review of pipelines from the Data Analytics and Data Engineers teams.

     

    Selection stages:

    • Intro call
    • Technical interview
    • Final meeting (optional)

     

    About working at EVO:

    • Social package: official employment, 24 days of paid vacation per year, and unlimited sick leave, so you can rest and take care of your health.
    • Health care: we cover medical insurance and offer support from a corporate psychologist, since we believe caring for mental health is just as important as caring for physical health.
    • Flexible work format: remote or in the office. You can work remotely or visit our cozy Kyiv office, which is fully energy-independent and equipped with everything you need.
    • Volunteer community: we regularly hold charity auctions, raise money for reconnaissance drones, and support employees' volunteer initiatives.
    • We provide equal opportunities for everyone, so we do not tolerate discrimination on any grounds. We are also open to working with veterans and ready to support them on the path to new professional achievements.
    • Opportunities for learning and professional growth. Honesty and openness in all communication. Constructive feedback on results. Support from your lead and the team.
  • Β· 21 views Β· 1 application Β· 1d

    Senior Data Engineer/Lead Data Engineer (Healthcare domain)

    Full Remote Β· Europe except Ukraine Β· 5 years of experience Β· Upper-Intermediate

    We are looking for a Senior Data Engineer with extensive experience in data engineering who is passionate about making an impact. Join our team, where you will have the opportunity to drive innovation, improve solutions, and help us reach new heights!

    If you're ready to take your expertise to the next level and contribute significantly to the success of our projects, submit your resume now.

    Our client is a leading medical technology company. The portfolio of products, services, and solutions is central to clinical decision-making and treatment pathways. Patient-centered innovation has always been at the core of the company, which is committed to improving patient outcomes and experiences, no matter where they live or what challenges they face. The company is innovating sustainably to provide healthcare for everyone, everywhere.

    The Project’s mission is to enable healthcare providers to increase their value by equipping them with innovative technologies and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, and digital health and enterprise services.


    Responsibilities:

    • Work closely with the client (PO) as well as other team members to clarify tech requirements and expectations
    • Contribute to the design, development, and optimization of squad-specific data architecture and pipelines adhering to defined ETL and Data Lake principles
    • Implement architectures using Azure Cloud platforms (Data Factory, Databricks, Event Hub)
    • Discover, understand, and organize disparate data sources, structuring them into clean data models with clear, understandable schemas
    • Evaluate new tools for analytical data engineering or data science and suggest improvements
    • Contribute to training plans to improve analytical data engineering skills, standards, and processes


    Requirements:

    • Solid experience in data engineering and cloud computing services, specifically in the areas of data and analytics (Azure preferred)
    • Strong conceptual knowledge of data analytics fundamentals, including dimensional modeling, ETL, reporting tools, data governance, data warehousing, and handling both structured and unstructured data
    • Expertise in SQL and at least one programming language (Python/Scala)
    • Excellent communication skills and fluency in business English
    • Familiarity with Big Data DB technologies such as Snowflake, BigQuery, etc. (Snowflake preferred)
    • Experience with database development and data modeling, ideally with Databricks/Spark
  • Β· 101 views Β· 25 applications Β· 2d

    Middle Python / Data Engineer

    Part-time Β· Full Remote Β· Worldwide Β· 2 years of experience Β· Upper-Intermediate

    Involvement: ~15–20 hours/week
    Start Date: ASAP
    Location: Remote
    Client: USA-based
    Project: Legal IT – AI-powered legal advisory platform

     

    About the Project

    Join a growing team behind Legal IT, an intelligent legal advisory platform that simplifies legal support for businesses. The platform features:

    - A robust contract library

    - AI-assisted document generation & guidance

    - Interactive legal questionnaires

    - A dynamic legal blog with curated insights

     

    We’re building out advanced AI-driven proof-of-concepts (PoCs) and are looking for a strong Python/Data Engineer to support the backend logic and data pipelines powering these tools.

     

    Core Responsibility

    - Collaborate directly with the AI Architect to develop and iterate on proof-of-concept features as development continues

     

    Being a part of 3asoft means having:
    - High level of flexibility and freedom
    - p2p relationship with worldwide customers
    - Competitive compensation paid in USD
    - Fully remote working
