Jobs

  • 3 views · 0 applications · 9d

    Infrastructure Engineer with Java (hybrid work in Warsaw)

    Office Work · Poland · 5 years of experience · Upper-Intermediate

    The product we are working on is one of the TOP-3 navigation systems, together with complex web services and other related solutions. The web and mobile apps handle information at a massive scale and extend well beyond search, giving people and companies many new, useful options.

    This role focuses on executing critical migration projects within the backend infrastructure of the project. The Backend Infrastructure team is undertaking several large-scale migrations to modernize its systems, improve reliability, and reduce maintenance overhead. This TVC position will be instrumental in performing the hands-on work required for these migrations, working closely with the infrastructure team and other Backend teams.
     

    Responsibilities:
     

    • Execute Migrations: Actively participate in and drive the execution of large-scale code and system migrations across various backend services. Some examples include:
      • Migrating event processing systems from custom infrastructure to managed infrastructure solutions;
      • Transitioning services from custom OpenCensus metrics collection to OpenTelemetry;
      • Migrating custom metrics to standard OpenTelemetry metrics.
    • Code Modification and Updates: Update and refactor existing codebases (primarily Java) to align with new libraries, platforms, and infrastructure.
    • Testing: Work with the Infrastructure team to create testing plans for migrations that ensure changes do not break running services, and execute those plans.
    • Collaboration: Work closely with the Backend Infrastructure team and other software engineers to understand migration requirements, plan execution strategies, and ensure smooth transitions with minimal disruption.
    • Problem Solving: Investigate, debug, and resolve technical issues and complexities encountered during the migration processes.
    • Documentation: Maintain clear and concise documentation for migration plans, processes, changes made, and outcomes.
    • Best Practices: Adhere to software development best practices, ensure code quality, and follow established guidelines for infrastructure changes.

       

    Requirements:

    • 5+ years of hands-on experience in backend software development.
    • Strong proficiency in Java programming.
    • Strong communication and interpersonal skills, with the ability to collaborate effectively within a technical team environment.
    • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
    • Good spoken and written English level: Upper-Intermediate or higher.
       

    Nice to have:

    • Experience with observability frameworks such as OpenTelemetry or OpenCensus.
    • Familiarity with gRPC.
    • Knowledge of Google Cloud Platform (GCP) services, particularly data processing services like Dataflow.
       

    We offer:

    • Opportunities to develop in various areas;
    • Compensation package (20 paid vacation days, paid sick leaves);
    • Flexible working hours;
    • Medical insurance;
    • English courses with a native speaker, yoga (Zoom);
    • Paid tech training and other activities for professional growth;
    • Hybrid work mode (∼3 days in the office);
    • International business trips;
    • Comfortable office.

       

    If your qualifications and experience match the requirements of the position, our recruitment team will reach out to you within a week. Please rest assured that we carefully consider each candidate, but due to the volume of applications, the review and further processing of your candidacy may take some time.

  • 47 views · 0 applications · 8d

    Data Engineer

    Full Remote · Poland · Product · 5 years of experience · Upper-Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 

    Our client, Harmonya, develops an AI-powered product data enrichment, insights, and attribution platform for retailers and brands. Its proprietary technology processes millions of online product listings, extracting valuable insights from titles, descriptions, ingredients, consumer reviews, and more.

    Harmonya builds robust tools to help uncover insights about the consumer drivers of market performance, improve assortment and merchandising, categorize products, guide product innovation, and engage target audiences more effectively.

     

    About the Role: 

    We’re seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

     

    Key Responsibilities:

    • Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
    • Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
    • Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
    • Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
    • Apply best practices for data security, integrity, and performance across all systems.

    Required Competence and Skills:

    • 4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
    • Proven track record in designing, developing, and deploying complex data applications.
    • Hands-on experience with orchestration and processing tools (e.g. Apache Airflow and/or Apache Spark).
    • Experience with public cloud platforms (preferably GCP) and cloud-native data services.
    • Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
    • Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
    • Strong verbal and written communication skills in English.
    • Excellent communication skills and a strong team player, capable of working cross-functionally.

    Nice to have:

    • Familiarity with data science tools and libraries (e.g., pandas, scikit-learn).
    • Experience working with Docker and Kubernetes.
    • Hands-on experience with CI tools such as GitHub Actions
  • 35 views · 0 applications · 6d

    Data Engineer to $7500

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 

     

    Our client, Harmonya, develops an AI-powered product data enrichment, insights, and attribution platform for retailers and brands. Its proprietary technology processes millions of online product listings, extracting valuable insights from titles, descriptions, ingredients, consumer reviews, and more.

     

    Harmonya builds robust tools to help uncover insights about the consumer drivers of market performance, improve assortment and merchandising, categorize products, guide product innovation, and engage target audiences more effectively.

     

    About the Role: 


    We're seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines, making an impact in our collaborative and innovative workplace.

     

    Key Responsibilities: 

    • Design, implement, and optimize scalable data pipelines for efficient processing and analysis.
    • Build and maintain robust data acquisition systems to collect, process, and store data from diverse sources.
    • Collaborate with DevOps, Data Science, and Product teams to understand needs and deliver tailored data solutions.
    • Monitor data pipelines and production environments proactively to detect and resolve issues promptly.
    • Apply best practices for data security, integrity, and performance across all systems.

     

    Required Competence and Skills:

    • 4+ years of experience in data or backend engineering, with strong proficiency in Python for data tasks.
    • Proven track record in designing, developing, and deploying complex data applications.
    • Hands-on experience with orchestration and processing tools (e.g. Apache Airflow and/or Apache Spark).
    • Experience with public cloud platforms (preferably GCP) and cloud-native data services.
    • Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent practical experience).
    • Ability to perform under pressure and make strategic prioritization decisions in fast-paced environments.
    • Strong verbal and written communication skills in English.
    • Excellent communication skills and a strong team player, capable of working cross-functionally.

     

    Nice to have:

    • Familiarity with data science tools and libraries (e.g., pandas, scikit-learn).
    • Experience working with Docker and Kubernetes.
    • Hands-on experience with CI tools such as GitHub Actions

     

    Why Us?

    • 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
    • Full accounting and legal support in all countries we operate in.
    • A fully remote work model, with a powerful workstation and co-working space if you need it.
    • A highly competitive package with yearly performance and compensation reviews.

  • 34 views · 0 applications · 6d

    Senior AWS Data Engineer (IRC267111)

    Full Remote · Ukraine · 3 years of experience · Intermediate

    Job Description

    • Amazon Web Services (AWS) – Kinesis, DMS, EMR, Glue, Lambda, Athena, Redshift, S3
    • Strong expertise in building data ingestion, ETL/ELT pipelines and orchestration on AWS
    • Experience with Change Data Capture (CDC) and real-time streaming data architectures
    • Proficiency in SQL and data modeling
    • Solid understanding of distributed systems and big data processing
    • Python for data transformations and scripting
    • Hands-on experience with orchestration frameworks like AWS Step Functions or Data Pipeline

     

    Nice to Have

    • Apache Spark, Zeppelin, Hadoop ecosystem
    • Infrastructure-as-Code (IaC) using Terraform or CloudFormation
    • Experience with data quality frameworks and monitoring
    • Knowledge of HIPAA or similar healthcare data compliance standards
    • Experience working in highly regulated environments

     

    Job Responsibilities


    • Lead the design and development of scalable and reliable data pipelines in AWS using tools such as Kinesis, DMS, Glue, EMR, Athena, and Lambda.
    • Define the architecture of the data ingestion and processing layer, considering performance, cost-efficiency, and maintainability.
    • Collaborate closely with business stakeholders and technical teams to gather requirements, validate solutions, and align data flows with business needs.
    • Propose and validate technical approaches and best practices for CDC, ETL/ELT, orchestration, data storage, and query optimization.
    • Own the implementation of key data workflows, ensuring quality, security, and adherence to architectural standards.
    • Conduct technical reviews, mentor junior engineers, and foster a culture of continuous improvement.
    • Participate in strategic planning of the data platform and actively contribute to its evolution.
    • Monitor and optimize the performance of AWS-based data solutions, addressing bottlenecks and proposing enhancements.

     

    Department/Project Description

    The client is a pioneer in medical devices for less invasive surgical procedures, ranking as a leader in the market for coronary stents. The company’s medical devices are used in a variety of interventional medical specialties, including interventional cardiology, peripheral interventions, vascular surgery, electrophysiology, neurovascular intervention, oncology, endoscopy, urology, gynecology, and neuromodulation.
    The client’s mission is to improve the quality of patient care and the productivity of health care delivery through the development and advocacy of less-invasive medical devices and procedures. This is accomplished through the continuing refinement of existing products and procedures and the investigation and development of new technologies that can reduce risk, trauma, cost, procedure time and the need for aftercare.


     

  • 49 views · 1 application · 6d

    Senior Data Engineer

    Full Remote · Ukraine, Poland, Romania · 4 years of experience · Upper-Intermediate

    Job Description

    Required Qualifications

     

    • 4+ years of experience in Data Engineering, with hands-on ETL development.
    • Proven experience with Apache Airflow (DAG design, scheduling, monitoring), Apache NiFi.
    • Strong experience with Snowflake architecture, data migration, and performance optimization.
    • Proficient in SQL, Python, and working with REST APIs for data ingestion.
    • Experience with cloud environments (Azure or AWS).
    • Top-notch English written and verbal communication skills (the candidate will report to the USA-based PMO).

     

    Preferred Qualifications

    • Experience in data infrastructure modernization projects.
    • Exposure to CI/CD practices and DevOps collaboration.
    • Familiarity with integrating tools such as Azure DevOps, Snyk, OpsGenie, or Datadog
    • Previous working experience with PAM or Identity Management Solutions

     

    Job Responsibilities

    In the capacity of Senior Data Engineer, you will be expected to:

    • Design scalable data pipeline architectures to support real-time and batch processing.
    • Deploy and configure Apache Airflow for the orchestration of complex ETL workflows.
    • Develop Airflow DAGs for key integrations (a minimal illustrative sketch follows this list):
      - Azure DevOps (Work Items, build/release pipelines, commit data)
      - OpsGenie (incident and alert data)
      - Snyk (security vulnerability data)
      - Datadog (infrastructure monitoring and logs)
    • Migrate the existing data warehouse infrastructure and historical data to Snowflake.
    • Create documentation for data architecture, Airflow configurations, and DAGs.
    • Collaborate with Engineering and DevOps teams to align on integration and deployment strategies.
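    A minimal sketch of one such integration DAG in Apache Airflow, illustrative only: it assumes Airflow 2.x, and the DAG id, schedule, and the fetch/load helpers are hypothetical placeholders rather than the client's actual implementation.

```python
# Hypothetical sketch of an Azure DevOps -> Snowflake integration DAG (Airflow 2.x).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def fetch_work_items(**context):
    """Call the Azure DevOps REST API and stage the raw work-item JSON (placeholder)."""
    ...


def load_to_snowflake(**context):
    """Copy the staged extract into a Snowflake staging table (placeholder)."""
    ...


with DAG(
    dag_id="azure_devops_work_items",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_work_items", python_callable=fetch_work_items)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load                     # simple linear dependency: extract, then load
```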

     

    Department/Project Description

    The client is an international product company with a common goal: to redefine the legacy approach to Privileged Access Management by delivering multi-cloud-architected solutions that enable digital transformation at scale. The company establishes a root of trust and then grants least-privileged access just in time, based on verifying who is requesting access, the context of the request, and the risk of the access environment.

    The client's products centralize and orchestrate fragmented identities, improve audit and compliance visibility, and reduce risk, complexity, and costs for the modern, hybrid enterprise. Over half of the Fortune 100, the world’s largest financial institutions, intelligence agencies, and critical infrastructure companies, all trust this company to stop the leading cause of breaches – privileged credential abuse.        

    The client seeks an experienced Data Engineer to support the Engineering Team in building a modern, scalable data pipeline infrastructure. This role will focus on migrating existing ETL processes to Apache Airflow and transitioning the data warehouse to Snowflake. The engineer will be responsible for designing architecture, developing DAGs, and integrating data sources critical to operations, development, and security analytics.

    The position reports to the client's USA-based PMO and covers data analysis, reporting, database querying, and presenting results to stakeholders. The selected candidate will contribute to a distinguished PAM solution company and become an integral part of the reporting squad, the team responsible for analyzing, structuring, delivering, and presenting company data to Senior and VP stakeholders.

  • 12 views · 1 application · 5d

    Informatica and IDMC Data Expert / Senior Data Engineer (Informatica)

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Advanced/Fluent

    We’re looking for a hands-on data integration specialist with deep expertise in Informatica and practical experience with the Intelligent Data Management Cloud (IDMC). You will lead and participate in the development of scalable, secure, and high-performance data pipelines supporting enterprise-grade integration, quality, and governance initiatives.


    Key responsibilities:

    • Build and maintain ETL/ELT pipelines using Informatica PowerCenter, Informatica Cloud Data Integration, and related IDMC services.
    • Design and optimize data integration workflows across hybrid and multi-cloud environments.
    • Implement data quality rules, lineage tracking, and governance policies using IDMC modules (Data Quality, Data Catalog, Governance & Privacy, etc.).
    • Collaborate with data architects, analysts, and business users to deliver clean, trusted, and timely data.
    • Monitor pipeline performance, manage exceptions, and create robust logging, alerting, and retry mechanisms.
    • Produce technical documentation, mappings, and best-practice playbooks to enable scalable team adoption.


    Required skills & experience:

    • 4+ years building data pipelines using Informatica products, with at least 1–2 years in IDMC or Informatica Cloud environments.
    • Strong knowledge of data integration and ETL/ELT patterns in cloud and hybrid ecosystems.
    • Proficient in SQL and working with varied data sources (databases, APIs, files, SaaS platforms).
    • Experience with data quality, lineage, and metadata management using Informatica tools.
    • Familiarity with cloud platforms like GCP, AWS, Azure and associated data services.


    Nice to have:

    • Experience with IDMC services such as Cloud Application Integration, Informatica Data Governance, or Axon Data Governance.
    • Exposure to data privacy, masking, or compliance frameworks (e.g., GDPR, HIPAA).
    • Automation using Python, Shell, or DevOps CI/CD tools.
    • Certifications: Informatica Cloud Data Integration Professional, IDMC Specialist, or equivalent.
  • 52 views · 13 applications · 5d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

    Does this relate to you?

    • 5+ years of experience in Data Engineering or a related field
    • Strong expertise in SQL and data modeling concepts
    • Hands-on experience with Airflow
    • Experience working with Redshift
    • Proficiency in Python for data processing
    • Strong understanding of data governance, security, and compliance
    • Experience in implementing CI/CD pipelines for data workflows
    • Ability to work independently and collaboratively in an agile environment
    • Excellent problem-solving and analytical skills

     

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions
    • Build and optimize ETL/ELT pipelines for efficient data integration
    • Design and implement data models to support analytical and reporting needs
    • Ensure data integrity, quality, and security across all pipelines
    • Optimize data performance and scalability using best practices
    • Work with big data technologies such as Redshift
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
    • Implement CI/CD pipelines for data workflows
    • Monitor, troubleshoot, and improve data processes and system performance
    • Stay updated with industry trends and emerging technologies in data engineering


     

  • 85 views · 8 applications · 5d

    Data Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · Intermediate

    Novoplex is a group of companies that develop iGaming products and provide services in various areas of performance marketing.

    We are looking for an experienced Data Engineer to design efficient data workflows and ensure data reliability across our systems.

    Key Responsibilities:
    - Design, build, and maintain scalable data pipelines using Airflow and Python.
    - Develop and optimize SQL queries for data transformation, analysis, and reporting in BigQuery.
    - Ensure data quality, reliability, and integrity across the pipeline.
    - Automate data ingestion from various sources (APIs, cloud storage, etc.).
    - Monitor and troubleshoot data pipeline performance and failures.
    - Collaborate with analysts and stakeholders to understand data needs.
    - Implement data governance best practices (logging, monitoring, versioning).

    Requirements:
    - 2–4 years of experience as a data engineer or similar role.
    - Strong Python skills, especially for scripting and data processing.
    - Expertise in SQL (especially analytical queries and data modeling).
    - Experience with Google BigQuery: data loading, partitioning, and performance tuning (see the sketch after this list).
    - Solid understanding and experience with Apache Airflow: DAG design, scheduling, and troubleshooting.
    - Hands-on experience with Google Cloud Platform (GCP) services like:
    • Cloud Storage
    • Cloud Functions (optional)
    • Pub/Sub (nice to have)
    • Dataflow (bonus)
    - Familiarity with ETL/ELT best practices and orchestration patterns.
    - Experience working with version control systems (e.g., Git).
    - Comfortable working in CI/CD environments.
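    A minimal sketch, assuming the google-cloud-bigquery Python client and hypothetical bucket, dataset, and column names, of the partitioned, clustered loading mentioned in the BigQuery requirement above:

```python
# Hypothetical sketch: load newline-delimited JSON from Cloud Storage into a
# day-partitioned, clustered BigQuery table. All names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured in the environment

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    time_partitioning=bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="event_date",              # hypothetical partition column
    ),
    clustering_fields=["campaign_id"],   # hypothetical clustering key
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/events/2024-01-01/*.json",  # hypothetical source path
    "my_project.analytics.events",                   # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # block until the load job finishes
```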

    We offer:
    - 4-hour working day, Monday to Friday.
    - 10 working days of annual paid vacation.
    - 3 working days of sick leave per year without a medical certificate; unlimited with a medical certificate.
    - Working equipment provision.
    - Open-minded and engaged team. 


     

    Our hiring process:
    - HR Screening
    - General/Technical Interview – with a Data Engineer and the CTO
    - Final Interview – with the Head of Analytics and the CTO
    - Offer 🥳
  • 45 views · 5 applications · 5d

    Data Engineer with Knime experience

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    We are looking for a skilled Data Engineer with hands-on experience in KNIME to join our data team. You will be responsible for designing and maintaining scalable data pipelines and ETL workflows, integrating multiple data sources, and ensuring high data quality and availability across the organization.
     

    Key Responsibilities

    • Design, develop, and manage ETL workflows using KNIME Analytics Platform
    • Integrate and transform data from various sources (databases, APIs, flat files, etc.)
    • Optimize data pipelines for performance and reliability
    • Collaborate with data analysts, scientists, and business stakeholders
    • Maintain clear documentation of data processes and pipelines
    • Monitor and troubleshoot data quality issues

    Requirements

    • 2+ years of experience as a Data Engineer or similar role
    • Solid hands-on experience with KNIME (workflow creation, integration, automation)
    • Strong knowledge of SQL and relational databases (e.g., PostgreSQL, MySQL, MSSQL)
    • Familiarity with Python or R for data manipulation (a plus)
    • Understanding of ETL concepts and data warehousing principles
    • Experience working with APIs, JSON, XML, Excel, and CSV data
    • Good communication skills and ability to work in cross-functional teams

    Nice to Have

    • Experience with cloud platforms (AWS, Azure, GCP)
    • Knowledge of other ETL tools (e.g., Talend, Alteryx, Apache NiFi)
    • Basic knowledge of machine learning or business intelligence tools

    What We Offer

    • Competitive salary and performance bonuses
    • Flexible working hours and remote work options
    • Opportunities for professional growth and training
    • Collaborative and supportive team culture
  • 20 views · 1 application · 4d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    We are looking for an experienced Data Engineer to join us on one of our projects for the pharmaceutical sector. You will be involved in building complex big data/analytics platforms, data warehouses, and data pipelines for a wide range of departments (Finance, IoT, Operations, HR, Sales & Marketing, etc.) and for different use cases.

    Your tasks

    • Designing, implementing, and operating Data & Analytics solutions in a DevOps working environment
    • Creating ETL/ELT processes and scalable data pipelines for the data science team & BI Team
    • Managing and releasing system resources with versioning using Continuous Integration/Continuous Delivery
    • Implementing relational and multidimensional data models
    • Standardizing data discovery, acquisition, and harmonization processes, optimizing the data
    • Ensuring compliance with all required security considerations

    Requirements

    • At least 5 years of experience working with big data ecosystems as a Data Engineer
    • Strong experience with cloud-based services and with containers (Docker, Kubernetes, Functions)
    • Hands-on experience in creating and managing an enterprise-scale Data Platform, setting best practices for security, privacy, monitoring & alerting, and CI/CD
    • Excellent communication skills
    • Experience with BI & Master Data (Technology, Process & Governance) will be an advantage
    • Good knowledge of the English language
  • 33 views · 0 applications · 4d

    Data Engineer/DataOps Team Lead to $8000

    Poland · 5 years of experience · Upper-Intermediate

    About Us

    We are a leading Israeli IT company with 15 years of market experience and 8 years in Ukraine. Officially registered in Ukraine, Israel, and Estonia, we employ over 100 professionals worldwide. Specializing in successful startup collaboration, we offer services across e-commerce, Fintech, logistics, and healthcare.
    Our client is a leading mobile app company that depends on high-volume, real-time data pipelines to drive user acquisition and engagement. This role is instrumental in maintaining data reliability, supporting production workflows, and enabling operational agility across teams. This is a hands-on leadership role that requires deep technical expertise, an ownership mindset, and strong collaboration with engineering and business stakeholders.

    Key Requirements:

    🔹 5+ years of experience in data engineering, with strong hands-on expertise in building and maintaining data pipelines;
    🔹 At least 2 years in a team leadership or technical lead role;
    🔹 Proficient in Python, SQL, and data orchestration tools such as Airflow;
    🔹 Experience with both SQL and NoSQL databases, such as MySQL, Presto, Couchbase, MemSQL, or MongoDB;
    🔹 Bachelor’s degree in Computer Science, Engineering, or a related field;
    🔹 English – Upper-Intermediate or higher.

    Will be a plus:

    🔹 Background in NOC or DevOps environments is a plus;
    🔹 Familiarity with PySpark is an advantage.

    What you will do:

    🔹 Oversee daily data workflows, troubleshoot failures, and escalate critical issues to ensure smooth and reliable operations;
    🔹 Use Python, SQL, and Airflow to configure workflows, extract client-specific insights, and adjust live processes as needed;
    🔹 Build and maintain automated data validation and testing frameworks to ensure data reliability at scale;
    🔹 Own and evolve the metadata system, maintaining table lineage, field definitions, and data usage context to support a unified knowledge platform;
    🔹 Act as the primary point of contact for operational teams and stakeholders, ensuring consistent collaboration and high data quality across the organization.

    Interview stages:

    🔹 HR Interview;
    🔹 Pro-Interview;
    🔹 Technical Interview;
    🔹 Final Interview with HR;
    🔹 Reference Check;
    🔹 Offer.

    Why Join Us?

    🔹 Be part of a friendly international team, working together on interesting global projects;
    🔹 Enjoy many chances to grow, learn from mentors, and work on projects that make a real difference;
    🔹 Join a team that loves fresh ideas and supports creativity and new solutions;
    🔹 Work closely with clients, building great communication skills and learning directly from their needs;
    🔹 Thrive in a workplace that values your needs, offering flexibility and a good balance between work and life.

  • 58 views · 5 applications · 4d

    Data Engineer with AI-pipelines experience

    Full Remote · Worldwide · Product · 1 year of experience · Upper-Intermediate

    Requirements

    • Solid Python or JS (to build data pipelines in LangGraph, LangChain or with similar tools)
    • Hands-on work with LLMs or chatbot frameworks (LangGraph, ADK etc.)
    • Basic knowledge of content quality checks (fact-checking, toxicity filters)
    • Curious, fast-learning, well-organized, and reliable
    • English: Intermediate or higher

    Nice to have: experience with vector databases, prompt tuning, or content moderation.

     

    Responsibilities

    • Build and maintain AI agent pipelines that auto-create and update content
    • Design quick validation pipelines to catch errors, duplicates, or policy violations
    • Write and refine prompts to get the best answers from LLMs
    • Track quality metrics; debug and improve the agents over time
    • Suggest new ideas for smarter, safer content automation

     

    We Offer:

    • WFH and flexible working hours 
  • 23 views · 0 applications · 4d

    Big Data Engineer (Forward Deployed Engineer)

    Office Work · Ukraine (Kyiv) · 4 years of experience · Upper-Intermediate

    In our project, we help organizations turn big and complex multi-modal datasets into information-rich geo-spatial data subscriptions that can be used across a wide spectrum of use cases. We turn petabytes of raw data into clear, actionable insights by applying advanced analytics to multi-sourced data, enabling customers to gain comprehensive understanding of organizations, events, and behaviors across land, sea, air, cyber and space domains.

    We are seeking a Big Data Engineer (Forward Deployed Engineer) to work on-site in Kyiv, supporting critical geospatial analytics and natural language processing initiatives. This hybrid role combines hands-on big data engineering with product development work, focusing on analyzing massive datasets to surface actionable intelligence patterns and behavioral insights.

    As a forward-deployed engineer, you will be embedded directly with the operations team, working on cutting-edge projects that transform petabytes of structured and unstructured geospatial data into strategic intelligence. You'll play a crucial role in developing and optimizing the data fusion capabilities that reduce task force response times from days to hours, while collaborating closely with multidisciplinary teams supporting mission-critical applications.

     

    RESPONSIBILITIES
      1. Design and implement scalable big data processing pipelines for ingesting and analyzing petabytes of multi-modal geospatial datasets
      2. Develop and optimize data fusion algorithms that automatically identify relationships and surface hidden patterns in near real-time
      3. Build and maintain robust data engineering infrastructure supporting behavioral analytics and anomaly detection at unprecedented scale
      4. Collaborate with data scientists and analysts to translate complex analytical requirements into production-ready data processing solutions
      5. Implement and optimize natural language processing workflows for unstructured data analysis and entity relationship extraction
      6. Ensure data quality, governance, and security compliance across all data processing workflows
      7. Document data architectures, processing standards, and operational procedures to support knowledge transfer and audit readiness
      8. Participate in incident response and troubleshooting for data pipeline and processing issues
      9. Develop custom attribution and modeling technologies for real-time threat detection and opportunity identification specific to regional requirements
      10. Build integration layers connecting the data engine with tactical mission networks and front-end visualization tools used by local partners
      11. Implement behavioral analytics capabilities that enable rapid identification of meaningful activities, patterns, and anomalies in complex regional datasets
      12. Support product development initiatives by prototyping new analytical capabilities and validating them against real-world operational scenarios
      13. Present analytical findings, insights, and recommendations to both technical and executive audiences, translating complex data relationships into actionable intelligence
      14. Conduct on-site client workshops and training sessions on the analytical capabilities and data products
      15. Optimize data processing performance for tactical decision-making timelines, ensuring sub-hour response times for critical intelligence queries

     

    SKILLS

    1. Big Data Technologies: 4+ years hands-on experience with distributed computing frameworks (Apache Spark, Hadoop, Kafka, Flink)
    2. Programming Proficiency: Expert-level skills in Python, Scala, or Java for large-scale data processing; SQL expertise for complex analytical queries
    3. Geospatial Analytics: Experience with geospatial data formats (GeoTIFF, SHP, KML, NetCDF) and processing libraries (GDAL, PostGIS, GeoPandas, Shapely)
    4. Cloud Platforms: Proficiency with cloud-based big data services (Azure Data Factory, Databricks, HDInsight, or AWS equivalents)
    5. Data Engineering: Strong understanding of ETL/ELT pipelines, data warehousing concepts, and real-time streaming architectures
    6. Database Systems: Experience with both SQL (PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, Elasticsearch) database technologies
    7. DevOps & Infrastructure: Familiarity with containerization (Docker, Kubernetes), CI/CD pipelines, and infrastructure-as-code principles
    8. Performance Optimization: Proven track record of optimizing big data workloads for speed, cost, and reliability at petabyte scale
    9. Real-time Analytics: Required hands-on experience building low-latency data processing systems for near real-time behavioral analytics and alerting
    10. Geospatial Intelligence: Required understanding of GEOINT workflows, temporal analysis, and multi-source intelligence fusion methodologies
    11. Data Visualization Integration: Required experience integrating data engines with tactical visualization tools and mission planning software
    12. Multi-source Data Fusion: Required proven ability to correlate and analyze disparate data sources (satellite imagery, communications data, social media, sensor networks)
    13. Client Presentation Skills: Required experience presenting complex technical findings to diverse audiences including C-level executives, leadership, and technical teams
    14. Stakeholder Management: Required ability to manage multiple client relationships simultaneously while delivering high-quality technical solutions
    15. Security Awareness: Required understanding of data security best practices, particularly in sensitive operational environment 

     

    BENEFITS:

    • Fair Compensation: Enjoy a competitive salary and bonuses that recognize your hard work.
    • Work-Life Balance: Choose from flexible work arrangements, whether you prefer working from home, the office, or a mix of both.
    • Grow with Us: Take advantage of opportunities for professional growth through training, certifications, and attending exciting conferences.
    • Care for Your Well-being: Benefit from comprehensive health benefits, wellness programs, and other perks designed to support your overall well-being.
  • 53 views · 9 applications · 4d

    Data Engineer

    Full Remote · EU · 5 years of experience · Upper-Intermediate

    Hello, fellow data engineers! We are Stellartech - an educational technology product company, and we believe in inspiration but heavily rely on data. And we are looking for a true pipeline detective and zombie process hunter!

     

    Why? Because we trust our Data Platform for daily business decisions. From “What ad platform presents us faster? Which creative media presents our value to customers in the most touching way?” to “What would our customers like to learn the most about? What can make education more enjoyable?”, we rely on numbers, metrics and stuff. But as we are open and curious, there’s a lot to collect and measure! That’s why we need to extend, improve and speed up our data platform.

     

    That’s why we need you to:

    • Build and maintain scalable data pipelines using Python and Airflow to provide data ingestion, transformation, and delivery.
    • Develop and optimize ETL/ELT workflows to ensure data quality, reliability, and performance.
    • Bring your vision and opinion to define data requirements and shape solutions to business needs.
    • Smartly monitor, relentlessly troubleshoot, and bravely resolve issues in data workflows, striving for high availability and fault tolerance.
    • Propose, advocate, and implement best practices for data storage and querying using AWS services such as S3 and Athena (an illustrative sketch follows this list).
    • Document data workflows and processes, ensuring you don’t have to say it twice and have time for creative experiments. Sure, it’s about clarity and maintainability across the team as well.
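    A minimal sketch, assuming boto3 and hypothetical database, workgroup, and bucket names, of the S3/Athena querying pattern referenced in the last bullet:

```python
# Hypothetical sketch: run an Athena query over S3-backed data and poll for completion.
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-1")

run = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM analytics.events GROUP BY 1",
    QueryExecutionContext={"Database": "analytics"},                        # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},  # hypothetical bucket
)

# Poll until the query reaches a terminal state, then report where results landed.
while True:
    status = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(state, status["QueryExecution"]["ResultConfiguration"]["OutputLocation"])
```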

     

    For that, we suppose you’d be keen on:

    • AWS services such as S3, Kinesis, Athena, and others.
    • dbt and Airflow for data pipeline and workflow management.
    • Application of data architecture, ETL/ELT processes, and data modeling.
    • Advanced SQL and Python programming.
    • Monitoring tools and practices to ensure data pipeline reliability.
    • CI/CD pipelines and DevOps practices for data platforms.
    • Monitoring and optimizing platform performance at scale.

     

    It will be nice to:

    • Understand cloud services (we use AWS), their advances, trade-offs, and perspectives.
    • Keep an analytical approach in mind and consider future perspectives in system design, in daily practice and technical decisions.

     

    Why You'll Love Working With Us:

    • Impactful Work: Your contributions will directly shape the future of our company.
    • Innovative Environment: We're all about trying new things and pushing the envelope in EdTech.
    • Freedom: a flexible role, either fully remote or hybrid from one of our offices in Cyprus or Poland.
    • Health: we offer a health insurance package for hybrid mode (Cyprus, Poland) and a health corner in the Cyprus office.
    • AI solutions: a ChatGPT subscription, GPT chatbot, and other tools.
    • Wealth: we offer a competitive salary.
    • Balance: flexible paid time off; you get 21 days of annual leave + 10 bank holidays.
    • Collaborative Culture: Work alongside passionate professionals who are as driven as you are.

     

  • 30 views · 1 application · 4d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · Upper-Intermediate

    We are seeking a proactive Senior Data Engineer to join our vibrant team.

    As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies and data architecture, and a passion for solving complex data challenges.
     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.


    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript


    Required:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.


    Nice to have:

    • Certification in Cloud platforms, or related areas;
    • Experience with search engine Apache Lucene, Webservice Rest API;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.
       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers
