Jobs

  • · 74 views · 10 applications · 9d

    Senior Data Engineer

    Countries of Europe or Ukraine · Product · 4 years of experience · A2 - Elementary

    Our Mission and Vision
    At Solidgate, our mission is clear: to empower outstanding entrepreneurs to build exceptional internet companies. We exist to fuel the builders — the ones shaping the digital economy — with the financial infrastructure they deserve. We’re on an ambitious journey to become the #1 payments orchestration platform in the world.

    Solidgate is part of Endeavor — a global community of the world’s most impactful entrepreneurs. We’re proud to be the first payment orchestrator from Europe to join — and to share our expertise within a network of outstanding global companies.

    As our processing volume is skyrocketing, the number of engineering teams is growing too — we’re already at 14. This gives our Data Engineering function a whole new scale of challenges: not just building data-driven solutions, but creating products and infrastructure that empower other teams to build them autonomously.

    That’s why we’re launching the Data Platform direction and looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.

    You can check out the product’s overall tech stack here: https://solidgate-tech.github.io/

     

    What you’ll own:


    — Build the Data Platform from scratch (architecture, design, implementation, scaling)
    — Implement a Data Lake approach and layered architecture (bronze → silver data layers); see the sketch after this list
    — Integrate streaming processing into data engineering practices
    — Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability
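
    To make the bronze → silver idea concrete, here is a minimal sketch of a promotion step, assuming raw JSON-lines events and pandas with a Parquet engine installed; the paths and field names (event_id, occurred_at) are illustrative, not Solidgate's actual schema:

    ```python
    # Bronze -> silver promotion: raw, append-only events become a cleaned,
    # deduplicated, typed layer that downstream teams can query directly.
    # Paths and field names are illustrative assumptions.
    import pandas as pd

    def promote_bronze_to_silver(bronze_path: str, silver_path: str) -> None:
        raw = pd.read_json(bronze_path, lines=True)         # bronze: raw JSON-lines events
        silver = (
            raw.dropna(subset=["event_id", "occurred_at"])  # drop malformed rows
               .drop_duplicates(subset=["event_id"])        # at-least-once delivery -> dedup
               .assign(occurred_at=lambda d: pd.to_datetime(d["occurred_at"], utc=True))
        )
        silver.to_parquet(silver_path, index=False)         # silver: typed, query-ready

    promote_bronze_to_silver("events.jsonl", "events_silver.parquet")
    ```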

     

    What you need to join us:


    — 3+ years of commercial experience as a Data Engineer
    — Strong hands-on experience building data solutions in Python
    — Confident SQL skills
    — Experience with Airflow or similar tools
    — Experience building and running DWH (BigQuery / Snowflake / Redshift)
    — Expertise in streaming stacks (Kafka / AWS Kinesis)
    — Experience with AWS infrastructure: S3, Glue, Athena
    — High attention to detail
    — Proactive, self-driven mindset
    — Continuous-learning mentality
    — Strong delivery focus and ownership in a changing environment

     

    Nice to have:


    — Background as an analyst or Python developer
    — Experience with DBT, Grafana, Docker, LakeHouse approaches
     

    Why Join Solidgate?
    High-impact role. You’re not inheriting a perfect system — you’re building one.
    Great product. We’ve built a fintech powerhouse that scales fast. Solidgate isn’t just an orchestration player — it’s the financial infrastructure for modern Internet businesses. From subscriptions to chargeback management, fraud prevention, and indirect tax — we’ve got it covered.
    Massive growth opportunity. Solidgate is scaling rapidly — this role will be a career-defining move.
    Top-tier tech team. Work alongside our driving force — a proven, results-driven engineering team that delivers. We’re also early adopters of cutting-edge fraud and chargeback prevention technologies from the Schemes.
    Modern engineering culture. TBDs, code reviews, solid testing practices, metrics, alerts, and fully automated CI/CD.

    Competitive corporate benefits:

    • more than 30 days off during the year (20 working days of vacation + days off for national holidays)
    • health insurance and corporate doctor
    • free snacks, breakfasts, and lunches in the office
    • full coverage of professional training (courses, conferences, certifications)
    • yearly performance review 
    • sports compensation
    • competitive salary
    • Apple equipment

       

    📩 Ready to become a part of the team? Then cast aside all doubts and click "apply".

  • · 47 views · 6 applications · 9d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate

    We are seeking an experienced Data Engineer with a strong background in healthcare data integration, cloud platforms, and Salesforce data ecosystems. The ideal candidate will have proven expertise in building and optimizing scalable, secure, and high-performance data pipelines that power analytics, reporting, and patient engagement workflows.


    Minimum Experience Requirements:


    Advanced Data Integration (5+ years):

    • Strong experience integrating data from healthcare systems, Salesforce, and cloud-based sources.
    • Expertise in managing complex data pipelines for large-scale data ingestion and transformation.
    • Hands-on experience integrating Salesforce data using MuleSoft, Salesforce APIs, and Data Loader.
    • Deep understanding of how Salesforce data supports patient engagement, clinical workflows, and reporting.


    Cloud Platform Expertise (5+ years):

    • Proven experience with AWS services (S3, Redshift, Glue, Lambda, Athena, EC2) for data storage, processing, and orchestration.
    • Experience scaling cloud infrastructure to manage large and sensitive healthcare datasets.


    Healthcare Data Experience (3+ years):

    • Strong background working with healthcare data (clinical data, patient records, lab results).
    • Familiarity with healthcare data integration and regulatory standards such as HIPAA.
    • Bachelor’s or Master’s Degree in Computer Science, Data Engineering, Health Informatics, or a related field.
    • Equivalent practical experience may also be considered.


    Required Skills:

    • Expertise with AWS data services (S3, Redshift, Glue, Lambda, Athena, EMR).
    • Ability to architect and optimize data pipelines for performance, scalability, and reliability.
    • Proficiency in integrating Salesforce data via APIs (see the sketch after this list).
    • Strong ETL/ELT experience with AWS Glue, Apache Airflow, or custom Python scripts.
    • Knowledge of healthcare data security standards (HIPAA, HITECH), encryption techniques, and secure data transfer.
    • Hands-on experience with AWS Redshift or Snowflake for building high-performance data warehouses, enabling efficient querying and BI accessibility.
    • Proven collaboration with BI and analytics teams to translate business requirements into technical solutions and ensure data supports actionable insights.
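
    As one hedged illustration of the Salesforce-API integration point flagged above, ingestion often reduces to paging through Salesforce's REST query endpoint; the instance URL, API version, token handling, and SOQL below are placeholders:

    ```python
    # Hypothetical pull of Salesforce records via the REST query endpoint.
    # Token acquisition (OAuth) is omitted; values here are illustrative.
    import requests

    def fetch_salesforce_records(instance_url: str, token: str, soql: str) -> list[dict]:
        records, url = [], f"{instance_url}/services/data/v58.0/query"
        params = {"q": soql}
        headers = {"Authorization": f"Bearer {token}"}
        while url:
            resp = requests.get(url, headers=headers, params=params, timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            records.extend(payload["records"])
            # Salesforce paginates large result sets via nextRecordsUrl
            url = instance_url + payload["nextRecordsUrl"] if not payload["done"] else None
            params = None  # the next URL already encodes the query locator
        return records
    ```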
  • · 43 views · 9 applications · 9d

    Senior Data Streaming Engineer – Latvia or Lithuania (relocation from other EU countries possible)

    Full Remote · Worldwide · 2 years of experience · B2 - Upper Intermediate

    Senior Data Engineer (Scala / AWS / Streaming)
    Engagement: Full-time / Long-term contract
    Domain: Big Data, Real-Time Streaming, Cloud Infrastructure

    About the Role

    We are seeking a highly skilled Data Engineer with strong expertise in Scala, AWS, Python, and DevOps practices. You will be responsible for designing, building, and maintaining highly available and scalable streaming data solutions in a modern cloud environment.

    This role requires a DevOps mindset, an ownership mentality (“You Build It, You Run It”), and a strong collaborative approach to working with analysts, data scientists, and engineers across the organization.

    Responsibilities

    • Deliver reliable and scalable streaming solutions for millions of real-time interactions.
    • Design, implement, and maintain cloud-based data pipelines (batch and streaming) and the supporting infrastructure.
    • Work closely with analysts, data scientists, and developers across all departments to ensure data solutions meet business needs.
    • Build and maintain a robust real-time customer profile system, enabling personalized recommendations on digital platforms (see the sketch after this list).
    • Co-develop and refine streaming architectures from design through deployment and operation.
    • Apply Infrastructure as Code (IaC) and CI/CD best practices for efficient and automated delivery.
    • Actively contribute to the data engineering guild and communities of practice.
    • Support efforts to harmonize the organization’s data landscape across countries, departments, and acquisitions.
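
    As a rough sketch of the real-time customer profile idea referenced in the list above (in Python, which the role uses alongside Scala), a consumer built on the kafka-python client might fold interaction events into per-customer state; the topic, brokers, and event fields are assumptions:

    ```python
    # Streaming consumer that folds interaction events into per-customer
    # profiles. Topic, brokers, and event schema are illustrative.
    import json
    from collections import defaultdict
    from kafka import KafkaConsumer  # kafka-python client

    profiles: dict[str, dict] = defaultdict(lambda: {"events": 0, "last_page": None})

    consumer = KafkaConsumer(
        "customer-interactions",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        profile = profiles[event["customer_id"]]
        profile["events"] += 1
        profile["last_page"] = event.get("page")
        # In production this state would live in a low-latency store
        # (e.g. Redis or DynamoDB), not in process memory.
    ```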

    Requirements

    • Strong programming experience with Scala (primary) and proficiency in Python or shell scripting.
    • Hands-on experience with AWS services (CodePipeline, DevOps toolchain, streaming/analytics services).
    • Track record of implementing highly available and scalable big data solutions.
    • Proficiency in streaming data workflows and real-time processing architectures.
    • Strong knowledge of Infrastructure as Code and CI/CD pipelines.
    • Solid understanding of modern software engineering best practices and Domain-Driven Design.
    • DevOps mindset, with a passion for automation, monitoring, and ownership.
    • Collaborative attitude β€” experience with pair programming and cross-functional teamwork.
    • AWS Certification (minimum: AWS Certified Associate) or willingness to achieve within 6 months.
  • · 26 views · 2 applications · 8d

    Data Engineer (NLP-Focused)

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016, when we united top AI talent and organized the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.

    You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.

    Requirements:
    - Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    - NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus. Understanding of FineWeb2 or a similar processing pipeline approach is also valuable.
    - Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    - Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    - Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
    - Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    - Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    - Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

    Responsibilities:
    - Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
    - Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    - Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
    - Implement NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering toxic content, de-duplication, de-noising, and detecting and deleting personal data (see the sketch after this list).
    - Form specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    - Automate data processing workflows and ensure their scalability and reliability.
    - Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    - Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
    - Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    - Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
    - Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    - Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    - Manage data security, access, and compliance.
    - Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
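
    A minimal sketch of the cleaning/normalization/de-duplication step referenced above, using only the standard library; the regexes and rules are illustrative, and a real pipeline would add language detection (e.g., langdetect) and richer PII handling:

    ```python
    # Toy corpus-cleaning step: Unicode normalization, PII redaction,
    # and exact-duplicate filtering. Rules here are illustrative only.
    import hashlib
    import re
    import unicodedata

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

    def clean_document(text: str, seen_hashes: set[str]) -> str | None:
        text = unicodedata.normalize("NFC", text).strip()   # canonical Unicode form
        text = EMAIL.sub("<EMAIL>", text)                   # redact personal data
        text = PHONE.sub("<PHONE>", text)
        digest = hashlib.md5(text.lower().encode()).hexdigest()
        if digest in seen_hashes:                           # exact-duplicate filter
            return None
        seen_hashes.add(digest)
        return text

    seen: set[str] = set()
    docs = ["Привіт, пишіть на test@example.com", "Привіт, пишіть на test@example.com"]
    cleaned = [c for d in docs if (c := clean_document(d, seen)) is not None]
    print(cleaned)  # the second, duplicate document is dropped
    ```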

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • · 44 views · 1 application · 8d

    Junior/Middle Data Engineer (IRC274101)

    Hybrid Remote · Ukraine (Vinnytsia, Zhytomyr, Ivano-Frankivsk + 7 more cities) · 2 years of experience · B2 - Upper Intermediate

    Job Description

    • Strong experience in data pipeline development and ETL/ELT processes.
    • Proficiency with Apache Airflow for workflow orchestration.
    • Hands-on experience with object storage solutions, preferably MinIO.
    • Expertise in SQL and database management, specifically PostgreSQL.
    • Experience with graph databases like Neo4j.
    • Familiarity with vector databases such as Qdrant.
    • Ability to work with large, diverse datasets and ensure data integrity.
    • Solid expertise in SQL and relational DBs
    • Experience in database design and optimization
    • Experience with NoSQL DBs (MongoDB, Cosmos, etc.) for handling unstructured and semi-structured data
    • Contributing to release management following CI/CD best practices

       

    Job Responsibilities

    • Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading diverse datasets.
    • Implement ETL/ELT processes to cleanse, validate, and enrich raw data into query-optimized formats.
    • Orchestrate data workflows using Apache Airflow, including scheduling jobs and managing dependencies (see the sketch after this list).
    • Manage and optimize data storage solutions in MinIO (object storage), PostgreSQL (relational data).
    • Ensure data integrity, quality, and compliance throughout the data lifecycle.
    • Collaborate with cross-functional teams to understand data requirements and deliver data solutions that enable advanced analytics and AI/ML initiatives.
    • Troubleshoot and resolve data-related issues, ensuring high availability and performance of data systems.
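
    A minimal sketch of the Airflow orchestration referenced above (Airflow 2.x style), with MinIO addressed through boto3's S3 API; connection details and task bodies are placeholders, not the project's real configuration:

    ```python
    # Two-step DAG: land a raw batch in MinIO, then load into PostgreSQL.
    # Endpoints, bucket, and schedule are illustrative placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_to_minio():
        import boto3
        # MinIO speaks the S3 API, so boto3 just needs a custom endpoint
        s3 = boto3.client("s3", endpoint_url="http://minio:9000")
        s3.put_object(Bucket="raw", Key="batch.json", Body=b"[]")

    def load_to_postgres():
        ...  # parse the raw object and upsert into PostgreSQL

    with DAG(
        dag_id="ingest_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # Airflow 2.4+ name for schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_to_minio)
        load = PythonOperator(task_id="load", python_callable=load_to_postgres)
        extract >> load  # load runs only after extraction succeeds
    ```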

     

    Department/Project Description

    Our client is focused on developing a robust and versatile data ingestion pipeline and associated schema designed to efficiently and accurately collect, process, analyze, and manage diverse data types from various sources in real time or near real time. This pipeline will automate and enhance data workflows, ensure data quality, and support advanced analytical capabilities including NLP, Face Recognition, and OCR.

    As a Middle Data Engineer on the project, you will play a crucial role in managing deployment, infrastructure, automation, and monitoring. You will be instrumental in setting up and maintaining CI/CD pipelines, managing cloud resources, ensuring system stability and performance, and implementing robust logging and alerting mechanisms for the client platform. If you seek a challenge and want to impact the way the world distributes products from manufacturers to store shelves, we invite you to join our team.

  • · 18 views · 0 applications · 8d

    Middle/Senior Data Engineer (IRC274051)

    Hybrid Remote · Ukraine (Vinnytsia, Ivano-Frankivsk, Kyiv + 7 more cities) · 3 years of experience · B2 - Upper Intermediate

    Job Description

    - 3+ years of intermediate to advanced SQL

    - 3+ years of Python development (intermediate level is fine: Pandas, Numpy, boto3, seaborn, requests, unittest)

    - Experience building ETLs, preferably in Python

    - Experience with data tools (ex.: Airflow, Grafana, AWS Glue, AWS Athena)

    - Excellent understanding of database design

    - Cloud experience (AWS S3, Lambda, or alternatives)

    - Agile SDLC knowledge
    - Detail-oriented
    - Data-focused
    - Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
    - An ability and interest in working in a fast-paced and rapidly changing environment
    - Self-driven, with the ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:
    - Understanding of basic SVOD store purchase workflows
    - Background in supporting data scientists in conducting data analysis / modelling to support business decision making

    - Experience in supervising subordinate staff

     

    Job Responsibilities

    - Data analysis, auditing, statistical analysis
    - ETL buildouts for data reconciliation
    - Creation of automatically running audit tools
    - Interactive log auditing to look for potential data problems
    - Help in troubleshooting customer support team cases
    - Troubleshooting and analyzing subscriber reporting issues (see the reconciliation sketch below):
          • Answer management questions related to subscriber count trends
          • App purchase workflow issues
          • Audit/reconcile store subscriptions vs userdb
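
    The store-vs-userdb reconciliation mentioned above can be sketched as an outer join plus a mismatch filter; the column names and sample rows here are invented for illustration:

    ```python
    # Reconciliation audit: find subscriptions that exist on only one side
    # or whose statuses diverge. Columns and rows are illustrative.
    import pandas as pd

    store = pd.DataFrame(
        {"subscription_id": ["s1", "s2", "s3"], "store_status": ["active", "active", "expired"]}
    )
    userdb = pd.DataFrame(
        {"subscription_id": ["s1", "s3", "s4"], "db_status": ["active", "active", "active"]}
    )

    audit = store.merge(userdb, on="subscription_id", how="outer", indicator=True)
    mismatches = audit[
        (audit["_merge"] != "both") | (audit["store_status"] != audit["db_status"])
    ]
    print(mismatches)  # rows present on one side only, or with diverging status
    ```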

    Department/Project Description

    Customer is one of the biggest companies on the market of home entertainment consumer electronics devices that strives to provide their clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer’s SVOD portfolio.

  • · 49 views · 13 applications · 8d

    Middle+/Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate

    This is for the E2E Project, which collects player-level data for our brands, helping us track revenue across various pages/brands and better understand how our users use our products and what's trending. Our vision is for this to become the central source of truth for user journey insights, empowering our company to make smarter, faster, and more impactful decisions that drive commercial growth and product innovation.

     

    Role requirements:

    • 3+ years of experience as a Data Engineer or Software Engineer working on data infrastructure.
    • Strong Python skills and hands-on experience with SQL and Snowflake.
    • Experience with modern orchestration tools like Airflow and data streaming platforms like Kafka.
    • Understanding of data modeling, governance, and performance tuning in warehouse environments.
    • Ability to work independently and prioritize across multiple stakeholders and systems.
    • Comfort operating in a cloud-native environment (e.g., AWS, Terraform, Docker).
    • Python side:
      • Must have: experience pulling and managing data from APIs (see the sketch after this list)
      • Nice to have: web scraping via browser automation (e.g., Playwright, Selenium, or Puppeteer)
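
    A hedged sketch of the must-have API-pulling skill from the list above: a partner-API harvester with simple retries and cursor pagination, where the endpoint, header, and response shape are assumptions:

    ```python
    # Cursor-paginated API harvester with a simple retry/backoff loop.
    # The /players endpoint, X-Api-Key header, and payload keys are invented.
    import time
    import requests

    def harvest(base_url: str, api_key: str) -> list[dict]:
        rows, cursor = [], None
        while True:
            for attempt in range(3):  # retry transient 5xx errors
                resp = requests.get(
                    f"{base_url}/players",
                    headers={"X-Api-Key": api_key},
                    params={"cursor": cursor} if cursor else {},
                    timeout=30,
                )
                if resp.status_code < 500:
                    break
                time.sleep(2 * (attempt + 1))  # linear backoff
            resp.raise_for_status()
            payload = resp.json()
            rows.extend(payload["data"])
            cursor = payload.get("next_cursor")
            if not cursor:  # no further pages
                return rows
    ```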

     

    Role Responsibilities:

    • Build connectors to external partners to harvest the data
    • Build custom functions to process the data
    • Integrate data into Snowflake and other reporting tools
    • Work cross teams and cross functions to provide good quality and speed of data
    • Champion and challenge existing solutions to improve and optimize them even further

       

    Key Skills/What they'll be working on:

    • Design, build, and maintain ETL/ELT pipelines and batch/streaming workflows.
    • Integrate data from external APIs and internal systems into Snowflake and downstream tools.
    • Own critical parts of our Airflow-based orchestration layer and Kafka-based event streams.
    • Ensure data quality, reliability, and observability across our pipelines and platforms.
    • Build shared data tools and frameworks to support analytics and reporting use cases.
    • Partner closely with analysts, product managers, and other engineers to support data-driven decisions.

       

     

  • · 67 views · 5 applications · 8d

    Data Engineer with DBT

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    Who we’re looking for 👩‍💻👨‍💻


    Role: Data Engineer
    We are looking for a skilled Data Engineer with strong expertise in AWS, dbt, Python, and SQL. You will collaborate with cross-functional teams to design and optimize data pipelines, ensure data quality, and enable data-driven decision-making.

    The Project 

    We're working on a large enterprise product - a US market leader in ticket sales for global events: from the Champions League and Super Bowl to the Olympic Games.

    This is a stable and long-term partnership (over 4 years together already), and the project continues to grow.
    You will join the project under the leadership of Project Manager [insert].

    Tech requirements:

    • Strong knowledge of Python & SQL (experience as Python Developer is a plus)
    • Experience with cloud data warehouses: Redshift, BigQuery, Snowflake
    • Data modeling with dbt (must-have)
    • BI tools: Tableau, PowerBI, Looker
    • Workflow management: Airflow, Prefect
    • Understanding of data quality principles & validation
    • English – Upper-Intermediate+, Ukrainian – fluent

    What you will do:

    • Gather and analyze requirements
    • Build data models using dbt/SQL (see the sketch below)
    • Develop and maintain pipelines in collaboration with BI engineers and analysts
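
    One way the dbt modeling step above can be wired into a Python pipeline is dbt's programmatic entry point (available in dbt-core 1.5+); the model selector here is a placeholder:

    ```python
    # Run the selected dbt models, then run their tests, from Python.
    # Requires dbt-core >= 1.5; "staging+" is a placeholder selector.
    from dbt.cli.main import dbtRunner

    runner = dbtRunner()

    for args in (["run", "--select", "staging+"], ["test", "--select", "staging+"]):
        result = runner.invoke(args)
        if not result.success:
            raise RuntimeError(f"dbt step failed: {args}")
    ```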

    Hiring process:
    • HR interview (up to 30 min)
    • Technical interview (up to 1h)
    • Client interview (up to 1h 15 min)

     

    Why join Empeek?

    ✨ Healthcare focus – one of the fastest-growing industries with endless opportunities.

     βœ¨ Challenging & meaningful products – complex architectures, modern technologies, and solutions that truly make an impact.

     βœ¨ Professional growth – every team member has a personal development plan, mentorship, career maps, and opportunities to grow into new roles and responsibilities.
    ✨ Strong team culture – we share the same mission, values, and passion for what we do.
    ✨ Flexibility & ownership – freedom to choose your format and schedule, focus on results, and have a real impact on the company’s success.


    What we offer 💎

    • Access to learning opportunities – internal and external training, certification reimbursement.
    • Up to $300/year for English classes + free speaking club.
    • Up to $180/year for sports activities.
    • Mentorship and knowledge sharing – people you can really learn from.
    • Career maps and growth plans to support your professional development.
    • New equipment provided, plus accounting support if needed.
    • Internal mobility – the opportunity to switch roles, projects, and take on more responsibility.
    • Competitive market-level salary with regular reviews.
    • Additional perks and compensations.
    • Psychological safety and supportive culture.
    • Company values that align with yours.
    • Community and team-building activities for informal networking.
    • Social responsibility – support for Ukraine, the Armed Forces, and CSR initiatives.

     

  • · 41 views · 2 applications · 7d

    Data Engineer NPS

    Full Remote · EU · 4 years of experience · B2 - Upper Intermediate

    Skills Required: AWS, Kafka, Spark, Python (FastAPI), SQL, Terraform.

     

    You have:

    ● high standards for the quality of the work you deliver

    ● a degree in computer science, software engineering, a related field, or relevant prior experience

    ● 3+ years of experience across the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations

    ● affinity with data analysis

    ● a natural interest in digital media products

    ● AWS certified on an Associate level or higher, or willing to get certified

     

    You have experience in:

     

    ● developing applications in a Kubernetes environment

    ● developing batch jobs in Apache Spark (pyspark or Scala); see the sketch after this list

    ● developing streaming applications for Apache Kafka in Python

    ● working with CI/CD pipelines

    ● writing Infrastructure as Code with Terraform
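
    A minimal pyspark batch job of the kind listed above; the input path, schema, and aggregation are purely illustrative:

    ```python
    # Daily engagement aggregate: count distinct viewers per content item.
    # The S3 layout and event fields are assumptions for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-engagement").getOrCreate()

    events = spark.read.json("s3://media-events/date=2024-01-01/")  # assumed layout
    daily = (
        events.filter(F.col("event_type") == "play")
              .groupBy("content_id")
              .agg(F.countDistinct("user_id").alias("unique_viewers"))
    )
    daily.write.mode("overwrite").parquet("s3://media-marts/daily_engagement/")
    spark.stop()
    ```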

     

    Responsibilities

    ● Maintain and extend our back-end.

    ● Support operational excellence through practices like code review and pair programming.

    ● The entire team is responsible for the operations of our services. This includes actively monitoring different applications and their infrastructure as well as intervening to solve operational problems whenever they arise.

     

    You can:

    ● analyze and troubleshoot technical issues

    ● communicate about technical and functional requirements with people outside of the team

    ● keep a positive and constructive mindset and give feedback accordingly

     

    Location: remote from Latvia/Lithuania, possible relocation

  • · 49 views · 10 applications · 6d

    Data Engineer (with Azure)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Would you like to deepen your cloud expertise? We are looking for a Data Engineer for an international cloud technology company. It is a leading Microsoft Azure partner providing cloud services across Europe and East Asia. Work with diverse client domains + a highly professional team = growth!

     

    Main Responsibilities:

    The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.

     

    You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, in implementation projects for corporate clients across the EU, CIS, the United Kingdom, and the Middle East.

    Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.

     

    Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and optimizing models; a sketch of a typical landing-zone step follows.
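
    A hypothetical landing-zone upload using the azure-storage-blob SDK; the connection string, container, and blob path are placeholders:

    ```python
    # Land a raw extract in Azure Blob Storage before cleansing/staging.
    # Connection string, container, and path are illustrative only.
    from azure.storage.blob import BlobServiceClient

    def land_extract(conn_str: str, payload: bytes) -> None:
        service = BlobServiceClient.from_connection_string(conn_str)
        blob = service.get_blob_client(container="landing", blob="crm/2024/extract.json")
        blob.upload_blob(payload, overwrite=True)  # raw data lands before cleansing
    ```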

     

    Mandatory Requirements:

    – 2+ years of experience, ideally within a Data Engineer role.

    – understanding of data modeling, data warehousing concepts, and ETL processes

    – experience with Azure Cloud technologies

    – experience in distributed computing principles and familiarity with key architectures, broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)

    – understanding of landing and staging areas, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Data Mart)

    – SQL skills

    – communication and interpersonal skills

    – English β€”Π’2

    – Ukrainian language

     

    It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, or advanced analytics projects, and/or holds a professional certification in data & analytics.

     

    We offer:

    – professional growth and international certification

    – free technical and business training and top bootcamps (worldwide, including courses at Microsoft HQ in Redmond)

    – innovative data & analytics projects and practical experience with cutting-edge Azure data & analytics technologies on various customer projects

    – great compensation and individual bonus remuneration

    – medical insurance

    – long-term employment

    – individual development plan

  • · 55 views · 7 applications · 6d

    Middle/Senior Data Engineer

    Full Remote · EU · 2 years of experience · B2 - Upper Intermediate

    In partnership with one of the global consulting enterprises, on a large-scale digital program for a UK telecommunications industry client, we are seeking a Middle/Senior Data Engineer.
    Responsibilities:
    - Work closely with product and analytics teams to identify and solve problems through experimentation

    - Calculate and communicate baseline conversion rates, sample sizes, MDE, confidence levels, and statistical power to product teams (see the sketch after this list)

    - Monitor and validate data quality in running experiments

    - Analyse performance of experiments, visualise findings in a clear, digestible form and communicate them to product teams and wider stakeholder groups

    - Conduct deep dive analyses of results to further understand performance and suggest future iterations

    - Work with the platforms team to improve the capability of the experimentation platform
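
    The sample-size arithmetic referenced in the list above can be sketched with statsmodels; the baseline rate and MDE are example numbers only:

    ```python
    # Required sample size per variant for a two-proportion A/B test.
    # Baseline and MDE values below are illustrative examples.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline = 0.04                 # current conversion rate
    mde = 0.004                     # absolute lift to detect (10% relative)
    effect = proportion_effectsize(baseline + mde, baseline)

    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
    )
    print(f"~{n_per_variant:,.0f} users per variant")
    ```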
     

    Skills & Tools for Mid level:

    • Proficient in SQL and Excel.
    • Familiarity with Optimizely, Piano Analytics, and Tag Inspector.
    • Exposure to Python or R for basic statistical analysis.
    • Basic understanding of experimental design and statistical concepts.
    • Typical experience: 1–2 years in a data or analytics role.
    • Works under general direction, escalating complex issues.

     

    Skills & Tools for Advanced:

    • SQL and experience with large datasets.
    • Proficient in Python or R for statistical modeling.
    • Strong understanding of frequentist statistics and experimental design.
    • Skilled in using Optimizely, Tableau, and data dictionaries like Alation.
    • Typical experience: 3–5 years in experimentation or analytics roles.
    • Works with minimal supervision and influences product decisions.

    Location: remote from Latvia/Lithuania or relocation there

  • · 34 views · 1 application · 6d

    Senior Professional Data Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Who We Are:

     

    We are a vibrant team of passionate software developers and architects who enjoy thinking outside the box and continually growing.

    As a Senior Professional Data Engineer in our tribe-oriented organization, you will be constantly challenged and supported. You’ll have the freedom to contribute your own ideas and advance your expertise at an extraordinary pace. Our collaborative culture spans stakeholders such as project leads, product owners, business analysts, security engineers, operations teams, and test teams.

     

    Responsibilities:

     

    • Design, implement, and maintain data pipelines
    • Integrate and consolidate data from various systems and platforms
    • Ensure data quality and proactively identify and resolve inconsistencies
    • Monitor and optimize databases and storage solutions
    • Implement security measures and ensure compliance with data privacy regulations
    • Work closely with data scientists, analysts, and other stakeholders
    • Stay up‑to‑date with emerging trends in data processing technologies

       

    Requirements:

     

    • Fluent German (C1) is a must, along with strong proficiency in English
    • Degree in Computer Science, Data Science, or a related field; or equivalent experience
    • Experience in data engineering or a similar role
    • Proficiency in relevant programming languages (e.g., Python, Java)
    • Experience with both relational and non-relational databases (SQL & NoSQL)
    • Familiarity with ETL tools and frameworks such as Apache Spark, Apache NiFi, or similar
    • Basic knowledge of Big Data technologies including Hadoop, MongoDB, graph databases, and vector databases
    • Strong analytical thinking, communication skills, and ability to work independently
  • · 96 views · 13 applications · 6d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 1.5 years of experience · B1 - Intermediate

    🌟 Ready to design scalable data solutions and influence product growth?
    Softsich is a young and ambitious international product tech company that develops scalable B2B digital platforms. We’re looking for a Data Engineer eager to grow with us and bring modern data engineering practices into high-load solutions.

     

    Your key responsibilities will include:

    • Extending the existing data warehouse (AWS: Redshift, S3, EMR) with dbt.
    • Developing and maintaining data pipelines (Kafka, MongoDB, PostgreSQL, messaging systems) using AWS Glue.
    • Building and optimizing data models for analytics and reporting (dbt, SQL).
    • Creating data verification scripts in Python (pandas, numpy, marimo / Jupyter Notebook); see the sketch after this list.
    • Maintaining infrastructure for efficient and secure data access.
    • Collaborating with product owners and analysts to provide insights.
    • Ensuring data quality, integrity, and security across the lifecycle.
    • Keeping up with emerging data engineering technologies and trends.
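
    A small example of the data verification scripts mentioned above: lightweight pandas checks over an extract, with invented columns and thresholds:

    ```python
    # Lightweight data-quality checks: duplicates, invalid values, and
    # null rates. Column names and thresholds are illustrative.
    import pandas as pd

    def verify(df: pd.DataFrame) -> list[str]:
        problems = []
        if df["transaction_id"].duplicated().any():
            problems.append("duplicate transaction_id values")
        if df["amount"].lt(0).any():
            problems.append("negative amounts")
        null_share = df["user_id"].isna().mean()
        if null_share > 0.01:                     # tolerate at most 1% missing
            problems.append(f"user_id null share {null_share:.1%} exceeds 1%")
        return problems

    df = pd.DataFrame({"transaction_id": [1, 2, 2], "amount": [10, -5, 7], "user_id": [1, None, 3]})
    print(verify(df))
    ```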

     

    It’s a match if you have:

    • 1+ year of experience as a Data Engineer.
    • Strong understanding of data warehousing concepts and practices.
    • Hands-on experience with AWS (EC2, S3, IAM, VPC, CloudWatch).
    • Experience with dbt.
    • Proficiency in SQL, PostgreSQL, and MongoDB.
    • Experience with AWS Glue.
    • Knowledge of Kafka, SQS, SNS.
    • Strong Python skills for automation and data processing.
    • Ukrainian β€” C1 level or native.
    • English β€” Intermediate (written and spoken).
    • You are proactive, communicative, and ready to ask questions and offer solutions instead of waiting for answers.

    Nice to have:

    • Knowledge of other cloud platforms (Azure, GCP).
    • Experience with Kubernetes, Docker.
    • Java/Scala as additional tools.
    • Exposure to ML/AI technologies.
    • Experience with data security tools and practices.

     

    What we offer:

    – Flexible schedule and remote format or offices in Warsaw/Kyiv β€” you choose.
    – 24 paid vacation days, sick leaves, and health insurance (UA-based, other locations in progress).
    – A supportive, friendly team where knowledge-sharing is part of the culture.
    – Coverage for professional events and learning.
    – Birthday greetings, team buildings, and warm human connection beyond work.
    – Zero joules of energy to the aggressor state, its affiliated businesses, or partners.

     

    πŸš€ If you’re ready to build scalable and impactful data solutions β€” send us your CV now, we’d love to get to know you better!

  • · 36 views · 4 applications · 5d

    Energy System Analyst / Software Developer

    Full Remote · Worldwide · Product · 2 years of experience · B2 - Upper Intermediate

    Contract type: Full-time or part-time
    Location: Ukraine or Czech Republic
    Contract duration: 1 year, with possibility of extension
    Start date: ASAP

    About the Role

    We are seeking a skilled Energy System Analyst / Software Developer to join our team and contribute to the full lifecycle of energy system modelling and simulation projects. You will focus primarily on energy market modelling, power system optimization, data analysis, and economic assessment, while also engaging in data handling, visualization, and automation development.

    This role combines technical expertise, analytical thinking, and software development skills to deliver high-quality modelling results and insights for our clients.

     

    Key Responsibilities

    • Lead and participate in the design, implementation, testing, and analysis of energy market models using tools such as PLEXOS, Antares or PyPSA
    • Develop and maintain data management, automation, and market modelling processes.
    • Conduct data analysis, economic evaluations, and power system simulations to support decision-making (see the dispatch sketch after this list).
    • Collaborate with client teams to ensure accuracy and consistency in data collection, validation, and central dataset maintenance.
    • Communicate modelling results, methodologies, and assumptions clearly to both technical and non-technical stakeholders.
    • Contribute to technical documentation, reports, and presentations.
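
    To give a flavor of the market-modelling work above, here is a toy merit-order dispatch in plain Python; the generator fleet and demand figure are invented, and real studies would use PLEXOS, Antares, or PyPSA:

    ```python
    # Toy merit-order dispatch: stack generators by marginal cost until
    # demand is met; the marginal unit sets the clearing price.
    def merit_order_dispatch(generators: list[tuple[str, float, float]], demand_mw: float):
        """generators: (name, capacity_mw, marginal_cost_eur_per_mwh)."""
        dispatch, remaining, price = {}, demand_mw, 0.0
        for name, cap, cost in sorted(generators, key=lambda g: g[2]):  # cheapest first
            take = min(cap, remaining)
            if take > 0:
                dispatch[name], remaining, price = take, remaining - take, cost
        return dispatch, price  # clearing price = cost of the marginal unit

    fleet = [("nuclear", 1000, 10.0), ("wind", 600, 0.0), ("gas", 800, 70.0)]
    print(merit_order_dispatch(fleet, 1800))  # wind and nuclear run, gas sets the price
    ```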

     

    Required Qualifications & Experience

    • 2–7 years of relevant professional experience.
    • 2+ years of proven experience in market and economic analysis.
    • Proficiency in Python or R for modelling, simulation, and data analysis.
    • Strong skills in data handling, processing, and visualization.
    • English language proficiency at B2 level or higher.
    • MSc degree in Electrical Engineering, Power Engineering, Computer Science, or related field.

     

    Desirable Skills & Assets

    • Knowledge of optimization theory.
    • Experience in market modelling using PLEXOS, Antares, PyPSA or similar.
    • Familiarity with Git or other version control systems.
    • Ability to translate customer requirements into actionable tasks and deliverables.
    • Experience in applying AI tools to support programming and analytical work.
    • Strong communication and teamwork skills, with the ability to meet deadlines.

     
