Jobs
· 60 views · 2 applications · 23d
Data Engineer (NLP-Focused)
Hybrid Remote · Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate
We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.
What you will do
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
- Implement NLP/LLM-specific data processing: text cleaning and normalization, such as filtering toxic content, de-duplication, de-noising, and detecting and removing personal data (a cleaning sketch follows this list).
- Build targeted SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
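To make the cleaning step above concrete, here is a minimal sketch of the kind of corpus-preparation pass described in this posting. It uses only the Python standard library; the PII patterns, the toxic-word list, and the hash-based de-duplication are illustrative assumptions, not the team's actual pipeline.

```python
import hashlib
import re

# Illustrative patterns; a production pipeline would use much richer PII and toxicity detectors.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{8,}\d")
TOXIC_WORDS = {"badword1", "badword2"}  # placeholder list

def clean_corpus(documents):
    """Normalize, mask PII in, filter, and de-duplicate raw text documents."""
    seen_hashes = set()
    for text in documents:
        text = re.sub(r"\s+", " ", text).strip()        # normalize whitespace
        text = EMAIL_RE.sub("<EMAIL>", text)             # mask personal data
        text = PHONE_RE.sub("<PHONE>", text)
        if any(word in text.lower() for word in TOXIC_WORDS):
            continue                                     # drop toxic documents
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue                                     # exact de-duplication
        seen_hashes.add(digest)
        yield text

if __name__ == "__main__":
    sample = ["Contact me at test@example.com  please.", "Contact me at test@example.com  please."]
    print(list(clean_corpus(sample)))
```

In practice, a step like this would run as a task inside an orchestrated workflow (e.g., Airflow) and write the cleaned shard back to the data lake.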
Qualifications and experience needed
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project's focus. Understanding of FineWeb2 or a similar processing pipeline approach.
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
A plus would be
- Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
- Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
- CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
- Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
- Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
What we offer
- Office or remote: it's up to you. You can work from anywhere, and we will arrange your workplace.
- Remote onboarding.
- Performance bonuses.
- Employee training and learning opportunities through the company's library, internal resources, and partner programs.
- Health and life insurance.
- Wellbeing program and corporate psychologist.
- Reimbursement of expenses for Kyivstar mobile communication.
-
· 82 views · 10 applications · 20d
Middle/Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate
About the project:
Our customer is the European online car market with over 30 million monthly users and a presence in 18 countries. The company is now merging with a similar company in Canada and needs support with this transition. As a Data & Analytics Engineer, you will play a pivotal role in shaping the future of online car markets and enhancing the user experience for millions of car buyers and sellers.
Requirements:
- 5+ years of experience in Data Engineering or Analytics Engineering roles
- Strong experience building and maintaining pipelines in BigQuery, Athena, Glue, and Airflow
- Advanced SQL skills and experience designing dimensional models (star/snowflake)
- Experience with AWS Cloud
- Solid Python skills, especially for data processing and workflow orchestration
- Familiarity with data quality tools like Great Expectations
- Understanding of data governance, privacy, and security principles
- Experience working with large datasets and optimizing performance
- Proactive problem solver who enjoys building scalable, reliable solutions
- English: Upper-Intermediate or higher
- Great communication skills
Responsibilities:
- Collaborate with analysts, engineers, and stakeholders to understand data needs and deliver solutions
- Build and maintain robust data pipelines that deliver clean and timely data
- Organize and transform raw data into well-structured, scalable models
- Ensure data quality and consistency through validation frameworks like Great Expectations (a brief validation sketch follows this list)
- Work with cloud-based tools like Athena and Glue to manage datasets across different domains
- Help set and enforce data governance, security, and privacy standards
- Continuously improve the performance and reliability of data workflows
- Support the integration of modern cloud tools into the broader data platform
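As a rough illustration of the validation framework mentioned above, the sketch below uses Great Expectations' classic pandas API (pre-1.0 releases); newer GX versions expose a different entry point, and the column names are placeholders rather than the client's actual schema.

```python
import great_expectations as ge
import pandas as pd

# Placeholder listing data standing in for a real warehouse extract
df = pd.DataFrame(
    {"listing_id": [1, 2, 2], "price_eur": [9500.0, None, 12000.0]}
)

gdf = ge.from_pandas(df)
gdf.expect_column_values_to_be_unique("listing_id")
gdf.expect_column_values_to_not_be_null("price_eur")
gdf.expect_column_values_to_be_between("price_eur", min_value=0, max_value=500_000)

result = gdf.validate()
print(result.success)  # False here: a duplicate id and a missing price
```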
-
· 67 views · 21 applications · 5d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
Automat-it is where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Senior Data Engineer to play a key role in building our Data & Analytics practice and delivering modern data solutions on AWS for our clients. In this role, you'll be a customer-facing, hands-on technical engineer who designs and implements end-to-end data pipelines and analytics platforms using AWS services like AWS Glue, Amazon OpenSearch Service, Amazon Redshift, and Amazon QuickSight. From migrating legacy ETL workflows to AWS Glue to building scalable data lakes for AI/ML training, you'll ensure our customers can unlock the full value of their data. You'll work closely with client stakeholders (from startup founders and CTOs to data engineers) to create secure, cost-efficient architectures that drive real business impact.
Work location: remote from Ukraine
If you are interested in this opportunity, please submit your CV in English.
Responsibilities
- Design, develop, and deploy AWS-based data and analytics solutions to meet customer requirements. Ensure architectures are highly available, scalable, and cost-efficient.
- Develop dashboards and analytics reports using Amazon QuickSight or equivalent BI tools.
- Migrate and modernize existing data workflows to AWS. Re-architect legacy ETL pipelines to AWS Glue (a minimal Glue job sketch follows this list) and move on-premises data systems to Amazon OpenSearch/Redshift for improved scalability and insights.
- Build and manage multi-modal data lakes and data warehouses for analytics and AI. Integrate structured and unstructured data on AWS (e.g. S3, Redshift) to enable advanced analytics and generative AI model training using tools like SageMaker.
- Implement infrastructure automation and CI/CD for data projects. Use Infrastructure as Code (Terraform) and DevOps best practices to provision AWS resources and continuously integrate/deploy data pipeline code.
- Lead customer workshops and proof-of-concepts (POCs) to demonstrate proposed solutions. Run technical sessions (architecture whiteboards, Well-Architected reviews) to validate designs and accelerate customer adoption.
- Collaborate with engineering teams (Data Scientist, DevOps and MLOps teams) and stakeholders to deliver projects successfully. Ensure solutions follow AWS best practices and security guidelines, and guide client teams in implementing according to the plan.
- Stay up-to-date on emerging data technologies and mentor team members. Continuously learn new AWS services (e.g. AWS Bedrock, Lake Formation) and industry trends, and share knowledge to improve our delivery as we grow the Data & Analytics practice.
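To illustrate the kind of Glue re-architecture referenced above, here is a minimal PySpark Glue job skeleton. It assumes a standard Glue job environment where the awsglue libraries are available; the catalog database, table, field names, and S3 path are placeholders, so treat it as a sketch rather than a customer implementation.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw records from the Glue Data Catalog (database/table names are placeholders)
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="events"
)

# Basic cleanup: drop records without a key, then write curated Parquet back to S3
cleaned = source.filter(f=lambda row: row["event_id"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/events/"},
    format="parquet",
)

job.commit()
```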
Requirements
- 5+ years of experience in data engineering, data analytics, or a related field, including 3+ years of hands-on AWS experience (designing, building, and maintaining data solutions on AWS).
- Production experience with AWS cloud and data services, including building solutions at scale with tools like AWS Glue, Amazon Redshift, Amazon S3, Amazon Kinesis, Amazon OpenSearch Service, etc.
- Skilled in AWS analytics and dashboards tools β hands-on expertise with services such as Amazon QuickSight or other BI tools (Tableau, Power BI) and Amazon Athena.
- Experience with ETL pipelines β ability to build ETL/ELT workflows (using AWS Glue, Spark, Python, SQL).
- Experience with data warehousing and data lakes - ability to design and optimize data lakes (on S3), Amazon Redshift for data warehousing, and Amazon OpenSearch for log/search analytics.
- Proficiency in programming (Python/PySpark) and SQL skills for data processing and analysis.
- Understanding of cloud security and data governance best practices (encryption, IAM, data privacy).
- Excellent communication skills with an ability to explain complex data concepts in clear terms. Comfortable working directly with clients and guiding technical discussions.
- Proven ability to lead end-to-end technical engagements and work effectively in fast-paced, Agile environments.
- AWS certifications, especially in Data Analytics or Machine Learning, are a plus.
- DevOps/MLOps knowledge: experience with Infrastructure as Code (Terraform), CI/CD pipelines, containerization, and AWS AI/ML services (SageMaker, Bedrock) is a plus.
Benefits
- Professional training and certifications covered by the company (AWS, FinOps, Kubernetes, etc.)
- International work environment
- Referral program: enjoy cooperation with your colleagues and get a bonus
- Company events and social gatherings (happy hours, team events, knowledge sharing, etc.)
- Wellbeing and professional coaching
- English classes
- Soft skills training
Country-specific benefits will be discussed during the hiring process.
Automat-it is committed to fostering a workplace that promotes equal opportunities for all and believes that a diverse workforce is crucial to our success. Our recruitment decisions are based on your experience and skills, recognising the value you bring to our team.
-
· 14 views · 2 applications · 19d
IT Infrastructure Administrator
Office Work · Ukraine (Dnipro) · Product · 1 year of experience
Biosphere Corporation is one of the largest producers and distributors of household, hygiene, and professional products in Eastern Europe and Central Asia (TM Freken BOK, Smile, Selpak, Vortex, Novita, PRO service, and many others). We are inviting an IT Infrastructure Administrator to join our team.
Key responsibilities:
- Administration of Active Directory
- Managing group policies
- Managing services via PowerShell
- Administration of VMWare platform
- Administration of Azure Active Directory
- Administration of Exchange 2016/2019 mail servers
- Administration of Exchange Online
- Administration of VMWare Horizon View
Required professional knowledge and skills:
- Experience in writing automation scripts (PowerShell, Python, etc.)
- Skills in working with Azure Active Directory (user and group creation, report generation, configuring synchronization between on-premise and cloud AD)
- Skills in Exchange PowerShell (mailbox creation, search and removal of emails based on criteria, DAG creation and management)
- Experience with Veeam Backup & Replication, VMWare vSphere (vCenter, DRS, vMotion, HA), VMWare Horizon View
- Windows Server 2019/2025 (installation, configuration, and adaptation)
- Diagnostics and troubleshooting
- Working with anti-spam systems
- Managing mail transport systems (exim) and monitoring systems (Zabbix)
We offer:
- Interesting projects and tasks
- Competitive salary (discussed during the interview)
- Convenient work schedule: Mon-Fri, 9:00-18:00; partial remote work possible
- Official employment, paid vacation, and sick leave
- Probation period: 2 months
- Professional growth and training (internal training, reimbursement for external training programs)
- Discounts on Biosphere Corporation products
- Financial assistance (in cases of childbirth, medical treatment, force majeure, or circumstances caused by wartime events, etc.)
Office address: Dnipro, Zaporizke Highway 37 (Right Bank, Topol-1 district).
Learn more about Biosphere Corporation, our strategy, mission, and values at:
http://biosphere-corp.com/
https://www.facebook.com/biosphere.corporation/
Join our team of professionals!
By submitting your CV for this vacancy, you consent to the use of your personal data in accordance with the current legislation of Ukraine.
If your application is successful, we will contact you within 1-2 business days.
-
· 53 views · 0 applications · 19d
Big Data Software Engineer
Full Remote · Poland · Product · 2 years of experience · B2 - Upper Intermediate
Software Engineer - Data Team
Founded in 2009, we are a global leader in online multi-vertical marketplaces. Through our leading brands, we help millions of people worldwide make informed decisions every day. Our proprietary platform harnesses AI and ML technologies to help consumers choose the right products and services tailored to their needs.
We are looking for a strong and passionate Java Software Developer to join our Data group. You'll help us build the data platform and applications that power analytics, reporting, machine learning, and other critical business needs.
Responsibilities
- Design, build & maintain data services with Java, Spring Boot, Kafka, MongoDB, Kubernetes, MySQL, Python, Streamlit, and more.
- Develop large-scale data platforms using Spark, Kafka, Snowflake, Airflow, Flink, Iceberg, and other big-data frameworks.
- Translate product requirements into clear technical designs and deliver impactful projects end-to-end.
- Leverage AI developer tools (e.g., Cursor) and multi-agent frameworks to accelerate development and align output with business goals.
- Create monitoring & operational tooling to ensure data reliability, performance, and usability.
- Contribute to a collaborative engineering culture: participate in code/design reviews, share knowledge, and support teammates.
Requirements
- 2+ years' experience with Big Data and Java (strong juniors or mid-level welcome).
- Experience building, optimizing, and maintaining large-scale Big Data systems with open-source frameworks (Kafka, Spark, Hive, Iceberg, Airflow, etc.).
- 2+ years' experience with Python.
- Strong expertise with SQL; experience with SQL/NoSQL/key-value databases.
- Hands-on experience with Spring / Spring Boot.
- Experience with AWS cloud services (EMR, Aurora, Snowflake, S3, Athena, Glue) is an advantage.
- Proven ability to debug and identify root causes in distributed production platforms.
- Hands-on experience with CI/CD, Microservices, Docker, Kubernetes.
-
· 15 views · 1 application · 19d
PHP Developer / Data Engineer
Hybrid Remote · Poland, Ukraine (Kyiv) · Product · 3 years of experience · B1 - Intermediate · Ukrainian Product 🇺🇦
Skylum allows millions of photographers to make incredible images faster. Our award-winning software automates photo editing with the power of AI yet leaves all the creative control in the hands of the artist.
Join us on our mission to make photo editing enjoyable, easy, and accessible to anyone. You'll be developing products with innovative technologies, providing value and inspiration for customers, and getting inspired in return. Thanks to our incredible team of experts, we've built a collaborative space where you can constantly develop and grow in a supportive way. At the same time, we believe in the freedom to be creative. Our work schedule is flexible, and we trust you to give your best while we provide you with everything you need to make work hassle-free. Skylum is proud to be a Ukrainian company, and we stand with Ukraine not only with words but with actions. We regularly donate to various organizations to help speed up the Ukrainian victory.
Requirements:
- Design and develop scalable backend services using PHP 7 / 8.
- Strong understanding of OOP concepts, design patterns, and clean code principles.
- Extensive experience in MySQL, with expertise in database design, query optimization, and indexing.
- Experience working with NoSQL databases (e.g., Redis).
- Proven experience working on high-load projects
- Understanding of ETL processes and data integration
- Experience working with ClickHouse
- Strong experience with API development
- Strong knowledge of Symfony 6+, yii2
- Experience with RabbitMQ
Nice to Have:
- AWS services
- Payment API (Stripe, SolidGate etc.)
- Docker, GitLab CI
- Python
Responsibilities:
- Data Integration & ETL: Develop and maintain robust ETL pipelines using PHP to process and integrate data from diverse sources.
- API Development: Build and manage secure RESTful APIs to facilitate seamless data exchange between internal and external systems.
- Database Management: Optimize databases and data lakes, including schema design, complex query writing, and performance tuning.
- Data Quality: Implement data validation and error-handling mechanisms to ensure data integrity and accuracy.
- Cross-Functional Collaboration: Partner with data analysts and business teams to gather requirements and support data-driven initiatives.
What we offer:
For personal growth:
- A chance to work with a strong team and a unique opportunity to make substantial contributions to our award-winning photo editing tools;
- An educational allowance to ensure that your skills stay sharp;
- English and German classes to strengthen your capabilities and widen your knowledge.
For comfort:
- A great environment where you'll work with true professionals and amazing colleagues whom you'll call friends quickly;
- The choice of working remotely or in our office space located in Podil, equipped with everything you might need for productive and comfortable work.
For health:
- Medical insurance;
- Twenty-one days of paid sick leave per year;
- Healthy fruit snacks full of vitamins to keep you energized
For leisure:
- Twenty-one days of paid vacation per year;
- Fun times at our frequent team-building activities.
-
· 56 views · 14 applications · 19d
Senior Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
We are looking for an experienced Data Engineer to join a long-term B2C project. The main focus is on building Zero ETL pipelines, as well as maintaining and improving existing ones.
Responsibilities:
- Build and maintain scalable Zero ETL pipelines.
- Design and optimize data warehouses and data lakes on AWS (Glue, Firehose, Lambda, SageMaker); a small Lambda-to-Firehose sketch follows this list.
- Work with structured and unstructured data, ensuring quality and accuracy.
- Optimize query performance and data processing workflows (Spark, SQL, Python).
- Collaborate with engineers, analysts, and business stakeholders to deliver data-driven solutions.
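As a sketch of how the AWS pieces above might fit together, the following hypothetical Lambda handler forwards incoming events to a Kinesis Data Firehose delivery stream; the stream name and event shape are assumptions made for illustration only.

```python
import json

import boto3

firehose = boto3.client("firehose")

def handler(event, context):
    """Hypothetical Lambda: forward incoming records to a Firehose delivery stream."""
    records = [
        {"Data": (json.dumps(item) + "\n").encode("utf-8")}
        for item in event.get("records", [])
    ]
    if records:
        # Firehose accepts at most 500 records per batch call
        firehose.put_record_batch(
            DeliveryStreamName="example-events-stream",  # placeholder name
            Records=records,
        )
    return {"forwarded": len(records)}
```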
Requirements:
- 5+ years of experience in Data Engineering.
- Advanced proficiency in Spark, Python, SQL.
- Expertise with AWS Glue, Firehose, Lambda, SageMaker.
- Experience with ETL tools (dbt, Airflow etc.).
- Background in B2C companies is preferred.
- JavaScript and Data Science knowledge are a plus.
- Degree in Computer Science (preferred, not mandatory).
-
· 25 views · 1 application · 19d
BigData Engineer (Scala, Spark) IRC273773
Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate
Description
Founded in 2007, Rubicon Project's pioneering technology created a new model for the advertising industry. Today, our automated advertising platform is used by the world's leading publishers and applications to transact with top brands around the globe, enabling them to reach more than 1 billion consumers. Rubicon Project operates the largest independent Display Advertising Exchange and Supply Side Platform that automates the buying and selling of Display Advertising across all formats (banner, video) on all devices (desktop browsers, mobile devices, billboards). Rubicon Project auctions over 20 billion ads daily, in real time, each in less than half a second. Rubicon Project is a publicly traded company (NYSE: RUBI) headquartered in Los Angeles, California, USA.
Requirements
- Experience building and operating large-scale, high-throughput, enterprise apps;
- 3+ years of working experience in server-side Scala;
- Working experience with data processing systems (Hadoop, Hive, Kafka, Spark);
- A strong understanding of algorithms, data structures, and an ability to recognise the business and technical trade-offs between different solutions;
- Expertise in threading and concurrency;
- Experience with automated testing frameworks (TDD, Mocking, Unit/Functional/Integration);
- Experience with SQL queries (MySQL is a plus);
- Experience with development and CI tools (Maven, git, Jenkins, Puppet, Crucible, Jira, Airflow, Python is a plus);
- Experience working in a Linux environment;
- Expertise in building software in an agile development environment;
- Demonstrated strong English language verbal and written communication skills.
Job responsibilities
- Write production-ready code and unit tests that meet both system and business requirements;
- Respond to feature requests, bug reports, performance issues, and ad-hoc questions;
- Work collaboratively with multiple teams to deliver quality software.
- Be comfortable with multi-tasking and a fast-paced dev process;
- Support operation of services in production.
-
· 82 views · 15 applications · 11d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
Hi! We are looking for a Senior Data Engineer for a long-term collaboration with a leading global company in digital media analytics. This international organization has offices around the world and helps top brands optimize and secure their online advertising.
Responsibilities:
- Build scalable data pipelines.
- Integrate data from multiple sources.
- Optimize data storage and processing.
- Develop APIs for data access and integration (a minimal FastAPI sketch follows this list).
- Work in cloud infrastructure (GCP).
- Participate in architectural initiatives and collaborate with the Architecture Team on key projects
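For the API-development responsibility above, here is a minimal FastAPI sketch of a data-access endpoint; the route, metric names, and in-memory store are illustrative stand-ins for a warehouse-backed service (BigQuery or Snowflake in the real project).

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Example data-access API")  # illustrative service, not the client's actual API

# In-memory stand-in for a warehouse query layer
METRICS = {"impressions": 125_000, "viewability_rate": 0.71}

@app.get("/metrics/{name}")
def read_metric(name: str):
    """Return a single precomputed metric by name."""
    if name not in METRICS:
        raise HTTPException(status_code=404, detail="metric not found")
    return {"name": name, "value": METRICS[name]}
```

Assuming the file is named app.py, it can be run locally with `uvicorn app:app --reload`.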
Requirements:
- 4+ years of programming experience in object-oriented design and/or functional programming, including Python.
- Excellent SQL query writing abilities and data understanding
- Experience with Airflow and DBT
- Worked with data warehouses like Google BigQuery or Snowflake
- Experience building APIs in Python (FastAPI)
- Cloud environment, Google Cloud Platform
- Container technologies - Docker / Kubernetes
- Understanding of distributed system technologies, standards, and protocols, plus 2+ years of experience working with distributed systems such as Spark and the Kafka ecosystem (Kafka Connect, Kafka Streams, or Kinesis), and building data pipelines at scale
- Spoken and written English
We offer:
- Flexible work schedule: a fixed number of hours to work per month
- 20 days off per year (10 days accrued every 6 months); unused days do not expire
- Reimbursement of 5 sick days per year
- Partial compensation for external courses/conferences (after the completion of the Adaptation Period)
- Partial compensation for external professional certifications
- English group lessons in the office with teachers (free of charge; 2 times a week)
- Reimbursement for sports or massage
- Large library with a scheduled purchase of new books every half a year
- Yearly Individual Development Plan (after the completion of the Adaptation Period)
Send us your resume! We'll be glad to talk with you in more detail!
-
· 78 views · 9 applications · 19d
Data Engineer (Middle, Middle+)
Full Remote · Countries of Europe or Ukraine · 4 years of experience · B1 - Intermediate
We are helping to find a Data Engineer (Middle, Middle+) for our client (a startup: a performance marketing and traffic arbitrage team focused on scaling marketing campaigns using AI automation).
About the Role:
We are expanding our Data & AI team and looking for a skilled Data Engineer with a strong Python backend background who has transitioned into data engineering. This role is ideal for someone who started as a backend developer (Python) and has at least 1+ year of hands-on data engineering experience, now aiming to grow further in this domain.
You will work closely with our current Data Engineer and AI Engineer to build scalable data platforms, pipelines, and services. This is a high-autonomy position within a young team where you'll influence data infrastructure decisions, design systems from scratch, and help shape our data-driven foundation.
Key Responsibilities:
- Design, build, and maintain data pipelines and services to support analytics, ML, and AI solutions.
- Work with distributed systems, optimize data processing, and handle large-scale data workloads.
- Collaborate with AI Engineers to support model integration (backend support for ML models, not full deployment responsibility).
- Design solutions for vague or high-level business requirements with strong problem-solving skills.
- Contribute to building a scalable data platform and help set best practices for data engineering in the company.
- Participate in rapid prototyping (PoCs and MVPs), deploying early solutions, and iterating quickly.
Requirements:
- 4 years of professional experience (with at least 1 year dedicated to data engineering).
- Strong Python backend development experience (service creation, APIs).
- Good understanding of data processing concepts, distributed systems, and system evolution.
- Experience with cloud platforms (AWS preferred, GCP acceptable).
- Familiarity with Docker and containerized environments.
- Experience with Spark, Kubernetes, and optimization of high-load systems.
- Ability to handle loosely defined requirements, propose solutions, and work independently.
- A proactive mindset: technical initiatives tied to business impact are highly valued.
- English sufficient to read technical documentation (working language: Ukrainian/Russian).
Nice-to-Haves:
- Exposure to front-end development (JavaScript/TypeScript): not required, but a plus.
- Experience with scalable data architectures, stream processing, and data modeling.
- Understanding of the business impact of technical optimizations.
Team & Process:
- You'll join a growing Data & AI department responsible for data infrastructure, AI agents, and analytics systems.
Two interview stages:
- Technical Interview (Python & Data Engineering focus).
- Cultural Fit Interview (expectations, career growth, alignment).
- Autonomy and decision-making freedom in a small, fast-moving team.
-
· 33 views · 0 applications · 18d
Data Ops/Engineer (with Capital markets exp.)
Full Remote · Ukraine · 8 years of experience · B2 - Upper Intermediate
Project Description:
Develop a scalable data collection, storage, and distribution platform to house data from vendors, research providers, exchanges, PBs, and web scraping. Make data available to systematic & fundamental PMs and enterprise functions: Ops, Risk, Trading, and Compliance. Develop internal data products and analytics.
Responsibilities:
Web scraping using scripts/APIs/tools (a brief scraping sketch appears at the end of this posting)
Help build and maintain a greenfield data platform running on Snowflake and AWS
Understand the existing pipelines and enhance them for new requirements.
Onboarding new data providers
Data migration projects
Mandatory Skills Description:
- 8+ years of experience as a Data Engineer
- SQL
- Python
- Linux
- Containerization (Docker, Kubernetes)
- Good communication skills
- AWS
- Strong on the DevOps side (K8s, Docker, Jenkins)
- Ready to work in the EU time zone
- Capital markets experience
Nice-to-Have Skills Description:
- Market Data Projects
- Snowflake is a big plus
- Airflow
Languages:
- English: B2 Upper Intermediate
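As a rough sketch of the scraping responsibility mentioned above, the snippet below pulls a page and extracts table rows with requests and BeautifulSoup; the URL, selectors, and CSV output are illustrative assumptions, and a production scraper would add retries, rate limiting, and proxy rotation.

```python
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/market-data"  # placeholder source

def scrape_rates(url: str, out_path: str) -> int:
    """Fetch a page and persist its table rows to a CSV file."""
    response = requests.get(url, timeout=30, headers={"User-Agent": "data-platform-bot/0.1"})
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    rows = []
    for tr in soup.select("table tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:
            rows.append(cells)

    with open(out_path, "w", newline="", encoding="utf-8") as fh:
        csv.writer(fh).writerows(rows)
    return len(rows)

# scrape_rates(URL, "rates.csv")
```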
-
· 149 views · 19 applications · 17d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 1 year of experience · B2 - Upper Intermediate
Hello everyone!
We are looking for a Data Engineer to join our team and help build and maintain modern data pipelines.
Weβre seeking a Data Engineer to join our growing team. Youβll work with our data infrastructure built on AWS, focusing on data transformation, pipeline development, and database management.
This is an excellent opportunity to grow your skills in a fast-paced startup environment while working with modern data technologies.
Project Idea
As a Data Engineer, you will work directly with the company's data infrastructure built on AWS. Your focus will be on data transformation, pipeline development, and database management, ensuring that data is reliable, scalable, and accessible for business needs. You'll get hands-on experience with AWS services (S3, RDS, EC2), contribute to building efficient ETL/ELT processes, and help optimize how data flows across the organization. This is a great opportunity to grow your skills in cloud-based data engineering while collaborating with a U.S.-based client and an international team.
What is the team size and structure?
You'll be joining a growing team that includes an Architect, a Senior Node.js Engineer, and a Project Manager, working in close collaboration with the client.
How many stages of the interview are there?
- Interview with the Recruiter: up to 30 min.
- Cultural interview: up to 1 hour.
- Technical interview: up to 1 hour.
- Call with the client (optional): up to 1 hour.
Requirements:
- 1-3 years of experience in data engineering or related field;
- Strong proficiency in PostgreSQL;
- Solid Python programming skills: experience with data manipulation libraries (pandas, numpy);
- SQL expertise;
- Experience with AWS core services (S3, RDS, EC2);
- Understanding of data pipeline concepts and ETL/ELT processes;
- Familiarity with version control (Git) and collaborative development practices;
- Upper-intermediate or higher level of English.
Responsibilities:
- Build and maintain data integrations using Python for ETL/ELT processes
- Write efficient SQL queries to extract, transform, and analyze data across PostgreSQL and Snowflake
- Collaborate with the engineering team to ensure data quality and reliability
- Work with AWS services including S3, RDS, and EC2 to support data infrastructure
- Collect and consolidate data from various sources, including databases and REST API integrations, for further analysis (a small ingestion sketch follows this list)
- Participate in code reviews and follow best practices for data engineering
- Monitor data pipeline performance and troubleshoot issues as they arise
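To illustrate the collection-and-consolidation bullet above, here is a small sketch that pulls records from a REST endpoint, normalizes them with pandas, and lands a Parquet file on S3; the endpoint, bucket, and key are placeholders, and writing Parquet assumes pyarrow (or fastparquet) is installed.

```python
import io

import boto3
import pandas as pd
import requests

def load_orders_to_s3(api_url: str, bucket: str, key: str) -> int:
    """Pull records from a REST endpoint, normalize them, and store Parquet on S3."""
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    df = pd.json_normalize(response.json())   # flatten nested JSON into columns

    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)         # requires pyarrow or fastparquet
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())
    return len(df)

# Example call with placeholder names:
# load_orders_to_s3("https://api.example.com/orders", "example-raw-bucket", "orders/2024-01-01.parquet")
```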
-
· 63 views · 2 applications · 3d
Senior Data (Analytics) Engineer
Ukraine · 4 years of experience · B2 - Upper Intermediate
About the project:
Our customer is the European online car market with over 30 million monthly users and a presence in 18 countries. The company is now merging with a similar company in Canada and needs support with this transition. As a Data & Analytics Engineer, you will play a pivotal role in shaping the future of online car markets and enhancing the user experience for millions of car buyers and sellers.
Requirements:
- 5+ years of experience in Data Engineering or Analytics Engineering roles
- Strong experience building and maintaining pipelines in BigQuery, Athena, Glue, and Airflow
- Advanced SQL skills and experience designing dimensional models (star/snowflake)
- Experience with AWS Cloud
- Solid Python skills, especially for data processing and workflow orchestration
- Familiarity with data quality tools like Great Expectations
- Understanding of data governance, privacy, and security principles
- Experience working with large datasets and optimizing performance
- Proactive problem solver who enjoys building scalable, reliable solutions
- English: Upper-Intermediate or higher
- Great communication skills
Responsibilities:
- Collaborate with analysts, engineers, and stakeholders to understand data needs and deliver solutions
- Build and maintain robust data pipelines that deliver clean and timely data
- Organize and transform raw data into well-structured, scalable models
- Ensure data quality and consistency through validation frameworks like Great Expectations
- Work with cloud-based tools like Athena and Glue to manage datasets across different domains
- Help set and enforce data governance, security, and privacy standards
- Continuously improve the performance and reliability of data workflows
- Support the integration of modern cloud tools into the broader data platform
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
-
· 55 views · 1 application · 17d
Senior Data Engineer
Hybrid Remote · Ukraine (Kyiv, Lviv) · Product · 3 years of experience · A2 - Elementary
Solidgate is a payment processing and orchestration platform that helps thousands of businesses to accept payments online. We develop cutting-edge fintech solutions to facilitate seamless payment processing for merchants across 150+ countries, spanning Europe to LATAM, the USA to Asia. We are proud to be a part of the history of every company we work with: our infrastructure enables quick scaling to new markets and maximizes revenue.
Key facts:
- Offices in Ukraine, Poland, and Cyprus
- 250+ team members
- 200+ clients went global (Ukraine, US, EU)
- Visa and Mastercard Principal Membership
- EMI license in the EU
Solidgate is part of Endeavor, a global community of the world's most impactful entrepreneurs. We're proud to be the first payment orchestrator from Europe to join, and to share our expertise within a network of outstanding global companies.
Here, we're building a strong engineering culture: designing architectures trusted by global leaders. Our engineers don't just maintain systems; they create them. We believe the payments world is shaped by people who think big, act responsibly, and approach challenges with curiosity and drive. That's exactly the kind of teammate we want on our team.
We're now looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.
You'll work side by side with the Senior Engineering Manager of the Platform stream and a team of four data enthusiasts to build the architecture that will become the foundation for all our data products.
Explore our technology stack: https://solidgate-tech.github.io/
What you'll own:
- Build the Data Platform from scratch (architecture, design, implementation, scaling)
- Implement a Data Lake approach and layered architecture (bronze → silver data layers)
- Integrate streaming processing into data engineering practices (a minimal consumer sketch follows this list)
- Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability
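As a minimal illustration of the streaming item above, this sketch consumes JSON events with kafka-python and appends them to a bronze-layer file; the topic name, broker address, and file sink are assumptions, and the actual platform may use confluent-kafka, Kinesis, or a proper lake writer instead.

```python
import json

from kafka import KafkaConsumer  # kafka-python

# Minimal consumer that lands raw payment events into a bronze-layer file.
consumer = KafkaConsumer(
    "payment-events",                      # placeholder topic name
    bootstrap_servers="localhost:9092",    # placeholder broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

with open("bronze_payment_events.jsonl", "a", encoding="utf-8") as sink:
    for message in consumer:
        sink.write(json.dumps(message.value) + "\n")
```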
What you need to join us:
- 3+ years of commercial experience as a Data Engineer
- Strong hands-on experience building data solutions in Python
- Confident SQL skills
- Experience with Airflow or similar tools
- Experience building and running DWH (BigQuery / Snowflake / Redshift)
- Expertise in streaming stacks (Kafka / AWS Kinesis)
- Experience with AWS infrastructure: S3, Glue, Athena
- High attention to detail
- Proactive, self-driven mindset
- Continuous-learning mentality
- Strong delivery focus and ownership in a changing environment
Nice to have:
- Background as an analyst or Python developer
- Experience with DBT, Grafana, Docker, LakeHouse approaches
Competitive corporate benefits:
- more than 30 days off during the year (20 working days of vacation + days off for national holidays)
- health insurance and corporate doctor
- free snacks, breakfasts, and lunches in the office
- full coverage of professional training (courses, conferences, certifications)
- yearly performance review
- sports compensation
- competitive salary
- Apple equipment
Ready to become a part of the team? Then cast aside all doubts and click "apply".
-
· 65 views · 13 applications · 17d
Data Engineer (Microsoft Fabric)
Full Remote · EU · 1 year of experience · B2 - Upper Intermediate
QA Madness is a European IT service company that focuses strongly on QA and cybersecurity. The company was founded in 2013 and is headquartered in Poland.
Currently, we are searching for an experienced Data Engineer (Microsoft Fabric) to become a great addition to our client's team.
Responsibilities:
- Build and maintain ETL/ELT pipelines using Azure Data Factory, Spark Notebooks, Fabric Pipelines;
- Design and implement Lakehouse architectures (Medallion: Bronze → Silver → Gold); a sketch of a bronze-to-silver step follows this list;
- Handle ingestion from diverse sources into Microsoft Fabric and OneLake;
- Ensure data quality, security and lineage using Microsoft Purview, RBAC, and audit trails;
- Collaborate with BI, DevOps, Cloud engineers and business stakeholders;
- Contribute to modernizing DWH/BI ecosystems and enabling analytics at scale.
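To make the Medallion flow above concrete, here is a sketch of a bronze-to-silver transformation as it might run in a Fabric Spark notebook; the lakehouse table paths, column names, and casts are illustrative assumptions rather than the client's actual model.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested orders (paths and columns are placeholders)
bronze = spark.read.format("delta").load("Tables/bronze_orders")

# Silver: typed, de-duplicated, with basic quality filters applied
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

silver.write.format("delta").mode("overwrite").save("Tables/silver_orders")
```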
Required Skills:
- 1+ years of experience as a Data Engineer, DWH developer, or ETL engineer;
- Strong SQL skills and experience working with large datasets;
- Hands-on experience with Spark, Azure Data Factory, or Databricks;
- Understanding of Lakehouse/DWH concepts and data modeling;
- Upper-intermediate English level.
Soft Skills:
- Analytical thinking and the ability to quickly understand complex data structures;
- Teamwork and collaboration with various stakeholders (developers, analysts, managers).
Please note, this job is a full-time position, and it is relevant only if you meet all requirements. Any candidate who fails to meet the requirements will not be considered for the job.