Jobs


    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Hi! We are looking for a Senior Data Engineer for a long-term collaboration with a leading global company in digital media analytics. This international organization has offices around the world and helps top brands optimize and secure their online advertising.

     

    Responsibilities:

    • Build scalable data pipelines (see the sketch after this list).
    • Integrate data from multiple sources.
    • Optimize data storage and processing.
    • Develop APIs for data access and integration.
    • Work in cloud infrastructure (GCP).
    • Participate in architectural initiatives and collaborate with the Architecture Team on key projects.
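
    For illustration, a minimal sketch of the kind of pipeline orchestration this role involves, assuming Airflow 2.x; the task names and the load step are hypothetical stand-ins, not details from this vacancy:

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator


        def extract_orders(**context):
            # Hypothetical extract step: pull rows from a source system or API.
            return [{"order_id": 1, "amount": 42.0}]


        def load_to_bigquery(**context):
            # Hypothetical load step: a real pipeline would write these rows to a
            # BigQuery table (for example via the google-cloud-bigquery client).
            rows = context["ti"].xcom_pull(task_ids="extract_orders")
            print(f"would load {len(rows)} rows")


        with DAG(
            dag_id="orders_daily",
            start_date=datetime(2024, 1, 1),
            schedule_interval="@daily",  # newer Airflow versions use `schedule=`
            catchup=False,
        ) as dag:
            extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
            load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
            extract >> load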

     

    Requirements:

    • 4+ years of programming experience with object-oriented design and/or functional programming, including Python.
    • Excellent SQL query-writing abilities and data understanding
    • Experience with Airflow and DBT
    • Experience with data warehouses such as Google BigQuery or Snowflake
    • Experience building APIs in Python (FastAPI)
    • Experience with a cloud environment, Google Cloud Platform
    • Container technologies - Docker / Kubernetes
    • Understanding of distributed system technologies, standards, and protocols, with 2+ years of experience working in distributed systems such as Spark and the Kafka ecosystem (Kafka Connect, Kafka Streams, or Kinesis) and building data pipelines at scale
    • Spoken and written English

     

    We offer:

    • Flexible work schedule — a fixed number of hours that you need to work per month
    • 20 days off per year (accrued as 10 days every 6 months); unused days do not expire
    • Reimbursement of 5 sick days per year
    • Partial compensation for external courses/conferences (after the completion of the Adaptation Period)
    • Partial compensation for external professional certifications
    • English group lessons in the office with teachers (free of charge; 2 times a week)
    • Reimbursement for sports or massage
    • Large library with a scheduled purchase of new books every half a year
    • Yearly Individual Development Plan (after the completion of the Adaptation Period)

     

    Send us your resume! We'll be glad to talk with you in more detail!


    Data Engineer (Middle, Middle+)

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B1 - Intermediate

    We are helping to find a Data Engineer (Middle, Middle+) for our client (a startup — a performance marketing and traffic arbitrage team focused on scaling marketing campaigns using AI automation).
     

    About the Role:
     

    We are expanding our Data & AI team and looking for a skilled Data Engineer with a strong Python backend background who has transitioned into data engineering. This role is ideal for someone who started as a backend developer (Python) and has at least one year of hands-on data engineering experience, now aiming to grow further in this domain.

    You will work closely with our current Data Engineer and AI Engineer to build scalable data platforms, pipelines, and services. This is a high-autonomy position within a young team where you'll influence data infrastructure decisions, design systems from scratch, and help shape our data-driven foundation.
     

    Key Responsibilities: 
     

    • Design, build, and maintain data pipelines and services to support analytics, ML, and AI solutions.
    • Work with distributed systems, optimize data processing, and handle large-scale data workloads.
    • Collaborate with AI Engineers to support model integration (backend support for ML models, not full deployment responsibility).
    • Design solutions for vague or high-level business requirements with strong problem-solving skills.
    • Contribute to building a scalable data platform and help set best practices for data engineering in the company.
    • Participate in rapid prototyping (PoCs and MVPs), deploying early solutions, and iterating quickly.
       

    Requirements:
     

    • 4 years of professional experience (with at least 1 year dedicated to data engineering).
    • Strong Python backend development experience (service creation, APIs).
    • Good understanding of data processing concepts, distributed systems, and system evolution.
    • Experience with cloud platforms (AWS preferred, GCP acceptable).
    • Familiarity with Docker and containerized environments.
    • Experience with Spark, Kubernetes, and optimization of high-load systems.
    • Ability to handle loosely defined requirements, propose solutions, and work independently.
    • A proactive mindset — technical initiatives tied to business impact are highly valued.
    • English sufficient to read technical documentation (working language: Ukrainian/Russian).
       

    Nice-to-Haves:
     

    • Exposure to front-end development (JavaScript/TypeScript) — not required, but a plus.
    • Experience with scalable data architectures, stream processing, and data modeling.
    • Understanding of the business impact of technical optimizations.
       

    Team & Process:
     

    • You'll join a growing Data & AI department responsible for data infrastructure, AI agents, and analytics systems.
    • Two interview stages:
    • Technical Interview (Python & Data Engineering focus).
    • Cultural Fit Interview (expectations, career growth, alignment).
    • Autonomy and decision-making freedom in a small, fast-moving team.

    Data Ops/Engineer (with Capital markets exp.)

    Full Remote · Ukraine · 8 years of experience · B2 - Upper Intermediate
    • Project Description:

      Develop a scalable data collection, storage, and distribution platform to house data from vendors, research providers, exchanges, PBs, and web scraping. Make data available to systematic and fundamental PMs and to enterprise functions: Ops, Risk, Trading, and Compliance. Develop internal data products and analytics.

    • Responsibilities:

      Web scraping using scripts, APIs, and tools
      Help build and maintain a greenfield data platform running on Snowflake and AWS
      Understand the existing pipelines and enhance them for new requirements
      Onboard new data providers
      Support data migration projects

    • Mandatory Skills Description:

      • 8+ years of experience as a Data Engineer
      • SQL
      • Python
      • Linux
      • Containerization (Docker, Kubernetes)
      • Good communication skills
      • AWS
      • Strong DevOps skills (K8s, Docker, Jenkins)
      • Readiness to work in the EU time zone
      • Capital markets experience

    • Nice-to-Have Skills Description:

      • Market data projects
      • Snowflake (a big plus)
      • Airflow

    • Languages:
      • English: B2 Upper Intermediate

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 1 year of experience · B2 - Upper Intermediate

    Hello everyone!

    We are looking for a Data Engineer to join our team and help build and maintain modern data pipelines.

    We're seeking a Data Engineer to join our growing team. You'll work with our data infrastructure built on AWS, focusing on data transformation, pipeline development, and database management.

    This is an excellent opportunity to grow your skills in a fast-paced startup environment while working with modern data technologies.

     

    Project Idea

    As a Data Engineer, you will work directly with the company's data infrastructure built on AWS. Your focus will be on data transformation, pipeline development, and database management, ensuring that data is reliable, scalable, and accessible for business needs. You'll get hands-on experience with AWS services (S3, RDS, EC2), contribute to building efficient ETL/ELT processes, and help optimize how data flows across the organization. This is a great opportunity to grow your skills in cloud-based data engineering while collaborating with a U.S.-based client and an international team.
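
    As a rough sketch of that kind of ETL/ELT flow (assuming boto3, pandas, and SQLAlchemy; the bucket, key, table, and connection details below are made-up placeholders, not project specifics):

        import io

        import boto3
        import pandas as pd
        from sqlalchemy import create_engine

        # Placeholder locations; real values would come from configuration.
        BUCKET, KEY = "example-raw-data", "events/2024-01-01.csv"
        PG_URL = "postgresql+psycopg2://user:password@localhost:5432/analytics"

        def run_etl() -> None:
            # Extract: read a raw CSV export from S3.
            s3 = boto3.client("s3")
            body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
            df = pd.read_csv(io.BytesIO(body))

            # Transform: basic cleaning before loading.
            df = df.dropna(subset=["user_id"]).drop_duplicates()
            df["event_ts"] = pd.to_datetime(df["event_ts"], errors="coerce")

            # Load: append into a PostgreSQL table (e.g. on RDS).
            engine = create_engine(PG_URL)
            df.to_sql("events_clean", engine, if_exists="append", index=False)

        if __name__ == "__main__":
            run_etl()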

     

    What is the team size and structure?

    You'll be joining a growing team that includes an Architect, a Senior Node.js Engineer, and a Project Manager, working in close collaboration with the client.

     

    How many stages of the interview are there?

    • Interview with the Recruiter — up to 30 min.
    • Cultural interview — up to 1 hour.
    • Technical interview — up to 1 hour.
    • Call with the client (optional) — up to 1 hour.

     

    Requirements:

    • 1-3 years of experience in data engineering or related field;
    • Strong proficiency in PostgreSQL;
    • Solid Python programming skills — experience with data manipulation libraries (pandas, numpy);
    • SQL expertise;
    • Experience with AWS core services (S3, RDS, EC2);
    • Understanding of data pipeline concepts and ETL/ELT processes;
    • Familiarity with version control (Git) and collaborative development practices;
    • Upper-intermediate or higher level of English.

     

    Responsibilities:

    • Build and maintain data integrations using Python for ETL/ELT processes
    • Write efficient SQL queries to extract, transform, and analyze data across PostgreSQL and Snowflake
    • Collaborate with the engineering team to ensure data quality and reliability
    • Work with AWS services including S3, RDS, and EC2 to support data infrastructure
    • Collect and consolidate data from various sources, including databases and REST API integrations, for further analysis
    • Participate in code reviews and follow best practices for data engineering
    • Monitor data pipeline performance and troubleshoot issues as they arise

    Senior Data (Analytics) Engineer

    Ukraine · 4 years of experience · B2 - Upper Intermediate

    About the project:

    Our customer is the European online car market with over 30 million monthly users and a market presence in 18 countries. The company is now merging with a similar company in Canada and needs support with this transition. As a Data & Analytics Engineer, you will play a pivotal role in shaping the future of online car markets and enhancing the user experience for millions of car buyers and sellers.

     

    Requirements:

    • 5+ years of experience in Data Engineering or Analytics Engineering roles
    • Strong experience building and maintaining pipelines in BigQuery, Athena, Glue, and Airflow
    • Advanced SQL skills and experience designing dimensional models (star/snowflake)
    • Experience with AWS Cloud
    • Solid Python skills, especially for data processing and workflow orchestration
    • Familiarity with data quality tools like Great Expectations
    • Understanding of data governance, privacy, and security principles
    • Experience working with large datasets and optimizing performance
    • Proactive problem solver who enjoys building scalable, reliable solutions
    • Upper-Intermediate English and great communication skills

       

    Responsibilities:

    • Collaborate with analysts, engineers, and stakeholders to understand data needs and deliver solutions
    • Build and maintain robust data pipelines that deliver clean and timely data
    • Organize and transform raw data into well-structured, scalable models
    • Ensure data quality and consistency through validation frameworks like Great Expectations (see the sketch after this list)
    • Work with cloud-based tools like Athena and Glue to manage datasets across different domains
    • Help set and enforce data governance, security, and privacy standards
    • Continuously improve the performance and reliability of data workflows
    • Support the integration of modern cloud tools into the broader data platform
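
    As a loose sketch of the kind of checks such a validation framework formalizes (plain pandas here for brevity; in practice these rules would live in Great Expectations expectation suites, and the table and column names are hypothetical):

        import pandas as pd

        def validate_listings(df: pd.DataFrame) -> list[str]:
            # Return a list of data-quality failures for a hypothetical car-listings table.
            failures = []
            if df["listing_id"].isna().any():
                failures.append("listing_id contains nulls")
            if df["listing_id"].duplicated().any():
                failures.append("listing_id is not unique")
            if not df["price"].between(0, 10_000_000).all():
                failures.append("price outside the expected range")
            if not df["country_code"].isin(["DE", "FR", "IT", "CA"]).all():
                failures.append("unexpected country_code values")
            return failures

        # A pipeline run would typically fail or alert when any check does not pass:
        # failures = validate_listings(df)
        # if failures:
        #     raise ValueError(f"data quality checks failed: {failures}")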

     

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers


    Senior Data Engineer

    Hybrid Remote · Ukraine (Kyiv, Lviv) · Product · 3 years of experience · A2 - Elementary

    Solidgate is a payment processing and orchestration platform that helps thousands of businesses accept payments online. We develop cutting-edge fintech solutions to facilitate seamless payment processing for merchants across 150+ countries, spanning Europe to LATAM, the USA to Asia. We are proud to be a part of the history of every company we work with: our infrastructure enables quick scaling to new markets and maximizes revenue.
     

    Key facts:

    • Offices in Ukraine, Poland, and Cyprus
    • 250+ team members
    • 200+ clients went global (Ukraine, US, EU)
    • Visa and Mastercard Principal Membership
    • EMI license in the EU
       

    Solidgate is part of Endeavor — a global community of the world's most impactful entrepreneurs. We're proud to be the first payment orchestrator from Europe to join — and to share our expertise within a network of outstanding global companies.
     

    Here, we're building a strong engineering culture: designing architectures trusted by global leaders. Our engineers don't just maintain systems — they create them. We believe the payments world is shaped by people who think big, act responsibly, and approach challenges with curiosity and drive. That's exactly the kind of teammate we want on our team.
     

    We're now looking for a Senior Data Engineer who will own the end-to-end construction of our Data Platform. The mission of the role is to build products that allow other teams to quickly launch, scale, and manage their own data-driven solutions independently.
     

    You'll work side-by-side with the Senior Engineering Manager of the Platform stream and a team of four data enthusiasts to build the architecture that will become the foundation for all our data products.

    Explore our technology stack ➡️ https://solidgate-tech.github.io/
     

    What you'll own:
    — Build the Data Platform from scratch (architecture, design, implementation, scaling)
    — Implement a Data Lake approach and Layered Architecture (bronze → silver data layers)
    — Integrate streaming processing into data engineering practices
    — Foster a strong engineering culture with the team and drive best practices in data quality, observability, and reliability
     

    What you need to join us:
    — 3+ years of commercial experience as a Data Engineer
    — Strong hands-on experience building data solutions in Python
    — Confident SQL skills
    — Experience with Airflow or similar tools
    — Experience building and running DWH (BigQuery / Snowflake / Redshift)
    — Expertise in streaming stacks (Kafka / AWS Kinesis)
    — Experience with AWS infrastructure: S3, Glue, Athena
    — High attention to detail
    — Proactive, self-driven mindset
    — Continuous-learning mentality
    — Strong delivery focus and ownership in a changing environment
     

    Nice to have:
    — Background as an analyst or Python developer
    — Experience with DBT, Grafana, Docker, LakeHouse approaches
     

    Competitive corporate benefits:

    • more than 30 days off during the year (20 working days of vacation + days off for national holidays)
    • health insurance and corporate doctor
    • free snacks, breakfasts, and lunches in the office
    • full coverage of professional training (courses, conferences, certifications)
    • yearly performance review 
    • sports compensation
    • competitive salary
    • Apple equipment
       

    📩 Ready to become a part of the team? Then cast aside all doubts and click "apply".


    Data Engineer (Microsoft Fabric)

    Full Remote · EU · 1 year of experience · B2 - Upper Intermediate

    QA Madness is a European IT service company that focuses strongly on QA and cybersecurity. The company was founded in 2013 and is headquartered in Poland.

    We are currently searching for an experienced Data Engineer (Microsoft Fabric) to become a great addition to our client's team.

     

     

         Responsibilities:

    • Build and maintain ETL/ELT pipelines using Azure Data Factory, Spark Notebooks, Fabric Pipelines;
    • Design and implement Lakehouse architectures (Medallion: Bronze → Silver → Gold; see the sketch after this list);
    • Handle ingestion from diverse sources into Microsoft Fabric and OneLake;
    • Ensure data quality, security and lineage using Microsoft Purview, RBAC, and audit trails;
    • Collaborate with BI, DevOps, Cloud engineers and business stakeholders;
    • Contribute to modernizing DWH/BI ecosystems and enabling analytics at scale.
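
    A minimal sketch of one Bronze → Silver step of that pattern, assuming a Spark session such as a Fabric or Databricks notebook provides; the lakehouse table and column names are hypothetical:

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.getOrCreate()

        # Bronze: raw ingested records, kept as-is (hypothetical table name).
        bronze = spark.read.table("lakehouse.bronze_orders")

        # Silver: de-duplicated, typed, cleaned records ready for modelling.
        silver = (
            bronze
            .dropDuplicates(["order_id"])
            .filter(F.col("order_id").isNotNull())
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        )

        # Overwrite the Silver table; Gold aggregates would then be built on top.
        silver.write.mode("overwrite").saveAsTable("lakehouse.silver_orders")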

       

      Required Skills:

    • 1+ years of experience as a Data Engineer, DWH developer, or ETL engineer;
    • Strong SQL skills and experience working with large datasets;
    • Hands-on experience with Spark, Azure Data Factory, or Databricks;
    • Understanding of Lakehouse/DWH concepts and data modeling;
    • Upper-intermediate English level.

     

         Soft Skills:

    • Analytical thinking and the ability to quickly understand complex data structures;
    • Teamwork and collaboration with various stakeholders (developers, analysts, managers).

     

    Please note, this job is a full-time position, and it is relevant only if you meet all requirements. Any candidate who fails to meet the requirements will not be considered for the job.

     


    Data Engineer

    Full Remote · EU · 1 year of experience · B2 - Upper Intermediate

    QA Madness is a European IT service company that focuses strongly on QA and cybersecurity. The company was founded in 2013 and is headquartered in Poland. We are currently searching for an experienced Data Engineer (Microsoft Fabric) to become a great addition to our client's team.

    Responsibilities:
    • Build and maintain ETL/ELT pipelines using Azure Data Factory, Spark Notebooks, Fabric Pipelines;
    • Design and implement Lakehouse architectures (Medallion: Bronze → Silver → Gold);
    • Handle ingestion from diverse sources into Microsoft Fabric and OneLake;
    • Ensure data quality, security and lineage using Microsoft Purview, RBAC, and audit trails;
    • Collaborate with BI, DevOps, Cloud engineers and business stakeholders;
    • Contribute to modernizing DWH/BI ecosystems and enabling analytics at scale.

    Required Skills:
    • 1+ years of experience as a Data Engineer, DWH developer, or ETL engineer;
    • Strong SQL skills and experience working with large datasets;
    • Hands-on experience with Spark, Azure Data Factory, or Databricks;
    • Understanding of Lakehouse/DWH concepts and data modeling;
    • Upper-intermediate English level.

    Soft Skills:
    • Analytical thinking and the ability to quickly understand complex data structures;
    • Teamwork and collaboration with various stakeholders (developers, analysts, managers).

    Please note, this job is a full-time position, and it is relevant only if you meet all requirements. Any candidate who fails to meet the requirements will not be considered for the job. 
    Your application will be considered only once you have completed the questionnaire and uploaded your CV in English.


     


    Expert Data Engineer (GCP)

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Ciklum is looking for a Lead Data Engineer to join our team full-time in Ukraine.

    We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

    About the role:

    As a Lead Data Engineer, you will become part of a cross-functional development team working to cut the company's costs on Big Data processing.

    With almost 100 million active users across 25 countries, they're a global food tech company. As a recently formed team, they have many opportunities and ideas for sharing value back to the customers within their continuously expanding platform. They're looking for talented and trusted engineers to help them impress their customers.

    Responsibilities

    • Lead efforts to optimize BigQuery performance, identify and resolve inefficient queries, and implement best practices for data processing at scale
    • Proactively analyze SQL code for antipatterns and suggest refactoring to improve query execution, cost efficiency, and maintainability
    • Develop and implement strategies for BigQuery storage optimization, including data partitioning, clustering, and archiving (see the sketch after this list)
    • Collaborate extensively with various data teams to understand their pipelines, identify optimization opportunities, and provide guidance on efficient BigQuery usage
    • Design, build, and maintain robust and cost-effective data solutions that leverage BigQuery capabilities to meet evolving business needs
    • Provide technical leadership and mentorship to data engineers, fostering a culture of continuous improvement in data pipeline efficiency and quality
    • Develop and maintain documentation for BigQuery optimization techniques, best practices, and standard operating procedures
    • Participate in code reviews, ensuring adherence to BigQuery optimization principles and high-quality SQL code standards
    • Research and evaluate new BigQuery features and related technologies to enhance data processing capabilities and efficiency
    • Act as a subject matter expert for BigQuery, advising on architectural decisions and contributing to the overall data strategy
    • Provide support for the maintenance, optimization, and eventual decommissioning of Python-based data applications and pipelines
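
    As a hedged illustration of the partitioning and clustering levers mentioned above, using the google-cloud-bigquery client; the dataset, table, and column names are hypothetical:

        from google.cloud import bigquery

        client = bigquery.Client()

        # Rebuild a hypothetical events table partitioned by day and clustered on the
        # columns most queries filter by, so scans (and cost) stay proportional to the
        # data actually needed.
        ddl = """
        CREATE OR REPLACE TABLE analytics.events_optimized
        PARTITION BY DATE(event_ts)
        CLUSTER BY country, user_id
        AS
        SELECT * FROM analytics.events_raw
        """
        client.query(ddl).result()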

    Requirements

    We know that sometimes you can't tick every box. We would still love to hear from you if you think you're a good fit!

    • 5+ years of experience coding in SQL and Python with solid CS fundamentals including data structure and algorithm design
    • 2+ years of experience with BigQuery and solid understanding of how it works under the hood
    • 3+ years contributing to production deployments of large backend data processing and analysis systems as a team lead
    • 3+ years of experience in cloud data platforms (GCP)
    • Knowledge of professional software engineering best practices for the full software development life cycle
    • Knowledge of Data Warehousing, design, implementation and optimization
    • Knowledge of Data Quality testing, automation and results visualization
    • Knowledge of BI reports and dashboards design and implementation (Tableau, Looker)
    • Knowledge of development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
    • Experience participating in an Agile software development team, e.g. SCRUM
    • Experience designing, documenting, and defending designs for key components in large distributed computing systems
    • A consistent track record of delivering exceptionally high-quality software on large, complex, cross-functional projects
    • Demonstrated ability to learn new technologies quickly and independently
    • Experience with supporting data scientists and complex statistical use cases highly desirable

    Desirable

    • Understanding of cloud infrastructure design and implementation
    • Experience in data science and machine learning
    • Experience in backend development and deployment
    • Experience in CI/CD configuration
    • Good knowledge of data analysis in enterprises
    • Experience with Kubernetes

    What's in it for you

    • Strong community: Work alongside top professionals in a friendly, open-door environment
    • Growth focus: Take on large-scale projects with a global impact and expand your expertise
    • Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
    • Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
    • Flexibility: Enjoy radical flexibility – work remotely or from an office, your choice

    About us:

    At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress.
    As one of Ukraine's largest IT companies and a top employer recognized by Forbes, we've spent over 20 years delivering meaningful tech solutions. We proudly support diverse talent and military veterans, recognizing the unique skills and perspectives they bring to shaping the future.

    Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

    Explore, empower, engineer with Ciklum!


    Senior Data Engineer (AI)

    Full Remote · Armenia, Bulgaria, Hungary, Romania, Uzbekistan · 5 years of experience

    📍 Location: Remote / Europe
    🌐 Client: Subsidiary of one of the "Big Four" accounting organizations
    🏢 Industry: Professional Services (Audit, Tax, Consulting, Risk & Advisory)


    About the Client

    You will join the 6th-largest privately owned organization in the United States, part of the Big Four and the largest professional services network in the world in terms of revenue and headcount. With more than 263,900 professionals globally, the company provides audit, tax, consulting, enterprise risk, and financial advisory services worldwide.


    About the Project

    As a Senior Data Engineer (AI), you will become part of a cross-functional development team building GenAI solutions for digital transformation across enterprise products.

    The team is responsible for the design, development, and deployment of innovative enterprise technologies, tools, and standardized processes to support the delivery of tax services. It is a dynamic environment bringing together professionals from tax, technology, change management, and project management backgrounds.

    The work involves consulting and execution across initiatives including:

    • process and tool development
    • implementation of AI-driven solutions
    • training development
    • engagement management
    • tool design & rollout


    Project Tech Stack

    • Cloud: Azure Cloud
    • Architecture: Microservices
    • Backend: .NET 8, ASP.NET Core, Python
    • Databases: MongoDB, Azure SQL, Vector DBs (Milvus, Postgres, etc.)
    • Frontend: Angular 18, Kendo
    • Collaboration: GitHub Enterprise with Copilot
    • Big Data: Hadoop, MapReduce, Kafka, Hive, Spark, SQL & NoSQL


    Requirements

    • 6+ years of hands-on experience in software development
    • Strong coding skills in SQL and Python, with solid CS fundamentals (data structures & algorithms)
    • Practical experience with Hadoop, MapReduce, Kafka, Hive, Spark, SQL & NoSQL warehouses
    • Experience with Azure cloud data platform
    • Hands-on experience with vector databases (Milvus, Postgres, etc.)
    • Knowledge of embedding models and retrieval-augmented generation (RAG)
    • Understanding of LLM pipelines, including data preprocessing for GenAI models
    • Experience deploying data pipelines for AI/ML workloads (scalability & efficiency)
    • Familiarity with model monitoring, feature stores (Feast, Vertex AI), and data versioning
    • Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines)
    • Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming)
    • Strong knowledge of Data Warehousing (design, implementation, optimization)
    • Knowledge of Data Quality testing, automation & visualization
    • Experience with BI tools (PowerBI dashboards & reporting)
    • Experience supporting data scientists and complex statistical use cases is highly desirable


    Responsibilities

    • Design, build, deploy, and maintain mission-critical analytics solutions processing terabytes of data at scale
    • Contribute to design, coding, configurations and manage data ingestion, real-time streaming, batch processing, and ETL across multiple storages
    • Optimize and tune performance of complex SQL queries and large-scale data flows
    • Ensure data reliability, scalability, and efficiency across AI/ML workloads

    GenAI Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · C1 - Advanced

    Who we are?
    We are building a next-generation AI-native sales automation platform for B2B teams. Our goal is to change the very paradigm of how people interact with business applications.

    Manual data entry becomes a thing of the past as the platform proactively connects to your communication and information channels. It seamlessly captures, structures, and transforms data into real-time, actionable awareness.

    You no longer work for the tool. The tool works for you, anticipating your needs, surfacing the right context at the right moment, and guiding your next steps with intelligence and precision.

    Our vision is to give teams an always-on AI-driven partner that lets them focus entirely on creating value and closing deals.
     

    Philosophy

    We value open-mindedness, rapid delivery, and impact. You're not just coding features: you shape architecture, UX, and product direction. Autonomy, accountability, and a startup builder's mindset are essential.
     

    Requirements

    • Strong backend: Python, FastAPI, Webhooks, Docker, Kubernetes, Git, CI/CD.
    • Hands-on with OpenAI-family LLMs, LangChain/LangGraph/LangSmith, prompt engineering, agentic RAG, vector stores (Azure AI Search, Pinecone, Neo4j, FAISS).
    • SQL, Pandas, Graph DBs (Neo4j), NetworkX, advanced ETL/data cleaning, Kafka/Azure EventHub.
    • Proven experience building and operating retrieval-augmented generation (RAG) pipelines (see the sketch after this list).
    • Familiarity with graph algorithms (community detection, similarity, centrality).
    • Good English (documentation, API, teamwork).
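
    A bare-bones sketch of the retrieval half of such a pipeline, with an in-memory store and cosine similarity standing in for a managed vector database, and embed() standing in for whichever embedding model the product actually uses (all names here are illustrative):

        import numpy as np

        def embed(text: str) -> np.ndarray:
            # Stand-in for a real embedding model (e.g. an Azure OpenAI or OpenAI
            # embeddings endpoint); returns a deterministic dummy vector per text.
            rng = np.random.default_rng(len(text.encode("utf-8")))
            return rng.random(384)

        # "Index" a few documents: keep each embedding alongside the raw text.
        docs = [
            "Meeting notes: the client asked for a revised quote by Friday.",
            "Email: invoice #4021 was paid on 2024-03-01.",
        ]
        index = [(doc, embed(doc)) for doc in docs]

        def retrieve(query: str, k: int = 1) -> list[str]:
            q = embed(query)
            scored = [
                (float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), doc)
                for doc, v in index
            ]
            # Highest cosine similarity first; the retrieved snippets would then be
            # passed to the LLM as grounding context for its answer.
            return [doc for _, doc in sorted(scored, reverse=True)[:k]]

        print(retrieve("When is the quote due?"))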
       

    Nice to Have

    • Generative UI (React).
    • Multi-agent LLM frameworks.
    • Big Data pipelines in cloud (Azure preferred).
    • Production-grade ML, NLP engineering, graph ML.
       

    Responsibilities

    • Design, deploy, and maintain GenAI/RAG pipelines for the product
    • Integrate LLM/agentic assistants into user business flows.
    • Source, ingest, cleanse, and enrich external data streams.
    • Build vector search, embedding stores, and manage knowledge graphs.
    • Explore and implement new ML/GenAI frameworks.
    • Mentor developers and encourage team knowledge-sharing.
       

    What else is important:

    • Startup drive, proactivity, independence.
    • Willingness to relocate/freedom to travel in Europe; full time.
    • Eagerness to integrate latest AI frameworks into real-world production.
       

    Our Team

    Agile, tight-knit product group (5-6 experts) with deep experience in SaaS, AI, graph data, and cloud delivery. We move fast, give each member autonomy, and engineer for impact, not just features.
     

    Who takes a final decision:

    The team makes the decision based on a technical interview.
     

    Our benefits

    • Startup culture: minimal bureaucracy, maximum flexibility
    • Remote-first: work from anywhere
    • Unlimited vacation — we value results, not hours spent
    • Opportunity to grow together with an AI-first product company
    • Direct impact on a breakthrough AI-native product
       

    Recruitment process

    1. HR interview (VP Team) — Technical prescreen (Q&A)
    2. Technical interview with CTO/Data Officer (real-life case)
    3. Offer

    Senior Data Engineer for Data Streaming Platform

    Full Remote · EU · Product · 5 years of experience · B2 - Upper Intermediate Ukrainian Product 🇺🇦

    GR8 Tech is a leading B2B provider of iGaming solutions that empowers operators to grow, lead, and win.
     

    We deliver high-impact, full-cycle tech solutions designed to scale. From seamless integration and expert consulting to long-term operational support, our platform powers millions of active players and drives real business growth. It's more than just a product — it's the iGaming Platform for Champions, built for those who play to lead.

    We know the game and how to take it to the next level. With 1000+ talented professionals on board, we don't just build tech — we build success stories for iGaming operators all over the world.
     

    Our ambition drives us, our people make it real. Join us and be part of building champion-level success!
     

    As a Senior Developer in the BI&AP Streaming Team, you will play a critical role in designing, developing, and maintaining our real-time data streaming platform. You will be responsible for implementing mission-critical use cases, from real-time analytics to personalized user experiences, by building scalable, low-latency data pipelines using Kafka Streams, Flink, and the JVM ecosystem.
     

    What You'll Be Driving:

    • Designing and implementing real-time streaming pipelines using Kafka, Flink, and Kafka Streams;
    • Leading architecture and design decisions for the event-driven data platform;
    • Mentoring junior engineers and fostering best practices in stream processing;
    • Collaborating with product & project managers to translate business requirements into robust data systems;
    • Ensuring data quality, observability, and reliability in streaming data;
    • Championing performance, fault tolerance, and exactly-once semantics in streaming jobs;
    • Evaluating and adopting new technologies to drive innovation;
    • Enforcing quality assurance standards and ensuring thorough testing before release;
    • Maintaining comprehensive documentation for system architecture and technical designs.
       

    What Makes You a GR8 Fit:

    • At least 5 years of experience in a similar role with a strong track record of leadership and successful project delivery;
    • Proven expertise with Apache Kafka and hands-on experience with Flink, Kafka Streams, or similar real-time systems;
    • Strong programming skills in Java;
    • In-depth understanding of distributed systems, data consistency models, and streaming semantics;
    • Experience with CI/CD, containerization, and infrastructure-as-code tools is a plus;
    • Effective communication with stakeholders, including presenting technical concepts and project updates.
       

    Why You'll Love Working Here:

    Benefits Cafeteria

    An annual fixed budget that you can use based on your needs and lifestyle. You decide how to allocate it:

    • Sports – gym, yoga, or any activity to keep you active;
    • Medical – insurance and wellness services;
    • Mental health – therapy or coaching support;
    • Home office – ergonomic furniture, gadgets, and tools;
    • Languages – courses to improve or learn new skills.
       

    Work-life 

    • Parental support with paid maternity/paternity leave and monthly childcare allowance;
    • 20+ vacation days, unlimited sick leave, and emergency time off;
    • Remote-first setup with full tech support and coworking compensation;
    • Regular team events – online, offline, and offsite;
    • Learning culture with internal courses, career development programs, and real growth opportunities.
       

    Our Culture & Core Values

    GR8 Tech culture is how we win. Behind every bold idea and breakthrough is a foundation of trust, ownership, and a growth mindset. We move fast, stay curious, and always keep it real, with open feedback, room to experiment, and a team that's got your back.

    • FUELLED BY TRUST: we're open, honest, and have each other's backs;
    • OWN YOUR GAME: we take initiative and own what we do;
    • ACCELER8: we move fast, focus smart, and keep it simple;
    • CHALLENGE ACCEPTED: we grow through challenges and stay curious;
    • BULLETPROOF: we're resilient, ready, and always have a plan.
       

    To keep things efficient, please apply only for roles that closely match your experience.


    Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 by uniting top AI talent and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.

    You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.
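
    As a small, hedged sketch of one such pipeline step (exact deduplication and basic normalization only; a real corpus pipeline would add language detection, near-duplicate detection, and PII and toxicity filtering):

        import hashlib
        import re
        import unicodedata

        def normalize(text: str) -> str:
            # Normalize unicode, strip leftover HTML tags, collapse whitespace.
            text = unicodedata.normalize("NFC", text)
            text = re.sub(r"<[^>]+>", " ", text)
            return re.sub(r"\s+", " ", text).strip()

        def deduplicate(records: list[str]) -> list[str]:
            # Exact dedup on a hash of the normalized text; near-duplicate detection
            # (e.g. MinHash) would be layered on top for a real corpus.
            seen, out = set(), []
            for raw in records:
                text = normalize(raw)
                key = hashlib.sha1(text.encode("utf-8")).hexdigest()
                if text and key not in seen:
                    seen.add(key)
                    out.append(text)
            return out

        corpus = ["<p>Привіт,  світе!</p>", "Привіт, світе!", "Інший документ."]
        print(deduplicate(corpus))  # -> ['Привіт, світе!', 'Інший документ.']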

    Requirements:
    - Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    - NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project's focus. An understanding of FineWeb2 or a similar processing pipeline approach is also valuable.
    - Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    - Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    - Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
    - Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    - Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    - Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

    Nice to have:
    - Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    - Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
    - CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    - Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    - Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve the workflows.

    Responsibilities:
    - Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
    - Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    - Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
    - Implementation of NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering of toxic content, de-duplication, de-noising, and detection and deletion of personal data.
    - Formation of specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as a teacher.
    - Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    - Automate data processing workflows and ensure their scalability and reliability.
    - Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    - Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
    - Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    - Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
    - Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    - Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    - Manage data security, access, and compliance.
    - Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.


    Middle/Senior Implementation Engineer (Data Engineer)

    Full Remote · Czechia, Spain, Greece, Portugal, Slovakia · 3 years of experience · B2 - Upper Intermediate

    We're looking for a technically strong and highly motivated Middle/Senior Implementation Engineer (Data Engineer) who will become a "super user" of the platform and help deliver end-to-end, data-driven solutions for enterprise clients.
    This is a hybrid role combining business analysis, data transformation, system integration, and configuration. You'll work across the full solution lifecycle — from requirements gathering to platform setup, data modeling, and validation — helping clients unlock the full value of the platform through advanced configuration and light custom development.
    You'll work closely with both internal teams and client stakeholders, leveraging your communication, analytical, and problem-solving skills to deliver impactful outcomes.

     

    Responsibilities:

    As an Implementation Engineer, you will be configuring, extending, deploying, testing and validating complete solutions end to end for our customers:

    • Data analysis, exploration, testing and validation, interacting with the client to understand data structures and use cases
    • Configure connectors (e.g. Shopify, Akeneo, Bloomreach, Optimove, etc.)
    • Set up clients' Platform workflows/data transformations
    • Functions and transforms - configure and write new plugins
    • Canonical data models (XDM) and mapping - configure, extend and map the data using the Platform portal, Jsonata or code plugins
    • Set up Platform data hubs (data mapping, domain-specific component)
    • Platform data quality dashboards
    • Data pipelines and warehouse/lake tables, views using Databricks and other tooling and the medallion architecture
    • Data lineage, ML models
    • Configure, extend or create Platform BI dashboards with PowerBI on top of data layers
    • Testing and validation
    • Work with our clients to analyse data-driven business processes and to understand and check the data
    • Produce documentation and training guides on the packs and how to use them
    • Advise on the best approach to leverage the Platform and achieve end results

     

    Requirements:

    • Leadership - Independent. Driven. Get things done
    • Communication - Very strong written and verbal. Comfortable in front of a client
    • Business Analysis - Capture requirements, understand use cases and translate into solutions
    • Domain knowledge - Retail preferred. Must know at least one complex domain
    • Data modelling, transformation - Strong data modelling knowledge and the ability to design complex models
    • Integration and data warehousing - Have used a platform and configured it to create complete solutions for customers
    • Programming and software engineering - Python or equivalent at a competent level to write functions and plugins, not expert
    • BI and dashboarding - PowerBI or similar required, or the ability to pick it up rapidly
    • Databases and SQL are a must-have
    • Technical understanding - Good technical understanding of modern architectures and leading data platforms, if possible Databricks and Spark (as a user, not an expert)
    • Cloud - Should know your way around at least one cloud
    • Previous experience working for a system integrator, a consultancy or professional services organization to build solutions for customers would be extremely beneficial

    Data engineer

    Ukraine · 6 years of experience · B2 - Upper Intermediate

    We are seeking a skilled and motivated Data Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure that support analytics, machine learning, and business intelligence initiatives. You'll collaborate closely with the tech team, product design team, and COO on business goals and priorities.

    Data Engineering plays a vital role in ensuring the success of our marketplace platform. Connecting companies and GIG workers is at the heart of the business model. Both sides of the marketplace rely on Search, Matching, and Suggestion capabilities — all of which are driven by data. The more effective these systems are, the stronger our business becomes.

    Requirements:
     

    ● bachelor's degree in Computer Science, Engineering, Information Systems, or related field
    ● proficiency in SQL
    ● experience with Elasticsearch or similar technologies
    ● experience with modern data platforms (e.g., Snowflake, BigQuery, Redshift)
    ● familiarity with data pipeline orchestration tools (e.g., Airflow, dbt, Luigi)
    ● knowledge of cloud platforms (AWS, GCP) and related data services
    ● strong understanding of data modeling, normalization, and warehousing concepts

    Would be a plus:

    ● exposure to DevOps practices and CI/CD pipelines
    ● familiarity with data governance, lineage, and cataloging tools
     

    Responsibilities:

    ● design, develop, and enhance sophisticated matching and recommendation systems (see the sketch after this list)
    ● build and maintain scalable ETL/ELT data pipelines
    ● work with structured and unstructured data from internal and external sources
    ● implement data quality checks and monitoring systems to ensure data integrity
    ● build and manage data warehouses, data lakes, or other storage systems
    ● collaborate with cross-functional teams to understand data requirements
    ● optimize database performance and ensure data availability and security
    ● automate data processes and develop reusable frameworks and tools
    ● maintain documentation related to data architecture and pipelines
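
    For illustration, a minimal matching query of this kind could look as follows, assuming the elasticsearch Python client (8.x-style keyword arguments); the index, fields, and endpoint are hypothetical:

        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

        # Match GIG workers to a job post: require the right city, rank by skill match,
        # and break ties by most recently active profiles. Field names are made up.
        response = es.search(
            index="workers",
            query={
                "bool": {
                    "must": [{"match": {"skills": "warehouse forklift"}}],
                    "filter": [{"term": {"city": "kyiv"}}],
                }
            },
            sort=["_score", {"last_active": {"order": "desc"}}],
            size=10,
        )

        for hit in response["hits"]["hits"]:
            print(hit["_score"], hit["_source"]["worker_id"])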
