Jobs (115)

  • 12 views · 0 applications · 6d

    Senior/Tech Lead Data Engineer

    Hybrid Remote · Poland, Ukraine (Kyiv, Lviv) · 5 years of experience · Upper-Intermediate

    Quantum is a global technology partner delivering high-end software products that address real-world problems. 

    We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.

    Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.

    We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn’t yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!

     

    About the position

    Quantum is expanding the team and has brilliant opportunities for a Data Engineer. As a Senior/Tech Lead Data Engineer, you will be pivotal in designing, implementing, and optimizing data platforms. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark with Python, as well as various cloud-based solutions.

    The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become “super-investors.”

    Through our generative AI technology implemented into brokerage platforms and other financial institutions’ infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide. 

     

    Must have skills:

    • Bachelor's Degree in Computer Science or related field
    • At least 5 years of experience in Data Engineering
    • Proven experience as a Tech Lead or Architect in data-focused projects, leadership skills, and experience managing or mentoring data engineering teams
    • Strong proficiency in Python and PySpark for building ETL pipelines and large-scale data processing
    • Deep understanding of Apache Spark, including performance tuning and optimization (job execution plans, broadcast joins, partitioning, skew handling, lazy evaluation)
    • Hands-on experience with AWS Cloud (minimum 2 years), including EMR and Glue
    • Familiarity with PySpark internals and concepts (Window functions, Broadcast joins, Sort & merge joins, Watermarking, UDFs, Lazy computation, Partition skew)
    • Practical experience with performance optimization of Spark jobs (MUST); a brief illustrative sketch follows this list
    • Strong understanding of OOD principles and familiarity with SOLID (MUST)
    • Experience with cloud-native data platforms and lakehouse architectures
    • Comfortable with SQL & NoSQL databases
    • Experience with testing practices such as TDD, unit testing, and integration testing
    • Strong problem-solving skills and a collaborative mindset
    • Upper-Intermediate or higher level of English (spoken and written)
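
    A minimal PySpark sketch of two of the tuning techniques named above, a broadcast join and key salting for partition skew; the table paths and column names are illustrative only and are not taken from this posting.

        # Illustrative sketch: broadcast join for a small dimension table and
        # key salting to spread a skewed join key across partitions.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

        events = spark.read.parquet("s3://bucket/events/")        # large fact table
        countries = spark.read.parquet("s3://bucket/countries/")  # small dimension

        # 1) Broadcast the small side so Spark skips shuffling it.
        enriched = events.join(F.broadcast(countries), on="country_code", how="left")

        # 2) Salt a skewed key: spread hot user_ids over N buckets before joining.
        N = 16
        salted_events = events.withColumn("salt", (F.rand() * N).cast("int"))
        profiles = spark.read.parquet("s3://bucket/profiles/")
        salted_profiles = profiles.crossJoin(
            spark.range(N).withColumnRenamed("id", "salt"))
        joined = salted_events.join(salted_profiles, on=["user_id", "salt"])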

     

    Your tasks will include:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency
    • Provide expertise in data modeling and database design to ensure the scalability and reliability of data platforms
    • Contribute to the development of best practices and standards for data engineering processes
    • Stay updated on emerging technologies and trends in the data engineering landscape

     

    We offer:

    • Delivering high-end software projects that address real-world problems
    • Surrounding experts who are ready to move forward professionally
    • Professional growth plan and team leader support
    • Taking ownership of R&D and socially significant projects
    • Participation in worldwide tech conferences and competitions
    • Taking part in regular educational activities
    • Being a part of a multicultural company with a fun and lighthearted atmosphere
    • Working from anywhere with flexible working hours
    • Paid vacation and sick leave days

       

    Join Quantum and take a step toward your data-driven future.

  • 27 views · 1 application · 3d

    Senior/Lead Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    Job Description

    WHAT WE VALUE

    Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important.

    We expect you to be good at and have had hands-on experience with the following:

    • Expert in T-SQL
    • Proficiency in Python
    • Experience with Microsoft cloud data services, including but not limited to Azure SQL and Azure Data Factory
    • Experience with Snowflake, star schemas, and data modeling; experience with migrations to Snowflake will be an advantage
    • Experience with, or a strong interest in, DBT (data build tool) for transformations, testing, validation, data quality, etc.
    • English - Upper Intermediate

    On top of that, it would be an advantage to have knowledge of or interest in the following:

    • Some proficiency in C# .NET
    • Security-first mindset, with knowledge of how to implement row-level security and similar controls (a hedged sketch follows this list)
    • Agile development methodologies and DevOps / DataOps practices such as continuous integration, continuous delivery, and continuous deployment. For example, automated DB validations and deployment of DB schema using DACPAC.
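
    Row-level security, mentioned above, is typically implemented in Azure SQL / SQL Server with a filter-predicate function plus a security policy. The sketch below drives that DDL from Python via pyodbc; the connection string, schema, and table names are hypothetical and not taken from this posting.

        # Hypothetical sketch: per-tenant row-level security in Azure SQL.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 18 for SQL Server};"
            "SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<password>")
        cur = conn.cursor()

        # Inline table-valued function that only returns a row for the caller's tenant.
        cur.execute("""
            CREATE FUNCTION dbo.fn_tenant_filter(@TenantId int)
            RETURNS TABLE WITH SCHEMABINDING
            AS RETURN
                SELECT 1 AS allowed
                WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
        """)
        # Security policy that applies the predicate to a (hypothetical) table.
        cur.execute("""
            CREATE SECURITY POLICY dbo.TenantPolicy
            ADD FILTER PREDICATE dbo.fn_tenant_filter(TenantId) ON dbo.Positions
            WITH (STATE = ON);
        """)
        conn.commit()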

    As a person, you have the following traits:

    • Strong collaborator with teammates and stakeholders
    • Clear communicator who speaks up when needed.

    Job Responsibilities

    WHAT YOU WILL BE RESPONSIBLE FOR

    Ensure quality in our data solutions so that we maintain good data quality across multiple customer tenants every time we release.

    Work together with the Product Architect on defining and refining the data architecture and roadmap.

    Facilitate the migration of our current data platform towards a more modern tool stack that can be more easily maintained by both data engineers and software engineers.

    Ensure that new data entities get implemented in the data model using schemas that are appropriate for their use, facilitating good performance and analytics needs.

    Guide and support people in other roles (engineers, testers, etc.) to spread data knowledge and experience more broadly across the team

    Department/Project Description

    WHO WE ARE

    For over 50 years, we have worked closely with investment and asset managers to become the world’s leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds in general.

    SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients.

    SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.

     

    WHY THIS ROLE IS IMPORTANT TO US

    You will be joining an innovative application development team within SimCorp's Product Division. As a primary provider of SaaS offerings based on next-generation technologies, our Digital Engagement Platform is a cloud-native data application developed on Azure, utilizing SRE methodologies and continuous delivery. Your contribution to evolving DEP’s data platform will be vital in ensuring we can scale to future customer needs and support future analytics requirements. Our future growth as a SaaS product is rooted in a cloud-native strategy that emphasizes adopting a modern data platform tool stack and the application of modern engineering principles as essential components.

    We are looking into a technology shift from Azure SQL to Snowflake in order to meet new client demands for scalability. You will be an important addition to the team for achieving this goal.

  • 18 views · 3 applications · 1d

    Senior Data Engineer (FinTech Project)

    Full Remote · EU · 4.5 years of experience · Upper-Intermediate

    Company Description

    We are looking for a Senior Data Engineer to join our Data Center of Excellence, part of Sigma Software’s complex organizational structure, which combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment. 

    CUSTOMER

    Our client is one of Europe’s fastest-growing FinTech innovators, revolutionizing how businesses manage their financial operations. They offer an all-in-one platform that covers everything from virtual cards and account management to wire transfers and spend tracking. As a licensed payment institution, the client seamlessly integrates core financial services into their product, enabling companies to streamline their financial workflows with speed and security. 

    PROJECT

    You will join a dynamic team driving the evolution of a high-performance data platform that supports real-time financial operations and analytics. The project focuses on building scalable data infrastructure that will guarantee accuracy, reliability, and compliance across multiple financial products and services. 

    Job Description

    • Collaborate with stakeholders to identify business requirements and translate them into technical specifications 
    • Design, build, monitor, and maintain data pipelines in production, including complex pipelines (Airflow, Python, event-driven systems); a brief DAG sketch follows this list
    • Develop and maintain ETL processes for ingesting and transforming data from various sources 
    • Monitor and troubleshoot infrastructure issues (e.g., Kubernetes, Terraform), including data quality, ETL processes, and cost optimization
    • Collaborate closely with analytics engineers on CI and infrastructure management 
    • Drive the establishment and maintenance of the highest coding standards and practices, ensuring the development of efficient, scalable, and reliable data pipelines and systems 
    • Participate in data governance initiatives to ensure data accuracy and integrity 
    • Actively participate in the data team's routines and enhancement plans 
    • Stay up to date with the latest developments in data technology and provide recommendations for improving our analytics capabilities 
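
    A small, hedged sketch of the kind of Airflow pipeline referenced in the list above: a DAG that extracts, transforms, and then checks data quality. The task names and schedule are placeholders and assume a recent Airflow 2.x install.

        # Illustrative Airflow DAG; task bodies are placeholders.
        from datetime import datetime
        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def extract():
            print("pull data from a source API or bucket")

        def transform():
            print("clean the data and load it into the warehouse")

        def quality_check():
            print("fail the run if row counts or null rates look wrong")

        with DAG(
            dag_id="payments_ingestion",
            start_date=datetime(2024, 1, 1),
            schedule="@hourly",
            catchup=False,
        ) as dag:
            extract_task = PythonOperator(task_id="extract", python_callable=extract)
            transform_task = PythonOperator(task_id="transform", python_callable=transform)
            check_task = PythonOperator(task_id="quality_check", python_callable=quality_check)
            extract_task >> transform_task >> check_task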

    Qualifications

    • At least 5 years of experience in data engineering or software engineering with a strong focus on data infrastructure 
    • Hands-on experience in AWS (or equivalent cloud platforms like GCP) and data analytics services 
    • Strong proficiency in Python and SQL  
    • Good understanding of database design, optimization, and maintenance (using DBT)
    • Strong experience with data modeling, ETL processes, and data warehousing 
    • Familiarity with Terraform and Kubernetes 
    • Expertise in developing and managing large-scale data flows efficiently 
    • Experience with job orchestrators or scheduling tools like Airflow 
    • At least an Upper-Intermediate level of English 

    Would be a plus: 

    • Experience managing RBAC on a data warehouse
    • Experience maintaining security on a data warehouse (IP whitelists, masking, sharing data between accounts/clusters, etc.)

     

  • 36 views · 1 application · 12 May

    Senior Data Engineer

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark with Python, as well as various cloud-based solutions.

     

    Key Responsibilities:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources.
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
    • Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
    • Contribute to the development of best practices and standards for data engineering processes.
    • Stay updated on emerging technologies and trends in the data engineering landscape.

     

    Required Skills and Qualifications:

    • Bachelor's Degree in Computer Science or related field.
    • Minimum of 5 years of experience in tech lead data engineering or architecture roles.
    • Proficiency in Python and PySpark for ETL development and data processing.
    • At least 2 years of hands-on experience with AWS cloud.
    • Extensive experience with cloud-based data platforms, particularly EMR.
    • Strong knowledge of Spark (must have).
    • Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
    • Leadership experience, with a proven track record of leading data engineering teams.

     

    Benefits

     

    • 20 days of paid vacation and 5 sick leave days
    • National holidays observed
    • Company-provided laptop

     

     

  • 62 views · 2 applications · 12 May

    Middle Data Support Engineer (Python, SQL)

    Ukraine · 3 years of experience · Upper-Intermediate

    N-iX is looking for a Middle Data Support Engineer to join our team. Our customer is the leading school transportation solutions provider in North America. Every day, the company completes 5 million student journeys, moving more passengers than all U.S. airlines combined, and delivers reliable, quality services, including full-service transportation and management, special-needs transportation, route optimization and scheduling, maintenance, and charter services for 1,100 school district contracts.

     

    Responsibilities:

    • Provide support in production and non-production environments (Azure cloud)
    • Install, configure and provide day-to-day support after implementation, including off hours as needed;
    • Troubleshoot defects and errors and resolve arising problems;
    • Plan, test, and implement server upgrades, maintenance fixes, and vendor-supplied patches;
    • Help in resolving incidents;
    • Monitor ETL jobs;
    • Perform small enhancements (Azure/SQL). 

       

    Requirements:

    • Proven knowledge of and 3+ years of experience with Python
    • Proficiency in RDBMS systems (MS SQL experience is a plus);
    • Experience with Azure cloud services;
    • Understanding of Azure Data Lake / Storage Accounts;
    • Experience in creation and managing data pipelines in Azure Data Factory;
    • Upper Intermediate/Advanced English level.

       

    Nice to have:

    • Experience with administration of Windows Server 2012 or higher;
    • Experience with AWS, Snowflake, Power BI;
    • Experience with technical support;
    • Experience in .Net.

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 44 views · 5 applications · 12 May

    Data engineer (relocation to Berlin)

    Office Work · Germany · 5 years of experience · Upper-Intermediate

    At TechBiz Global, we provide recruitment services to our TOP clients from our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

     

    About the Data Solution Team

    As a Data Engineer, you will join our Data Solution Team, which drives our data-driven innovation. The team is pivotal to powering our business processes and enhancing customer experiences through effective data utilization. Our focus areas include:

    ● Developing integrations between systems.

    ● Analyzing customer data to derive actionable insights.

    ● Improving customer experience by leveraging statistical and machine learning models.

    Our tech stack includes:

    ● Cloud & Infrastructure: AWS (S3, EKS, Quicksight, and monitoring tools).

    ● Data Engineering & Analytics: Apache Spark (Scala and PySpark on Databricks), Apache Kafka (Confluent Cloud).

    ● Infrastructure as Code: Terraform.

    ● Development & Collaboration: BitBucket, Jira.

    ● Integration Tools & APIs: Segment.io, Blueshift, Zendesk, Google Maps API, and other external systems

     

    Job requirements

    As A Data Engineer, You Will:

    ● Design, build, and own near-time and batch data processing workflows (a brief streaming sketch follows this list).

    ● Develop efficient, low-latency data pipelines and systems.

    ● Maintain high data quality while ensuring GDPR compliance.

    ● Analyze customer data and extract insights to drive business decisions.

    ● Collaborate with Product, Backend, Marketing, and other teams to deliver impactful features.

    ● Help data scientists deliver ML/AI solutions.
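
    A hedged sketch of the near-time workflow mentioned in the first item above: reading events from Kafka with Spark Structured Streaming and writing them out in micro-batches. The topic, schema, and paths are illustrative and assume the Kafka connector package is available on the cluster.

        # Illustrative structured-streaming job: Kafka -> parsed JSON -> Parquet.
        from pyspark.sql import SparkSession, functions as F
        from pyspark.sql.types import StructType, StringType, TimestampType

        spark = SparkSession.builder.appName("events-stream").getOrCreate()

        schema = (StructType()
                  .add("user_id", StringType())
                  .add("event_type", StringType())
                  .add("occurred_at", TimestampType()))

        raw = (spark.readStream.format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "customer-events")
               .load())

        events = (raw.selectExpr("CAST(value AS STRING) AS json")
                  .select(F.from_json("json", schema).alias("e"))
                  .select("e.*"))

        query = (events.writeStream
                 .format("parquet")
                 .option("path", "s3://bucket/events/")
                 .option("checkpointLocation", "s3://bucket/checkpoints/events/")
                 .trigger(processingTime="1 minute")
                 .start())
        query.awaitTermination()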

     

    Requirements:

    ● 5+ years of experience as a Data Engineer, with expertise in Apache Spark using Python and Scala.

    ● 3+ years of experience with Apache Kafka.

    ● Management experience or Tech Lead experience

    ● Strong proficiency in SQL.

    ● Experience with CI/CD processes and platforms.

    ● Hands-on experience with cloud technologies such as AWS, GCP or Azure.

    ● Familiarity with Terraform.

    ● Comfortable working in an agile environment.

    ● Excellent problem-solving and self-learning skills, with the ability to operate both independently and as part of a team.

     

    Nice to have:

    ● Hands-on experience with Databricks.

    ● Experience with document databases, particularly Amazon DocumentDB.

    ● Familiarity with handling high-risk data.

    ● Exposure to BI tools such as AWS Quicksight or Redash.

    ● Work experience in a Software B2C company, especially in the FinTech industry.

     

    What we offer:

    Our goal is to set up a great working environment. Become part of the process and:

    ● Shape the future of our organization as part of the international founding team.

    ● Take on responsibility from day one.

    ● Benefit from various coaching and training opportunities, including a Sports Subscription, German classes, and a €1000 yearly self-development budget.

    ● Work in a hybrid working model from the comfortable Berlin office

    ● Enjoy a modern workplace in the heart of Berlin with drinks, fresh fruit, kicker and ping pong

  • 44 views · 1 application · 30d

    Data Engineer

    Hybrid Remote · Slovakia · 4 years of experience · Upper-Intermediate

    Now is an amazing time to join our company as we continue to empower innovators to change the world. We provide top-tier technology consulting, R&D, design, and software development services across the USA, UK, and EU markets. And this is where you come in!

    We are looking for a skilled Data Engineer to join our team.

    About the Project

    We’re launching a Snowflake Proof of Concept (PoC) for a leading football organization in Germany. The project aims to demonstrate how structured and well-managed data can support strategic decision-making in the sports domain.

    Key Responsibilities

    • Define data scope and identify data sources
    • Design and build the data architecture
    • Implement ETL pipelines into a data lake (a brief loading sketch follows this list)
    • Ensure data quality and consistency
    • Collaborate with stakeholders to define analytics needs
    • Deliver data visualizations using Power BI
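
    A hedged sketch of the loading step referenced above: copying staged files into a Snowflake table that Power BI can then query. The account, stage, and table names are placeholders, not details of the actual PoC.

        # Illustrative load into Snowflake using the Python connector.
        import snowflake.connector

        conn = snowflake.connector.connect(
            account="<account_identifier>",
            user="<user>",
            password="<password>",
            warehouse="POC_WH",
            database="FOOTBALL_POC",
            schema="RAW",
        )
        cur = conn.cursor()
        cur.execute("""
            COPY INTO RAW.MATCH_EVENTS
            FROM @RAW.LANDING_STAGE/match_events/
            FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        """)
        cur.execute("SELECT COUNT(*) FROM RAW.MATCH_EVENTS")
        print("rows loaded so far:", cur.fetchone()[0])
        conn.close()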

    Required Skills

    • Strong experience with Snowflake, ETL pipelines, and data lakes
    • Power BI proficiency
    • Knowledge of data architecture and modeling
    • Data quality assurance expertise
    • Solid communication in English (B2+)

    Nice to Have

    • Familiarity with GDPR
    • Experience in sports or media-related data projects
    • Experience with short-term PoCs and agile delivery

    What We Offer

    • Contract for the PoC phase with potential long-term involvement
    • All cloud resources and licenses provided by the client
    • Hybrid/onsite work in Bratislava
    • Opportunity to join a meaningful data-driven sports project with European visibility

    📬 Interested? Send us your CV and hourly rate (EUR).

    We’re prioritizing candidates based in Bratislava or elsewhere in Europe.

    Interview Process:

    1️⃣ internal technical interview
    2️⃣ interview with the client

  • 66 views · 6 applications · 29d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate Ukrainian Product 🇺🇦

    We are Boosta, an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.

    We’re looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you’ll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.

    Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You’ll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.

     

    WHAT YOU’LL DO

    • Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
    • Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
    • Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place.
    • Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection, using tools like Great Expectations or similar.
    • Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau, or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic (a minimal freshness/null check is sketched after this list).
    • Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
    • Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
    • Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
    • Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
    • Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
    • Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
    • Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
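
    A minimal sketch of the kind of freshness/null-rate check referenced above, using the BigQuery client directly rather than a full validation framework; the dataset, table, and thresholds are illustrative only.

        # Illustrative data-quality probe that could feed a consistency dashboard.
        from google.cloud import bigquery

        client = bigquery.Client()

        sql = """
        SELECT
          COUNT(*)                            AS row_count,
          COUNTIF(partner_id IS NULL)         AS null_partner_ids,
          TIMESTAMP_DIFF(CURRENT_TIMESTAMP(),
                         MAX(ingested_at), HOUR) AS hours_since_last_load
        FROM `project.affiliates.postbacks`
        WHERE DATE(ingested_at) >= DATE_SUB(CURRENT_DATE(), INTERVAL 2 DAY)
        """
        row = list(client.query(sql).result())[0]

        if row.hours_since_last_load > 6 or row.null_partner_ids > 0:
            # In a real pipeline this would raise an alert instead of printing.
            print("data quality alert:", dict(row.items()))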

       

    WHAT WE EXPECT FROM YOU

    • Strong proficiency in SQL and Python.
    • Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
    • Proven ability to design, deploy, and scale ETL/ELT pipelines.
    • Hands-on experience integrating and automating data from various platforms.
    • Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
    • Skilled in orchestration tools like Airflow, Prefect, or Dagster.
    • Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
    • Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
    • Experience with Docker to build isolated and reproducible environments for data workflows.
    • Exposure to iGaming data structures and KPIs is a strong advantage.
    • Strong sense of data ownership, documentation, and operational excellence.

       

    HOW IT WORKS

    • Stage 1: pre-screen with a recruiter.
    • Stage 2: test task.
    • Stage 3: interview.
    • Stage 4: bar-raising.
    • Stage 5: reference check.
    • Stage 6: job offer!

    The trial period for this position is 3 months, during which we will get used to working together.

     

    WHAT WE OFFER

    • 28 business days of paid time off
    • Flexible hours and the possibility to work remotely
    • Medical insurance and mental health care
    • Compensation for courses and trainings
    • English classes and speaking clubs
    • Internal library, educational events
    • Outstanding corporate parties and team buildings

     

  • 56 views · 7 applications · 28d

    Consultant Data Engineer (Python/Databricks)

    Part-time · Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Softermii is looking for a part-time Data Engineering Consultant / Tech Lead who will do technical interviews, assist with upcoming projects, and occasionally be hands-on with complex development tasks, including data pipeline design and solution optimization on Databricks.

     


    Type of cooperation: Part-time

     

    ⚡️Your responsibilities on the project will be:

    • Interview and hire Data Engineers
    • Supervise the work of other engineers and take a hands-on role in the most complicated backlog tasks, focusing on unblocking other data engineers when technical difficulties arise
    • Develop and maintain scalable data pipelines using Databricks (Apache Spark) for batch and streaming use cases.
    • Work with data scientists and analysts to provide reliable, performant, and well-modeled data sets for analytics and machine learning.
    • Optimize and manage data workflows using Databricks Workflows and orchestrate jobs for complex data transformation tasks.
    • Design and implement data ingestion frameworks to bring data from various sources (files, APIs, databases) into Delta Lake (a brief ingestion sketch follows this list).
    • Ensure data quality, lineage, and governance using tools such as Unity Catalog, Delta Live Tables, and built-in monitoring features.
    • Collaborate with cross-functional teams to understand data needs and support production-grade machine learning workflows.
    • Apply data engineering best practices: versioning, testing (e.g., with pytest or dbx), documentation, and CI/CD pipelines
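
    A hedged sketch of a small batch ingestion into Delta Lake, in the spirit of the responsibilities above. It assumes a Databricks runtime where `spark` and the Delta libraries are already available; the paths and merge key are illustrative.

        # Illustrative upsert of landed JSON files into a Delta table.
        from delta.tables import DeltaTable
        from pyspark.sql import functions as F

        incoming = (spark.read.format("json")
                    .load("s3://bucket/landing/orders/")
                    .withColumn("ingested_at", F.current_timestamp()))

        target_path = "s3://bucket/lakehouse/orders"

        if DeltaTable.isDeltaTable(spark, target_path):
            (DeltaTable.forPath(spark, target_path).alias("t")
             .merge(incoming.alias("s"), "t.order_id = s.order_id")
             .whenMatchedUpdateAll()
             .whenNotMatchedInsertAll()
             .execute())
        else:
            incoming.write.format("delta").save(target_path)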



     

    🕹Tools we use: Jira, Confluence, Git, Figma

     

    🗞Our requirements for you:

    • 5+ years of experience in data engineering or big data development, with production-level work.
    • Architect and develop scalable data solutions on the Databricks platform, leveraging Apache Spark, Delta Lake, and the lakehouse architecture to support advanced analytics and machine learning initiatives.
    • Design, build, and maintain production-grade data pipelines using Python (or Scala) and SQL, ensuring efficient data ingestion, transformation, and delivery across distributed systems.
    • Lead the implementation of Databricks features such as Delta Live Tables, Unity Catalog, and Workflows to ensure secure, reliable, and automated data operations.
    • Optimize Spark performance and resource utilization, applying best practices in distributed computing, caching, and tuning for large-scale data processing.
    • Integrate data from cloud-based sources (e.g., AWS S3), ensuring data quality, lineage, and consistency throughout the pipeline lifecycle.
    • Manage orchestration and automation of data workflows using tools like Airflow or Databricks Jobs, while implementing robust CI/CD pipelines for code deployment and testing.
    • Collaborate cross-functionally with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights through robust data infrastructure.
    • Mentor and guide junior engineers, promoting engineering best practices, code quality, and continuous learning within the team.
    • Ensure adherence to data governance and security policies, utilizing tools such as Unity Catalog for access control and compliance.
    • Continuously evaluate new technologies and practices, driving innovation and improvements in data engineering strategy and execution.
    • Experience in designing, building, and maintaining data pipelines using Apache Airflow, including DAG creation, task orchestration, and workflow optimization for scalable data processing.
    • Upper-Intermediate English level.

     

     

    👨‍💻Who will you have the opportunity to meet during the hiring process (stages):
    Call, HR, Tech interview, PM interview.

     

    🥯What we can offer you:

    • We have stable and highly-functioning processes – everyone has their own role and clear responsibilities, so decisions are made quickly and without unnecessary approvals. 
    • You will have enough independence to make decisions that can affect not only the project but also the work of the company.
    • We are a team of like-minded experts who create interesting products during working hours and enjoy spending free time together.
    • Do you like to learn something new in your profession or do you want to improve your English? We will be happy to pay 50% of the cost of courses/conferences/speaking clubs.
    • Do you want an individual development plan? We will form one especially for you + you can count on mentoring from our seniors and leaders.
    • Do you have a friend who is currently looking for new job opportunities? Recommend them to us and get a bonus.
    • And what if you want to relax? Then we have 21 working days off.
    • What if you are feeling bad? You can take 5 sick leaves a year.
    • Do you want to volunteer? We will add you to a chat, where we can get a bulletproof vest, buy a pickup truck or send children's drawings to the front.
    • And we have the most empathetic HRs (who also volunteer!). So we are ready to support your well-being in various ways.

     

    👨‍🏫A little more information that you may find useful:

    - our adaptation period lasts 3 months, this period of time is enough for us to understand  each other better;

    - there is a performance review after each year of our collaboration where we use a skills map to track your growth;

    - we really have no boundaries in the truest sense of the word: your working day is flexible and up to you.

     

    Of course, we have a referral bonus system.

  • 136 views · 37 applications · 24d

    Middle+ Data Engineer

    Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate

    Start Date: ASAP
    Weekly Hours: ~15–20 hours
    Location: Remote
    Client: USA-based LegalTech Platform

     

    About the Project

    Join a growing team working on an AI-powered legal advisory platform designed to simplify and streamline legal support for businesses. The platform includes:

    • A robust contract library
    • AI-assisted document generation and guidance
    • Interactive legal questionnaires
    • A dynamic legal insights blog

       

    We're currently developing a Proof of Concept (PoC) for an advanced AI agent and are looking for a skilled Python/Data Engineer to support core backend logic and data workflows.

     

    Your Core Responsibilities

    • Design and implement ETL/ELT pipelines in the context of LLMs and AI agents
    • Collaborate directly with the AI Architect on PoC features and architecture
    • Contribute to scalable, production-ready backend systems for AI components
    • Handle structured and unstructured data processing
    • Support data integrations with vector databases and AI model inputs (a brief sketch follows this list)
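
    A hedged sketch of the kind of pipeline described above: chunking a document, embedding the chunks, and upserting the vectors into Qdrant. The embed() helper is a stand-in for whichever embedding model the PoC actually uses, and all names are illustrative.

        # Illustrative document -> chunks -> vectors -> Qdrant flow.
        import uuid
        from qdrant_client import QdrantClient
        from qdrant_client.models import Distance, PointStruct, VectorParams

        def embed(text: str) -> list[float]:
            # Placeholder: deterministic dummy vector; a real pipeline would
            # call the project's chosen embedding model here.
            return [float(b) / 255 for b in text.encode()[:16].ljust(16, b"\0")]

        def chunk(text: str, size: int = 1000) -> list[str]:
            return [text[i:i + size] for i in range(0, len(text), size)]

        client = QdrantClient(url="http://localhost:6333")
        client.recreate_collection(
            collection_name="contracts",
            vectors_config=VectorParams(size=16, distance=Distance.COSINE),
        )

        document = open("sample_contract.txt").read()
        points = [
            PointStruct(id=str(uuid.uuid4()), vector=embed(c), payload={"text": c})
            for c in chunk(document)
        ]
        client.upsert(collection_name="contracts", points=points)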

     

    Must-have experience with:

    • Python (3+ years)
    • FastAPI
    • ETL / ELT pipelines
    • Vector Databases (e.g., Pinecone, Weaviate, Qdrant)
    • pandas, numpy, unstructured.io
    • Working with transformers and LLM-adjacent tools

     

    Being a part of 3asoft means having:
    - High level of flexibility and freedom
    - p2p relationship with worldwide customers
    - Competitive compensation paid in USD
    - Fully remote working

  • 40 views · 6 applications · 24d

    Data Engineer (Azure stack)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate

    Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client’s platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

    Key Responsibilities:
    - Create and manage scalable data pipelines with Azure SQL and other databases.
    - Use Azure Data Factory to automate data workflows.
    - Write efficient Python code for data analysis and processing.
    - Use Docker for application containerization and deployment streamlining.
    - Manage code quality and version control with Git.

    Skills Requirements:
    - 3+ years of experience with Python.
    - 2+ years of experience as a Data Engineer.
    - Strong SQL knowledge, preferably with Azure SQL experience.
    - Python skills for data manipulation.
    - Expertise in Docker for app containerization.
    - Familiarity with Git for managing code versions and collaboration.
    - Upper-intermediate level of English.

    Optional Skills (as a plus):
    - Experience with Azure Data Factory for orchestrating data processes.
    - Experience developing APIs with FastAPI or Flask.
    - Proficiency in Databricks for big data tasks.
    - Experience in a dynamic, agile work environment.
    - Ability to manage multiple projects independently.
    - Proactive attitude toward continuous learning and improvement.

    We offer:

    - Great networking opportunities with international clients, challenging tasks;

    - Building interesting projects from scratch using new technologies;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities and corporate events.

  • 107 views · 2 applications · 23d

    Middle/Senior Data Engineer (3445)

    Full Remote · Ukraine · 3 years of experience · Intermediate

    General information:
    We’re ultimately looking for someone who understands data flows well, has strong analytical thinking, and can grasp the bigger picture. If you’re the kind of person who asks the right questions and brings smart ideas to the table, some specific requirements can be flexible: we’re more interested in finding "our person" :)
     

    Responsibilities:
    Implementation of business logic in the Data Warehouse according to the specifications
    Some business analysis to enable providing the relevant data in a relevant manner
    Conversion of business requirements into data models
    Pipeline management (ETL pipelines in Data Factory)
    Load and query performance tuning
    Working with senior staff on the customer's side who will provide requirements, while the engineer may propose their own ideas
     

    Requirements:
    Experience with Azure and readiness to work (up to 80% of the time) with SQL is a must
    Development of database systems (MS SQL/T-SQL, SQL)
    Writing well-performing SQL code and investigating & implementing performance measures
    Data warehousing / dimensional modeling
    Working within an Agile project setup
    Creation and maintenance of Azure DevOps & Data Factory pipelines
    Developing robust data pipelines with DBT

    Experience with Databricks (optional)
    Work in Supply Chain & Logistics and awareness of SAP MM data structures (optional).

     

  • 47 views · 5 applications · 23d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 8 years of experience · Upper-Intermediate

    Position Summary: 

    We are looking for a talented Senior Data Platform Engineer to join our Blockchain team, to participate in the development of the data collection and processing framework to integrate new chains. This is a remote role and we are flexible with considering applications from anywhere in Europe. 
     
    Duties and responsibilities: 

    • Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within Crystal's platform; 
    • Active participation in development and maintenance of our data pipelines and backend services; 
    • Integrate new technologies into our processes and tools; 
    • End-to-end feature designing and implementation; 
    • Code, debug, test and deliver features and improvements in a continuous manner; 
    • Provide code review, assistance and feedback for other team members. 


    Required: 

    • 8+ years of experience developing Python backend services and APIs; 
    • Advanced knowledge of SQL - ability to write, understand and debug complex queries; 
    • Basic data warehousing and database architecture principles;
    • POSIX/Unix/Linux ecosystem knowledge; 
    • Strong knowledge of and experience with Python, and API frameworks such as Flask or FastAPI (a small sketch follows this list);
    • Knowledge about blockchain technologies or willingness to learn; 
    • Experience with PostgreSQL database system; 
    • Knowledge of Unit Testing principles; 
    • Experience with Docker containers and proven ability to migrate existing services; 
    • Independent and autonomous way of working; 
    • Team-oriented work and good communication skills are an asset. 
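
    A minimal FastAPI sketch in the spirit of the stack listed above; the endpoint, response model, and in-memory lookup are illustrative and are not Crystal's actual API.

        # Illustrative read-only endpoint; a real service would query PostgreSQL.
        from fastapi import FastAPI, HTTPException
        from pydantic import BaseModel

        app = FastAPI()

        class AddressInfo(BaseModel):
            address: str
            chain: str
            tx_count: int

        _FAKE_DB = {"0xabc": AddressInfo(address="0xabc", chain="ethereum", tx_count=42)}

        @app.get("/addresses/{address}", response_model=AddressInfo)
        def get_address(address: str) -> AddressInfo:
            info = _FAKE_DB.get(address)
            if info is None:
                raise HTTPException(status_code=404, detail="address not found")
            return info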


    Would be a plus: 

    • Practical experience in big data and frameworks – Kafka, Spark, Flink, Data Lakes and Analytical Databases such as ClickHouse; 
    • Knowledge of Kubernetes and Infrastructure as Code – Terraform and Ansible; 
    • Passion for Bitcoin and Blockchain technologies; 
    • Experience with distributed systems; 
    • Experience with open-source solutions;
    • Experience with Java or willingness to learn. 
  • 41 views · 0 applications · 22d

    Middle BI/DB Developer

    Office Work · Ukraine (Lviv) · Product · 2 years of experience · Upper-Intermediate

    About us:

    EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.

    But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.

    EveryMatrix is a member of the World Lottery Association (WLA) and European Lotteries Association. In September 2023 it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market leading gaming solutions.

    Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
     

    We are looking for a passionate and dedicated Middle BI/DB Developer to join our team in Lviv!

    About the unit:

    DataMatrix is a part of the EveryMatrix platform that is responsible for collecting, storing, processing, and utilizing hundreds of millions of transactions from the whole platform every single day. We develop Business Intelligence solutions, reports, 3rd-party integrations, data streaming, and other products for both external and internal use. The team consists of 35 people and is located in Lviv.

    What You'll get to do:

    • Develop real-time data processing and aggregations
    • Create and modify data marts (enhance our data warehouse)
    • Take care of internal and external integrations
    • Forge various types of reports

    Our main stack:

    • DB: BigQuery, PostgreSQL
    • ETL: Apache Airflow, Apache NiFi
    • Streaming: Apache Kafka

    What You need to know:

    Here's what we offer:

    • Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
    • Stay Healthy: 10 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
    • Support for New Parents:
    • 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
    • 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.

    Our office perks include on-site massages and frequent team-building activities in various locations.

    Benefits & Perks:

    • Daily catered lunch or monthly lunch allowance.
    • Private Medical Subscription.
    • Access online learning platforms like Udemy for Business, LinkedIn Learning or O’Reilly, and a budget for external training.
    • Gym allowance

    At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!

  • 49 views · 5 applications · 22d

    Data Engineer (PostgreSQL, Snowflake, Google BigQuery, MongoDB, Elasticsearch)

    Full Remote · Worldwide · 5 years of experience · Intermediate

    We are looking for a Data Engineer with a diverse background in data integration to join the Data Management team. Some data is small, some is very large (1 trillion+ rows); some is structured, some is not. Our data comes in all kinds of sizes, shapes, and formats: traditional RDBMSs like PostgreSQL, Oracle, and SQL Server; MPPs like StarRocks, Vertica, Snowflake, and Google BigQuery; and unstructured, key-value stores like MongoDB and Elasticsearch, to name a few.

     

    We are looking for individuals who can design and solve any data problems using different types of databases and technology supported within our team.  We use MPP databases to analyze billions of rows in seconds.  We use Spark and Iceberg, batch or streaming to process whatever the data needs are.  We also use Trino to connect all different types of data without moving them around. 
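
    A hedged sketch of the federated-query idea described above, using the Trino Python client; the host, catalogs, and table names are illustrative only.

        # Illustrative cross-catalog join via Trino: Hive data joined to PostgreSQL.
        import trino

        conn = trino.dbapi.connect(
            host="trino.internal", port=8080, user="data_eng",
            catalog="hive", schema="default",
        )
        cur = conn.cursor()
        cur.execute("""
            SELECT p.customer_id, COUNT(*) AS events
            FROM hive.web.clickstream c
            JOIN postgresql.public.customers p ON c.customer_id = p.customer_id
            GROUP BY p.customer_id
            LIMIT 100
        """)
        for row in cur.fetchall():
            print(row)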

     

    Besides a competitive compensation package, you’ll be working with a great group of technologists interested in finding the right database to use, the right technology for the job in a culture that encourages innovation.  If you’re ready to step up and take on some new technical challenges at a well-respected company, this is a unique opportunity for you.

     

    Responsibilities:

    • Work within our on-prem Hadoop ecosystem to develop and maintain ETL jobs
    • Design and develop data projects against RDBMS such as PostgreSQL 
    • Implement ETL/ELT processes using various tools (Pentaho) or programming languages (Java, Python) at our disposal 
    • Analyze business requirements, design and implement required data models
    • Lead data architecture and engineering decision making/planning.
    • Translate complex technical subjects into terms that can be understood by both technical and non-technical audiences.

     

    Qualifications: (must have)

    • BA/BS in Computer Science or in related field
    • 5+ years of experience with RDBMS databases such as Oracle, MSSQL or PostgreSQL
    • 2+ years of experience managing or developing in the Hadoop ecosystem
    • Programming background with either Python, Scala, Java or C/C++
    • Experience with Spark: PySpark, Spark SQL, Spark Streaming, etc.
    • Strong in any of the Linux distributions: RHEL, CentOS, or Fedora
    • Working knowledge of orchestration tools such as Oozie and Airflow
    • Experience working in both OLAP and OLTP environments
    • Experience working on-prem, not just cloud environments
    • Experience working with teams outside of IT (i.e. Application Developers, Business Intelligence, Finance, Marketing, Sales)

     

    Desired: (nice to have)

    • Experience with Pentaho Data Integration or any ETL tools such as Talend, Informatica, DataStage, or Apache Hop.
    • Deep knowledge of shell scripting, scheduling, and monitoring processes on Linux
    • Experience using reporting and Data Visualization platforms (Tableau, Pentaho BI)
    • Working knowledge of data unification and setup using Presto/Trino
    • Web analytics or Business Intelligence a plus
    • Understanding of Ad stack and data (Ad Servers, DSM, Programmatic, DMP, etc)