Jobs

  • 59 views · 8 applications · 15d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate

    We're Applyft - an IT product company that creates value-driven mobile apps. Our journey began with the Geozilla family locator product, but now our portfolio consists of four apps in the Family Safety, Entertainment, and Mental Health spheres. We're proud to have a base of 5M monthly active users and to achieve 20% QoQ revenue growth.

     

    Now we are looking for a Middle/Senior Data Engineer to join our Analytics team.

     

    What you’ll do:

     

    • Design, develop and maintain Data pipelines and ETL processes for internal DWH
    • Develop and support integrations with 3rd party systems
    • Be responsible for the quality of data presented in BI dashboards
    • Collaborate with data analysts to troubleshoot data issues and optimize data workflows

     

    Your professional qualities:

     

    • 3+ years of BI/DWH development experience
    • Excellent knowledge of database concepts and hands-on experience with SQL
    • Proven experience of designing, implementing, and maintaining ETL data pipelines
    • Hands-on experience writing production-level Python code and managing workflows with Airflow
    • Experience working with cloud-native technologies (AWS/GCP)

     

    Will be a plus:

     

    • Experience with billing systems, enterprise financial reporting, subscription monetization products
    • Experience supporting product and marketing data analytics

     

    We offer:

     

    • Remote-First culture: We provide a flexible working schedule and you can work anywhere in the world 
    • Health care program: We provide health insurance, sports compensation, and 20 paid sick days
    • Professional Development: The company provides a budget for each employee for courses, trainings, and conferences
    • Personal Equipment Policy: We provide all necessary equipment for your work. For Ukrainian employees we also provide an Ecoflow
    • Vacation Policy: Each employee in our company has 20 paid vacation days and extra days on the occasion of special events
    • Knowledge sharing: We are glad to share our knowledge and experience in our internal events
    • Corporate Events: We organize corporate events and team-building activities across our hubs
  • 43 views · 1 application · 5d

    Data Engineer (Python)

    Full Remote · Countries of Europe or Ukraine · 1.5 years of experience

    Dataforest is seeking an experienced Data Engineer (Python) to join our dynamic team. You will be responsible for developing and maintaining data-processing architecture, as well as optimizing and monitoring our internal systems.

    Requirements:
    - 1.5+ years of commercial experience with Python.
    - Experience with ElasticSearch and PostgreSQL.
    - Knowledge and experience with Redis, Kafka, and SQS.
    - Experience setting up monitoring systems with CloudWatch, Prometheus, and Grafana.
    - Deep understanding of algorithms and their complexities.
    - Excellent programming skills in Python with a focus on optimization and code structuring.
    - Knowledge of ETL principles and practices.
    - Ability to work collaboratively and communicate effectively.
    - Experience with Linux environments, cloud services (AWS), and Docker.

    Will be a plus:
    - Knowledge of web scraping, data extraction, cleaning, and visualization.
    - Understanding of multiprocessing and multithreading, including process and thread management.
    - Experience with Flask / Flask-RESTful for API development.

    Key Responsibilities:
    - Develop and maintain data processing architecture using Python.
    - Efficiently utilize ElasticSearch and PostgreSQL for data management.
    - Implement and manage data pipelines using Redis, Kafka, and SQS.
    - Set up and monitor logging systems using CloudWatch, Prometheus, and Grafana.
    - Optimize code and improve its structure and performance.
    - Understand and implement ETL processes.
    - Analyze algorithms and code complexity to enhance efficiency.
    - Work with the AWS stack to ensure flexibility and reliability in data processing.

    We offer:
    - Great networking opportunities with international clients, challenging tasks;
    - Building interesting projects from scratch using new technologies;
    - Personal and professional development opportunities;
    - Competitive salary fixed in USD;
    - Paid vacation and sick leaves;
    - Flexible work schedule;
    - Friendly working environment with minimal hierarchy;
    - Team building activities and corporate events.

  • 18 views · 1 application · 1d

    Senior Big Data / ML Engineer to $8000

    Full Remote · Spain, Poland, Portugal, Romania, Ukraine · Product · 7 years of experience · Upper-Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product:

    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

     

    About the Role:

    As a data engineer, you'll have end-to-end ownership: from system architecture and software development to operational excellence.

     

    Key Responsibilities:

    • Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
    • Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
    • Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
    • Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
    • Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.

     

    Required Competence and Skills:
    To excel in this role, candidates should possess the following qualifications and experiences:

    • A Bachelor’s or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
    • At least 5 years of experience as a data engineer, software engineer, or in a similar role, using data to drive business results.
    • At least 5 years of experience with Python, building modular, testable, and production-ready code.
    • Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
    • Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
    • A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
    • Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.

     

    Nice-to-Have:

    • Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
    • Familiarity with API development frameworks (e.g., FastAPI).
    • Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
    • Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of the country you are based in).

    We provide full accounting and legal support in all countries where we operate.

    We utilize a fully remote work model, with a powerful workstation and co-working space in case you need it.

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 79 views · 9 applications · 5d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;
    • Experience with SQL;
    • Experience with Python.

    Optional skills (as a plus):
    • Experience with ETL / ELT pipelines;
    • Experience with PySpark;
    • Experience with Airflow;
    • Experience with Databricks.

    Key Responsibilities:
    • Apply data processing algorithms;
    • Create ETL/ELT pipelines and data management solutions;
    • Work with SQL queries for data extraction and analysis;
    • Data analysis and application of data processing algorithms to solve business problems.

    We offer:
    • Onboarding phase with hands-on experience with major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
    • Opportunity to work with the high-skilled engineering team on challenging projects;
    • Interesting projects with new technologies;
    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 63 views · 1 application · 25d

    Data Engineer

    Office Work · Ukraine (Kyiv) · Product · 3 years of experience · Ukrainian Product 🇺🇦

    Ajax Systems is a full-cycle company working from idea generation and R&D to mass production and sales. We do everything: we produce physical devices (the system includes many different sensors and hubs), write firmware for them, develop the server part and release mobile applications. The whole team is in one office in Kyiv, all technical and product decisions are made locally. We're looking for a Data Engineer to join us and continue the evolution of a product that we love: someone who takes pride in their work to ensure that user experience and development quality are superb.
     

    Required skills:
     

    Proven experience in a Data Architect or Architect Data Engineer role

    At least 3 years of experience as a Python Developer

    Strong problem-solving, troubleshooting, and analysis skills

    Previous experience and a substantial understanding of:

    Data ingestion frameworks for real-time and batch processing

    Development and optimization of relational databases such as MySQL or PostgreSQL

    Working with NoSQL databases and search systems (including Elasticsearch, Kibana, and MongoDB)

    Cloud-based object storage systems (e.g. S3-compatible services)

    Data access and warehousing tools for analytical querying (e.g. distributed query engines, cloud data warehouses)
     

    Will be a plus:
     

    Working with large volumes of data and databases

    Knowledge of version control tools such as Git

    English at the level of reading and understanding technical documentation

    Ability to create complex SQL queries against data warehouses and application databases
     

    Tasks and responsibilities:
     

    Develop and manage large-scale data systems, ingestion capabilities, and infrastructure. Support the design and development of solutions for the deployment of dashboards and reports to various stakeholders.

    Architect data pipelines and ETL processes to connect with various data sources. Design and maintain enterprise data warehouse models. Manage the cloud-based data & analytics platform. Deploy updates and fixes.

    Evaluate large and complex data sets.

    Ensure queries are efficient and use the least amount of resources possible. Troubleshoot queries to address critical production issues.

    Assist other team members in refining complex queries and performance tuning.

    Understand and analyze requirements to develop, test, and deploy complex SQL queries used to extract business data for regulatory and other purposes.

    Write and maintain technical documentation.

  • 149 views · 24 applications · 18d

    Middle/Senior Database Engineer to $5500

    Full Remote · Worldwide · Product · 1 year of experience · Intermediate

    Responsibilities:

    • Support the development and maintenance of data pipelines using PostgreSQL, Python, Bash, and Airflow
    • Write and optimize SQL queries for data extraction and transformation
    • Assist with SQL performance tuning and monitoring database performance (mainly PostgreSQL)
    • Work closely with senior engineers to implement and improve ETL processes
    • Participate in automation of data workflows and ensure data quality
    • Document solutions and contribute to knowledge sharing

    Requirements:

    • 3-5 years of experience in a similar role (Database Engineer, Data Engineer, etc.)
    • Solid knowledge of PostgreSQL, Oracle, and SQL (must be confident writing complex queries)
    • Basic to intermediate knowledge of Python and Bash scripting
    • Familiarity with Apache Airflow or similar workflow tools
    • Willingness to learn and grow in a data-focused engineering role

    Nice to Have:

    • Experience with Oracle, MS SQL Server, or Talend
    • Understanding of SQL performance tuning techniques
    • Exposure to cloud platforms (AWS, GCP, etc.)

     

  • 90 views · 19 applications · 13d

    Software Engineer to $4000

    Full Remote · Worldwide · 3 years of experience · Upper-Intermediate

    We are looking for a strong Software Engineer with experience in data engineering to join an international team working on large-scale solutions in the financial domain. This role involves building robust, scalable, and maintainable data pipelines and services in a cloud-based environment. You’ll be part of a cross-functional, high-performance team working with real-time and high-volume data systems.

    As part of a fast-growing and dynamic team, we value people who are proactive, self-driven, and detail-oriented: professionals who can work independently while keeping the broader product vision in mind.

     

    Key Responsibilities:

    • Design and develop microservices for the data engineering team (Java-based, running on Kubernetes)
    • Build and maintain high-performance ETL workflows and data ingestion logic
    • Handle data velocity, duplication, schema validation/versioning, and availability
    • Integrate third-party data sources to enrich financial data
    • Collaborate with cross-functional teams to align data consumption formats and standards
    • Optimize data storage, queries, and delivery for internal and external consumers
    • Maintain observability and monitoring across services and pipelines
       

    Requirements:

    • 3+ years of experience with Java (in production environments)
    • 3+ years in data engineering and pipeline development with large volumes of data
    • Experience with ETL workflows and data processing using cloud-native tools
    • Strong knowledge of SQL, relational and non-relational databases, and performance optimization
    • Experience with monitoring tools (e.g., Prometheus, Grafana, Datadog)
    • Familiarity with Kubernetes, Kafka, Redis, Snowflake, Clickhouse, Apache Airflow
    • Solid understanding of software engineering principles and object-oriented design
    • Ability to work independently and proactively, with strong communication skills

     

    Nice to Have:

    • Background in fintech or trading-related industries
    • Degree in Computer Science or related technical field
    • Experience with high-availability infrastructure design
       

    About the project:

    This is a long-term FinTech project focused on trade data monitoring and fraud detection. The engineering team is distributed across several countries and works with modern cloud-native technologies. You'll be part of an environment that values accountability, clarity, and product thinking.

  • 165 views · 25 applications · 13d

    Senior Software Engineer to $9000

    Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    We are looking for a Senior Software Engineer with strong algorithmic and data processing expertise to join a global team working on a complex trade surveillance system in the financial sector. The project focuses on batch and real-time analysis of trading data, leveraging advanced algorithmic models to detect fraud, manipulation, and other compliance breaches.

    You will work alongside quantitative analysts, compliance specialists, and other engineers to build, maintain, and scale a high-throughput, low-latency system for global markets.

     

    Key Responsibilities:

    • Design and implement algorithms for real-time and batch monitoring of financial transactions
    • Collaborate with data scientists and compliance experts to optimize detection models
    • Contribute to system architecture design for high availability and low-latency performance
    • Optimize and maintain an existing codebase for clarity, performance, and scalability
    • Work with distributed systems and databases for high-volume data ingestion and processing
    • Analyze performance bottlenecks and improve system reliability

     

    Requirements:

    • 5+ years of professional experience in backend or algorithmic development
    • At least 3 years working with algorithms in financial/trading systems or related fields
    • Strong proficiency in Java, Kotlin, C#, or C++
    • Solid understanding of software design principles and architectural patterns
    • Experience with real-time systems, distributed computing, and large-scale data pipelines
    • Proficiency with relational and non-relational databases
    • Excellent problem-solving and debugging skills
    • Strong interpersonal and communication skills
    • Python experience is a plus
    • Familiarity with statistical modeling and machine learning is an advantage
    • Bachelor's degree in Computer Science, Mathematics, or a related field (Master's or PhD is a plus)
       

    About the project:

    You will be part of an international engineering team focused on developing a modern, intelligent surveillance platform for financial institutions. The system processes high-frequency market data to identify irregular behavior and ensure regulatory compliance across jurisdictions.

    This role offers exposure to complex engineering challenges, financial domain knowledge, and the opportunity to shape a next-generation platform from within a collaborative and technically strong team.

  • 13 views · 0 applications · 15d

    Senior/Tech Lead Data Engineer

    Hybrid Remote · Poland, Ukraine (Kyiv, Lviv) · 5 years of experience · Upper-Intermediate

    Quantum is a global technology partner delivering high-end software products that address real-world problems. 

    We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.

    Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.

    We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn't yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!

     

    About the position

    Quantum is expanding the team and has brilliant opportunities for a Data Engineer. As a Senior/Tech Lead Data Engineer, you will be pivotal in designing, implementing, and optimizing data platforms. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.

    The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become "super-investors."

    Through our generative AI technology implemented into brokerage platforms and other financial institutions' infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.

     

    Must have skills:

    • Bachelor's Degree in Computer Science or related field
    • At least 5 years of experience in Data Engineering
    • Proven experience as a Tech Lead or Architect in data-focused projects, leadership skills, and experience managing or mentoring data engineering teams
    • Strong proficiency in Python and PySpark for building ETL pipelines and large-scale data processing
    • Deep understanding of Apache Spark, including performance tuning and optimization (job execution plans, broadcast joins, partitioning, skew handling, lazy evaluation)
    • Hands-on experience with AWS Cloud (minimum 2 years), including EMR and Glue
    • Familiarity with PySpark internals and concepts (Window functions, Broadcast joins, Sort & merge joins, Watermarking, UDFs, Lazy computation, Partition skew)
    • Practical experience with performance optimization of Spark jobs (MUST)
    • Strong understanding of OOD principles and familiarity with SOLID (MUST)
    • Experience with cloud-native data platforms and lakehouse architectures
    • Comfortable with SQL & NoSQL databases
    • Experience with testing practices such as TDD, unit testing, and integration testing
    • Strong problem-solving skills and a collaborative mindset
    • Upper-Intermediate or higher level of English (spoken and written)

     

    Your tasks will include:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency
    • Provide expertise in data modeling and database design to ensure the scalability and reliability of data platforms
    • Contribute to the development of best practices and standards for data engineering processes
    • Stay updated on emerging technologies and trends in the data engineering landscape

     

    We offer:

    • Delivering high-end software projects that address real-world problems
    • Surrounding experts who are ready to move forward professionally
    • Professional growth plan and team leader support
    • Taking ownership of R&D and socially significant projects
    • Participation in worldwide tech conferences and competitions
    • Taking part in regular educational activities
    • Being a part of a multicultural company with a fun and lighthearted atmosphere
    • Working from anywhere with flexible working hours
    • Paid vacation and sick leave days

       

    Join Quantum and take a step toward your data-driven future.

  • 37 views · 3 applications · 12d

    Senior/Lead Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    Job Description

    WHAT WE VALUE

    Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important.

    We expect you to be good at and have had hands-on experience with the following:

    • Expert in T-SQL
    • Proficiency in Python
    • Experience in Microsoft cloud technologies data services including but not limited to Azure SQL and Azure Data Factory
    • Experience with Snowflake, star schema, and data modeling; experience with migrations to Snowflake will be an advantage
    • Experience with or strong interest in DBT (data build tool) for transformations, testing, validation, data quality, etc.
    • English - Upper Intermediate

    On top of that, it would be an advantage to have knowledge of / interest in the following:

    • Some proficiency in C# .NET
    • Security first mindset, with knowledge on how to implement row level security etc.
    • Agile development methodologies and DevOps / DataOps practices such as continuous integration, continuous delivery, and continuous deployment. For example, automated DB validations and deployment of DB schema using DACPAC.

    As a person, you have the following traits:

    • Strong collaborator with teammates and stakeholders
    • Clear communicator who speaks up when needed.

    Job Responsibilities

    WHAT YOU WILL BE RESPONSIBLE FOR

    Ensure quality in our data solutions, so that we can guarantee good data quality across multiple customer tenants every time we release.

    Work together with the Product Architect on defining and refining the data architecture and roadmap.

    Facilitate the migration of our current data platform towards a more modern tool stack that can be more easily maintained by both data engineers and software engineers.

    Ensure that new data entities get implemented in the data model using schemas that are appropriate for their use, facilitating good performance and analytics needs.

    Guide and support people in other roles (engineers, testers, etc.) to spread data knowledge and experience more broadly across the team

    Department/Project Description

    WHO WE ARE

    For over 50 years, we have worked closely with investment and asset managers to become the world's leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds in general.

    SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients.

    SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.

     

    WHY THIS ROLE IS IMPORTANT TO US

    You will be joining an innovative application development team within SimCorp's Product Division. As a primary provider of SaaS offerings based on next-generation technologies, our Digital Engagement Platform is a cloud-native data application developed on Azure, utilizing SRE methodologies and continuous delivery. Your contribution to evolving DEP's data platform will be vital in ensuring we can scale to future customer needs and support future analytics requirements. Our future growth as a SaaS product is rooted in a cloud-native strategy that emphasizes adopting a modern data platform tool stack and the application of modern engineering principles as essential components.

    We are looking into a technology shift from Azure SQL to Snowflake in order to meet new client demands for scalability. You will be an important addition to the team for achieving this goal.

  • 40 views · 6 applications · 10d

    Senior Data Engineer (FinTech Project)

    Full Remote · EU · 4.5 years of experience · Upper-Intermediate

    Company Description

    We are looking for a Senior Data Engineer to join our Data Center of Excellence, part of Sigma Software's complex organizational structure, which combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment.

    CUSTOMER

    Our client is one of Europe's fastest-growing FinTech innovators, revolutionizing how businesses manage their financial operations. They offer an all-in-one platform that covers everything from virtual cards and account management to wire transfers and spend tracking. As a licensed payment institution, the client seamlessly integrates core financial services into their product, enabling companies to streamline their financial workflows with speed and security.

    PROJECT

    You will join a dynamic team driving the evolution of a high-performance data platform that supports real-time financial operations and analytics. The project focuses on building scalable data infrastructure that will guarantee accuracy, reliability, and compliance across multiple financial products and services. 

    Job Description

    • Collaborate with stakeholders to identify business requirements and translate them into technical specifications 
    • Design, build, monitor, and maintain data pipelines in production, including complex pipelines (Airflow, Python, event-driven systems) 
    • Develop and maintain ETL processes for ingesting and transforming data from various sources 
    • Monitor and troubleshoot infrastructure (e.g., Kubernetes and Terraform), including data quality, ETL processes, and cost optimization
    • Collaborate closely with analytics engineers on CI and infrastructure management 
    • Drive the establishment and maintenance of the highest coding standards and practices, ensuring the development of efficient, scalable, and reliable data pipelines and systems 
    • Participate in data governance initiatives to ensure data accuracy and integrity 
    • Actively participate in the data team's routines and enhancement plans 
    • Stay up to date with the latest developments in data technology and provide recommendations for improving our analytics capabilities 

    Qualifications

    • At least 5 years of experience in data engineering or software engineering with a strong focus on data infrastructure 
    • Hands-on experience in AWS (or equivalent cloud platforms like GCP) and data analytics services 
    • Strong proficiency in Python and SQL  
    • Good understanding of database design, optimization, and maintenance (using DBT)
    • Strong experience with data modeling, ETL processes, and data warehousing 
    • Familiarity with Terraform and Kubernetes 
    • Expertise in developing and managing large-scale data flows efficiently 
    • Experience with job orchestrators or scheduling tools like Airflow 
    • At least an Upper-Intermediate level of English 

    Would be a plus: 

    • Experience managing RBAC on a data warehouse
    • Experience maintaining security on a data warehouse (IP whitelists, masking, sharing data between accounts/clusters, etc.)

  • 31 views · 5 applications · 4d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦

    We are Boosta, an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.

    We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.

    Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.

     

    WHAT YOU’LL DO

    • Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
    • Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
    • Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place.
    • Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection using tools like Great Expectations or similar.
    • Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
    • Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
    • Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
    • Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
    • Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
    • Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
    • Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
    • Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.

       

    WHAT WE EXPECT FROM YOU

    • Strong proficiency in SQL and Python.
    • Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
    • Proven ability to design, deploy, and scale ETL/ELT pipelines.
    • Hands-on experience integrating and automating data from various platforms.
    • Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
    • Skilled in orchestration tools like Airflow, Prefect, or Dagster.
    • Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
    • Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
    • Experience with Docker to build isolated and reproducible environments for data workflows.
    • Exposure to iGaming data structures and KPIs is a strong advantage.
    • Strong sense of data ownership, documentation, and operational excellence.

       

    HOW IT WORKS

    • Stage 1: pre-screen with a recruiter.
    • Stage 2: test task.
    • Stage 3: interview.
    • Stage 4: bar-raising.
    • Stage 5: reference check.
    • Stage 6: job offer!

    The trial period for this position is 3 months, during which we will get used to working together.

     

    WHAT WE OFFER

    • 28 business days of paid time off
    • Flexible hours and the possibility to work remotely
    • Medical insurance and mental health care
    • Compensation for courses and trainings
    • English classes and speaking clubs
    • Internal library, educational events
    • Outstanding corporate parties and team buildings

     

  • 43 views · 0 applications · 21 May

    Middle BI/DB Developer

    Office Work · Ukraine (Lviv) · Product · 2 years of experience · Upper-Intermediate

    About us:

    EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.

    But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.

    EveryMatrix is a member of the World Lottery Association (WLA) and European Lotteries Association. In September 2023 it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market leading gaming solutions.

    Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
     

    We are looking for a passionate and dedicated Middle BI/DB Developer to join our team in Lviv!

    About the unit:

    DataMatrix is a part of the EveryMatrix platform that is responsible for collecting, storing, processing and utilizing hundreds of millions of transactions from the whole platform every single day. We develop Business Intelligence solutions, reports, 3rd party integrations, data streaming and other products for both external and internal use. The team consists of 35 people and is located in Lviv.

    What You'll get to do:

    • Develop real time data processing and aggregations
    • Create and modify data marts (enhance our data warehouse)
    • Take care of internal and external integrations
    • Forge various types of reports

    Our main stack:

    • DB: BigQuery, PostgreSQL
    • ETL: Apache Airflow, Apache NiFi
    • Streaming: Apache Kafka

    What You need to know:

    Here's what we offer:

    • Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
    • Stay Healthy: 10 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
    • Support for New Parents:
    • 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
    • 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.

    Our office perks include on-site massages and frequent team-building activities in various locations.

    Benefits & Perks:

    • Daily catered lunch or monthly lunch allowance.
    • Private Medical Subscription.
    • Access online learning platforms like Udemy for Business, LinkedIn Learning or O'Reilly, and a budget for external training.
    • Gym allowance

    At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!

  • 50 views · 5 applications · 21 May

    Data Engineer (PostgreSQL, Snowflake, Google BigQuery, MongoDB, Elasticsearch)

    Full Remote · Worldwide · 5 years of experience · Intermediate

    We are looking for a Data Engineer with a diverse background in data integration to join the Data Management team. Some data are small, some are very large (1 trillion+ rows), some data is structured, some data is not.  Our data comes in all kinds of sizes, shapes and formats.  Traditional RDBMS like PostgreSQL, Oracle, SQL Server, MPPs like StarRocks, Vertica, Snowflake, Google BigQuery, and unstructured, key-value like MongoDB, Elasticsearch, to name a few.

     

    We are looking for individuals who can design and solve any data problems using different types of databases and technology supported within our team.  We use MPP databases to analyze billions of rows in seconds.  We use Spark and Iceberg, batch or streaming to process whatever the data needs are.  We also use Trino to connect all different types of data without moving them around. 

     

    Besides a competitive compensation package, you'll be working with a great group of technologists interested in finding the right database to use, the right technology for the job, in a culture that encourages innovation. If you're ready to step up and take on some new technical challenges at a well-respected company, this is a unique opportunity for you.

     

    Responsibilities:

    • Work within our on-prem Hadoop ecosystem to develop and maintain ETL jobs
    • Design and develop data projects against RDBMS such as PostgreSQL 
    • Implement ETL/ELT processes using various tools (Pentaho) or programming languages (Java, Python) at our disposal 
    • Analyze business requirements, design and implement required data models
    • Lead data architecture and engineering decision making/planning.
    • Translate complex technical subjects into terms that can be understood by both technical and non-technical audiences.

     

    Qualifications: (must have)

    • BA/BS in Computer Science or in related field
    • 5+ years of experience with RDBMS databases such as Oracle, MSSQL or PostgreSQL
    • 2+ years of experience managing or developing in the Hadoop ecosystem
    • Programming background with either Python, Scala, Java or C/C++
    • Experience with Spark: PySpark, SparkSQL, Spark Streaming, etc.
    • Strong in any of the Linux distributions: RHEL, CentOS, or Fedora
    • Working knowledge of orchestration tools such as Oozie and Airflow
    • Experience working in both OLAP and OLTP environments
    • Experience working on-prem, not just cloud environments
    • Experience working with teams outside of IT (i.e. Application Developers, Business Intelligence, Finance, Marketing, Sales)

     

    Desired: (nice to have)

    • Experience with Pentaho Data Integration or any ETL tools such as Talend, Informatica, DataStage or HOP.
    • Deep knowledge of shell scripting, scheduling, and monitoring processes on Linux
    • Experience using reporting and Data Visualization platforms (Tableau, Pentaho BI)
    • Working knowledge of data unification and setup using Presto/Trino
    • Web analytics or Business Intelligence a plus
    • Understanding of Ad stack and data (Ad Servers, DSM, Programmatic, DMP, etc)
  • 68 views · 6 applications · 30d

    Associate Director, Analytics - Data Engineering to $4000

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · Advanced/Fluent

     

    We are looking for someone who is experienced and familiar with the following tools:

    Strong programming skills in SQL, Python, R, or other programming languages

    Experience with SQL and NoSQL databases

    Knowledge of ETL tools such as Google BigQuery, Funnel.io, and Tableau Prep

    Business Intelligence (Tableau, Looker, Google Looker Studio, Power BI, Datorama, etc.)

    Analytics platforms UIs / APIs (Google Analytics, Adobe, etc.)

    Media reporting UIs / APIs (DV360, SA360, Meta, etc.)

    Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform is a plus
    What we're looking for

    A Bachelor's or Master's degree in computer science, operations research, statistics, applied mathematics, management information systems, or a related quantitative field, or equivalent work experience in fields such as economics, engineering, or physics

    3+ years of experience with any combination of analytics, marketing analytics, and analytic techniques for marketing, customer, and business applications

    Hands-on experience using SQL, Google BigQuery, Python, R, and Google Cloud (GCP) related products

    Hands-on experience working with common ETL tools

    Hands-on experience using Tableau, Datorama, and other common visualisation tools

    Expertise across programmatic display, video, native, and ad serving technology, as well as digital advertising reporting, measurement, and attribution tools

    Adept with agile methodologies and well-versed in applying DataOps methods to the construction of pipelines and delivery

    Demonstrated ability to effectively operate both independently and in a team environment

    Responsibilities

    Design, develop and maintain complex data pipelines and systems to process large volumes of data

    Collaborate with cross-functional teams to gather requirements, identify data sources, and ensure data quality

    Architect data solutions that are scalable, reliable, and secure, and meet business requirements

    Develop and maintain data models, ETL processes, and data integration strategies

    Design and implement data governance policies and procedures to ensure data accuracy, consistency, and security

    Create visualisations and reports to communicate insights to stakeholders across multiple data streams

    Collaborate as part of a team to drive analyses and insights that lead to more informed decisions and improved business performance

    Work with other teams to ensure teamwork and timely delivery of client projects
     
