Jobs
· 26 views · 2 applications · 2d
Senior Big Data / ML Engineer to $8000
Full Remote · Spain, Poland, Portugal, Romania, Ukraine · Product · 7 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a data engineer, you'll have end-to-end ownership: from system architecture and software development to operational excellence.
Key Responsibilities:
- Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution (a minimal sketch follows this list).
- Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
- Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
- Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
- Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
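For illustration only, here is a minimal sketch of what a parallel Airflow training DAG along these lines could look like (assuming Airflow 2.x). The DAG id, segment names, and the train_model helper are hypothetical, not taken from the job description; a real pipeline would plug in the actual extraction and training logic.

```python
# Hypothetical sketch of an Airflow DAG with parallel training branches.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model(segment: str, **_) -> None:
    # Placeholder: fit and persist a model for one data segment.
    print(f"training model for segment={segment}")


with DAG(
    dag_id="ml_training_pipeline",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=lambda: None)

    # Independent per-segment tasks run in parallel once extraction finishes.
    train_tasks = [
        PythonOperator(
            task_id=f"train_{segment}",
            python_callable=train_model,
            op_kwargs={"segment": segment},
        )
        for segment in ("eu", "us", "apac")
    ]

    publish = PythonOperator(task_id="publish_metrics", python_callable=lambda: None)

    extract >> train_tasks >> publish
```

Whether a fixed per-segment fan-out like this or Airflow's dynamic task mapping fits better would depend on the actual workload.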
Required Competence and Skills:
To excel in this role, candidates should possess the following qualifications and experience:
- A Bachelor's degree or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.
- At least 5 years of experience as a data engineer, software engineer, or in a similar role, using data to drive business results.
- At least 5 years of experience with Python, building modular, testable, and production-ready code.
- Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
- Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
- A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
- Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Have:
- Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
- Familiarity with API development frameworks (e.g., FastAPI).
- Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
- Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
Why Us?
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We utilize a fully remote work model, providing a powerful workstation and access to a co-working space if you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
· 86 views · 9 applications · 6d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks;
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions (a small sketch follows this list);
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems;
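Purely as an illustration of the pandas + SQL level expected here, a tiny extract-transform-load script might look like the following; the connection string, table names, and columns are invented for the example.

```python
# Minimal pandas + SQL ETL sketch; DSN and table names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost:5432/analytics")

# Extract: pull raw orders with plain SQL.
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, created_at FROM raw_orders", engine
)

# Transform: fix types and aggregate revenue per customer per day.
orders["created_at"] = pd.to_datetime(orders["created_at"])
daily_revenue = (
    orders.assign(order_date=orders["created_at"].dt.date)
    .groupby(["order_date", "customer_id"], as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "revenue"})
)

# Load: write the analytics-ready table back to the database.
daily_revenue.to_sql("daily_revenue", engine, if_exists="replace", index=False)
```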
We offer:
• Onboarding phase with hands-on experience with our major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
• Opportunity to work with a highly skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 63 views · 1 application · 26d
Data Engineer
Office Work · Ukraine (Kyiv) · Product · 3 years of experience · Ukrainian Product 🇺🇦
Ajax Systems is a full-cycle company working from idea generation and R&D to mass production and sales. We do everything: we produce physical devices (the system includes many different sensors and hubs), write firmware for them, develop the server part, and release mobile applications. The whole team is in one office in Kyiv, and all technical and product decisions are made locally. We're looking for a Data Engineer to join us and continue the evolution of a product that we love: someone who takes pride in their work to ensure that user experience and development quality are superb.
Required skills:
Proven experience in a Data Architect or architect-level Data Engineer role
At least 3 years of experience as a Python Developer
Strong problem-solving, troubleshooting, and analysis skills
Previous years of experience with, and a substantial understanding of:
Data ingestion frameworks for real-time and batch processing
Development and optimization of relational databases such as MySQL or PostgreSQL
Working with NoSQL databases and search systems (including Elasticsearch, Kibana, and MongoDB)
Cloud-based object storage systems (e.g. S3-compatible services)
Data access and warehousing tools for analytical querying (e.g. distributed query engines, cloud data warehouses)
Will be a plus:
Working with large volumes of data and databases
Knowledge of version control tools such as Git
English at the level of reading and understanding technical documentation
Ability to create complex SQL queries against data warehouses and application databases
Tasks and responsibilities:
Develop and manage large-scale data systems, ingestion capabilities, and infrastructure. Support the design and development of solutions for the deployment of dashboards and reports to various stakeholders.
Architect data pipelines and ETL processes to connect with various data sources
Design and maintain enterprise data warehouse models
Manage the cloud-based data & analytics platform
Deploy updates and fixes
Evaluate large and complex data sets
Ensure queries are efficient and use the least amount of resources possible
Troubleshoot queries to address critical production issues
Assist other team members in refining complex queries and with performance tuning
Understand and analyze requirements to develop, test and deploy complex SQL queries used to extract business data for regulatory and other purposes;
Write and maintain technical documentation.
· 149 views · 24 applications · 19d
Middle/Senior Database Engineer to $5500
Full Remote · Worldwide · Product · 1 year of experience · Intermediate
Responsibilities:
- Support the development and maintenance of data pipelines using PostgreSQL, Python, Bash, and Airflow
- Write and optimize SQL queries for data extraction and transformation
- Assist with SQL performance tuning and monitoring database performance (mainly PostgreSQL; a short tuning sketch follows this list)
- Work closely with senior engineers to implement and improve ETL processes
- Participate in automation of data workflows and ensure data quality
- Document solutions and contribute to knowledge sharing
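As a rough illustration of the tuning work mentioned above, a small script along these lines could capture the execution plan of a slow query; the DSN and query are placeholders, not part of the actual stack.

```python
# Hypothetical sketch: print the EXPLAIN (ANALYZE) plan of a slow PostgreSQL query.
import psycopg2

SLOW_QUERY = """
    SELECT customer_id, SUM(amount) AS total
    FROM transactions
    WHERE created_at >= %s
    GROUP BY customer_id
"""

with psycopg2.connect("dbname=reporting user=etl host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + SLOW_QUERY, ("2024-01-01",))
        for (plan_line,) in cur.fetchall():
            # Look for sequential scans, misestimated row counts, or sorts spilling to disk.
            print(plan_line)
```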
Requirements:
- 3-5 years of experience in a similar role (Database Engineer, Data Engineer, etc.)
- Solid knowledge of PostgreSQL, Oracle, and SQL (must be confident writing complex queries)
- Basic to intermediate knowledge of Python and Bash scripting
- Familiarity with Apache Airflow or similar workflow tools
- Willingness to learn and grow in a data-focused engineering role
Nice to Have:
- Experience with Oracle, MS SQL Server, or Talend
- Understanding of SQL performance tuning techniques
- Exposure to cloud platforms (AWS, GCP, etc.)
· 90 views · 19 applications · 14d
Software Engineer to $4000
Full Remote · Worldwide · 3 years of experience · Upper-Intermediate
We are looking for a strong Software Engineer with experience in data engineering to join an international team working on large-scale solutions in the financial domain. This role involves building robust, scalable, and maintainable data pipelines and services in a cloud-based environment. You'll be part of a cross-functional, high-performance team working with real-time and high-volume data systems.
As part of a fast-growing and dynamic team, we value people who are proactive, self-driven, and detail-oriented: professionals who can work independently while keeping the broader product vision in mind.
Key Responsibilities:
- Design and develop microservices for the data engineering team (Java-based, running on Kubernetes)
- Build and maintain high-performance ETL workflows and data ingestion logic
- Handle data velocity, duplication, schema validation/versioning, and availability
- Integrate third-party data sources to enrich financial data
- Collaborate with cross-functional teams to align data consumption formats and standards
- Optimize data storage, queries, and delivery for internal and external consumers
- Maintain observability and monitoring across services and pipelines
Requirements:
- 3+ years of experience with Java (in production environments)
- 3+ years in data engineering and pipeline development with large volumes of data
- Experience with ETL workflows and data processing using cloud-native tools
- Strong knowledge of SQL, relational and non-relational databases, and performance optimization
- Experience with monitoring tools (e.g., Prometheus, Grafana, Datadog)
- Familiarity with Kubernetes, Kafka, Redis, Snowflake, ClickHouse, and Apache Airflow
- Solid understanding of software engineering principles and object-oriented design
- Ability to work independently and proactively, with strong communication skills
Nice to Have:
- Background in fintech or trading-related industries
- Degree in Computer Science or related technical field
- Experience with high-availability infrastructure design
About the project:
This is a long-term FinTech project focused on trade data monitoring and fraud detection. The engineering team is distributed across several countries and works with modern cloud-native technologies. You'll be part of an environment that values accountability, clarity, and product thinking.
· 166 views · 25 applications · 14d
Senior Software Engineer to $9000
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
We are looking for a Senior Software Engineer with strong algorithmic and data processing expertise to join a global team working on a complex trade surveillance system in the financial sector. The project focuses on batch and real-time analysis of trading data, leveraging advanced algorithmic models to detect fraud, manipulation, and other compliance breaches.
You will work alongside quantitative analysts, compliance specialists, and other engineers to build, maintain, and scale a high-throughput, low-latency system for global markets.
Key Responsibilities:
- Design and implement algorithms for real-time and batch monitoring of financial transactions
- Collaborate with data scientists and compliance experts to optimize detection models
- Contribute to system architecture design for high availability and low-latency performance
- Optimize and maintain an existing codebase for clarity, performance, and scalability
- Work with distributed systems and databases for high-volume data ingestion and processing
- Analyze performance bottlenecks and improve system reliability
Requirements:
- 5+ years of professional experience in backend or algorithmic development
- At least 3 years working with algorithms in financial/trading systems or related fields
- Strong proficiency in Java, Kotlin, C#, or C++
- Solid understanding of software design principles and architectural patterns
- Experience with real-time systems, distributed computing, and large-scale data pipelines
- Proficiency with relational and non-relational databases
- Excellent problem-solving and debugging skills
- Strong interpersonal and communication skills
- Python experience is a plus
- Familiarity with statistical modeling and machine learning is an advantage
- Bachelor's degree in Computer Science, Mathematics, or related field (Master's or PhD is a plus)
About the project:
You will be part of an international engineering team focused on developing a modern, intelligent surveillance platform for financial institutions. The system processes high-frequency market data to identify irregular behavior and ensure regulatory compliance across jurisdictions.
This role offers exposure to complex engineering challenges, financial domain knowledge, and the opportunity to shape a next-generation platform from within a collaborative and technically strong team.
· 13 views · 0 applications · 16d
Senior/Tech Lead Data Engineer
Hybrid Remote · Poland, Ukraine (Kyiv, Lviv) · 5 years of experience · Upper-Intermediate
Quantum is a global technology partner delivering high-end software products that address real-world problems.
We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.
Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.
We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isnβt yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!
About the position
Quantum is expanding the team and has brilliant opportunities for a Data Engineer. As a Senior/Tech Lead Data Engineer, you will be pivotal in designing, implementing, and optimizing data platforms. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.
The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become "super-investors."
Through our generative AI technology implemented into brokerage platforms and other financial institutions' infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.
Must have skills:
- Bachelor's Degree in Computer Science or related field
- At least 5 years of experience in Data Engineering
- Proven experience as a Tech Lead or Architect in data-focused projects, leadership skills, and experience managing or mentoring data engineering teams
- Strong proficiency in Python and PySpark for building ETL pipelines and large-scale data processing
- Deep understanding of Apache Spark, including performance tuning and optimization (job execution plans, broadcast joins, partitioning, skew handling, lazy evaluation)
- Hands-on experience with AWS Cloud (minimum 2 years), including EMR and Glue
- Familiarity with PySpark internals and concepts (Window functions, Broadcast joins, Sort & merge joins, Watermarking, UDFs, Lazy computation, Partition skew)
- Practical experience with performance optimization of Spark jobs (MUST; a short sketch follows this list)
- Strong understanding of OOD principles and familiarity with SOLID (MUST)
- Experience with cloud-native data platforms and lakehouse architectures
- Comfortable with SQL & NoSQL databases
- Experience with testing practices such as TDD, unit testing, and integration testing
- Strong problem-solving skills and a collaborative mindset
- Upper-Intermediate or higher level of English (spoken and written)
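As a neutral illustration of the tuning topics listed above (broadcast joins, partitioning, skew), a PySpark fragment might look like this; the S3 paths, table layout, and column names are assumptions made for the sketch, not the client's schema.

```python
# Illustrative PySpark fragment: broadcast a small dimension, repartition before a wide aggregation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")          # large fact table (placeholder path)
countries = spark.read.parquet("s3://example-bucket/dim_country/")  # small dimension (placeholder path)

# Broadcasting the small dimension avoids shuffling the large `events` table for the join.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

# Repartitioning on a reasonably distributed key before the aggregation reduces partition skew.
daily_counts = (
    enriched.repartition(200, "event_date")
    .groupBy("event_date", "country_code")
    .agg(F.count("*").alias("events"))
)

daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_counts/"
)
```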
Your tasks will include:
- Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources
- Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions
- Lead efforts in performance tuning and query optimization to enhance data processing efficiency
- Provide expertise in data modeling and database design to ensure the scalability and reliability of data platforms
- Contribute to the development of best practices and standards for data engineering processes
- Stay updated on emerging technologies and trends in the data engineering landscape
We offer:
- Delivering high-end software projects that address real-world problems
- Surrounding experts who are ready to move forward professionally
- Professional growth plan and team leader support
- Taking ownership of R&D and socially significant projects
- Participation in worldwide tech conferences and competitions
- Taking part in regular educational activities
- Being a part of a multicultural company with a fun and lighthearted atmosphere
- Working from anywhere with flexible working hours
- Paid vacation and sick leave days
Join Quantum and take a step toward your data-driven future.
· 37 views · 3 applications · 13d
Senior/Lead Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
Job Description
WHAT WE VALUE
Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important.
We expect you to be good at and have had hands-on experience with the following:
- Expert in T-SQL
- Proficiency in Python
- Experience with Microsoft cloud data services, including but not limited to Azure SQL and Azure Data Factory
- Experience with Snowflake, star schemas, and data modeling; experience with migrations to Snowflake will be an advantage
- Experience with, or strong interest in, dbt (data build tool) for transformations, tests, validation, data quality, etc.
- English - Upper Intermediate
On top of that, it would be an advantage to have knowledge of or interest in the following:
- Some proficiency in C# .NET
- Security first mindset, with knowledge on how to implement row level security etc.
- Agile development methodologies and DevOps / DataOps practices such as continuous integration, continuous delivery, and continuous deployment. For example, automated DB validations and deployment of DB schema using DACPAC.
As a person, you have following traits:
- Strong collaborator with teammates and stakeholders
- Clear communicator who speaks up when needed.
Job Responsibilities
WHAT YOU WILL BE RESPONSIBLE FOR
Ensure quality in our data solutions, so that we maintain good data quality across multiple customer tenants every time we release.
Work together with the Product Architect on defining and refining the data architecture and roadmap.
Facilitate the migration of our current data platform towards a more modern tool stack that can be more easily maintained by both data engineers and software engineers.
Ensure that new data entities get implemented in the data model using schemas that are appropriate for their use, facilitating good performance and analytics needs.
Guide and support people of other roles (engineers, testers, etc.), to ensure the spread of data knowledge and experience more broadly in the team
Department/Project Description
WHO WE ARE
For over 50 years, we have worked closely with investment and asset managers to become the world's leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds in general.
SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients.
SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.
WHY THIS ROLE IS IMPORTANT TO US
You will be joining an innovative application development team within SimCorp's Product Division. As a primary provider of SaaS offerings based on next-generation technologies, our Digital Engagement Platform is a cloud-native data application developed on Azure, utilizing SRE methodologies and continuous delivery. Your contribution to evolving DEP's data platform will be vital in ensuring we can scale to future customer needs and support future analytics requirements. Our future growth as a SaaS product is rooted in a cloud-native strategy that emphasizes adopting a modern data platform tool stack and the application of modern engineering principles as essential components.
We are looking into a technology shift from Azure SQL to Snowflake in order to meet new client demands for scalability. You will be an important addition to the team in achieving this goal.
· 40 views · 6 applications · 11d
Senior Data Engineer (FinTech Project)
Full Remote · EU · 4.5 years of experience · Upper-Intermediate
Company Description
We are looking for a Senior Data Engineer to join our Data Center of Excellence, part of Sigma Software's complex organizational structure, which combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment.
CUSTOMER
Our client is one of Europe's fastest-growing FinTech innovators, revolutionizing how businesses manage their financial operations. They offer an all-in-one platform that covers everything from virtual cards and account management to wire transfers and spend tracking. As a licensed payment institution, the client seamlessly integrates core financial services into their product, enabling companies to streamline their financial workflows with speed and security.
PROJECT
You will join a dynamic team driving the evolution of a high-performance data platform that supports real-time financial operations and analytics. The project focuses on building scalable data infrastructure that will guarantee accuracy, reliability, and compliance across multiple financial products and services.
Job Description
- Collaborate with stakeholders to identify business requirements and translate them into technical specifications
- Design, build, monitor, and maintain data pipelines in production, including complex pipelines (Airflow, Python, event-driven systems)
- Develop and maintain ETL processes for ingesting and transforming data from various sources
- Monitor and troubleshoot infrastructure (Kubernetes, Terraform) and pipeline issues, including data quality, ETL processes, and cost optimization
- Collaborate closely with analytics engineers on CI and infrastructure management
- Drive the establishment and maintenance of the highest coding standards and practices, ensuring the development of efficient, scalable, and reliable data pipelines and systems
- Participate in data governance initiatives to ensure data accuracy and integrity
- Actively participate in the data team's routines and enhancement plans
- Stay up to date with the latest developments in data technology and provide recommendations for improving our analytics capabilities
Qualifications
- At least 5 years of experience in data engineering or software engineering with a strong focus on data infrastructure
- Hands-on experience in AWS (or equivalent cloud platforms like GCP) and data analytics services
- Strong proficiency in Python and SQL
- Good understanding of database design, optimization, and maintenance (using dbt)
- Strong experience with data modeling, ETL processes, and data warehousing
- Familiarity with Terraform and Kubernetes
- Expertise in developing and managing large-scale data flows efficiently
- Experience with job orchestrators or scheduling tools like Airflow
- At least an Upper-Intermediate level of English
Would be a plus:
- Experience managing RBAC on data warehouse
- Experience maintaining security on data warehouse (IPs whitelist, masking, sharing data between accounts/clusters, etc.)
· 33 views · 5 applications · 5d
Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦
We are Boosta, an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.
We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.
Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.
WHAT YOU'LL DO
- Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
- Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
- Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place.
- Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection, using tools like Great Expectations or similar (a minimal validation sketch follows this list).
- Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
- Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
- Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
- Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
- Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
- Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
- Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
- Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
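To make the validation bullet concrete, a stripped-down check might look like the sketch below. The project, dataset, table, and expected columns are invented for the example; in practice this is the kind of logic a framework such as Great Expectations or dbt tests would own.

```python
# Hypothetical data-quality sketch: schema drift, null, and duplicate checks on a BigQuery table.
from google.cloud import bigquery

EXPECTED_COLUMNS = {"click_id", "partner_id", "event_time", "payout"}  # assumed data contract

client = bigquery.Client()
table = client.get_table("example-project.affiliates.postbacks")  # placeholder table id

# Schema drift: compare the live schema against the agreed contract.
actual_columns = {field.name for field in table.schema}
missing_columns = EXPECTED_COLUMNS - actual_columns
unexpected_columns = actual_columns - EXPECTED_COLUMNS

# Null and duplicate checks with plain SQL.
row = next(iter(client.query("""
    SELECT
      COUNTIF(click_id IS NULL) AS null_click_ids,
      COUNT(*) - COUNT(DISTINCT click_id) AS duplicate_click_ids
    FROM `example-project.affiliates.postbacks`
""").result()))

print(missing_columns, unexpected_columns, row.null_click_ids, row.duplicate_click_ids)
```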
WHAT WE EXPECT FROM YOU
- Strong proficiency in SQL and Python.
- Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
- Proven ability to design, deploy, and scale ETL/ELT pipelines.
- Hands-on experience integrating and automating data from various platforms.
- Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
- Skilled in orchestration tools like Airflow, Prefect, or Dagster.
- Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
- Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
- Experience with Docker to build isolated and reproducible environments for data workflows.
- Exposure to iGaming data structures and KPIs is a strong advantage.
- Strong sense of data ownership, documentation, and operational excellence.
HOW IT WORKS
- Stage 1: pre-screen with a recruiter.
- Stage 2: test task.
- Stage 3: interview.
- Stage 4: bar-raising.
- Stage 5: reference check.
- Stage 6: job offer!
The trial period for this position is 3 months, during which we will get used to working together.
WHAT WE OFFER
- 28 business days of paid time off
- Flexible hours and the possibility to work remotely
- Medical insurance and mental health care
- Compensation for courses, trainings
- English classes and speaking clubs
- Internal library, educational events
- Outstanding corporate parties, teambuildings
· 44 views · 7 applications · 30d
Lead Data Engineer (AWS)
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate
We are looking for an experienced Lead Data Engineer to manage a team responsible for building and maintaining data pipelines using AWS and Pentaho Data Integration (PDI). The role involves designing, implementing, and optimizing ETL processes in a distributed cloud environment.
Key responsibilities:
- Lead a team of data engineers: task planning, deadline tracking, code review
- Design and evolve data flow architecture within the AWS ecosystem (S3, Glue, Redshift, Lambda, etc.)
- Develop and support ETL processes using Pentaho Data Integration
- Set up CI/CD pipelines for data workflows
- Optimize performance, monitor and debug ETL processes
- Collaborate closely with analytics, data science, and DevOps teams
- Implement best practices for Data Governance and Data Quality
Requirements:
- 5+ years of experience in data engineering
- Strong hands-on knowledge of AWS services: S3, Glue, Redshift, Lambda, CloudWatch, IAM
- Proven experience with Pentaho Data Integration (Kettle): building complex transformations and jobs
- Proficiency in SQL (preferably Redshift or PostgreSQL)
- Experience with Git and CI/CD tools (e.g., Jenkins, GitLab CI)
- Understanding of DWH, Data Lake, ETL/ELT concepts
- Solid Python skills
- Team leadership and project management experience
Nice to have:
- Experience with other ETL tools (e.g., Apache NiFi, Talend, Airflow)
- Experience migrating on-premises solutions to the cloud (especially AWS)
· 45 views · 6 applications · 30d
Senior Data Engineer
Full Remote · Poland · 5 years of experience · Upper-Intermediate
Description
Method is a global design and engineering consultancy founded in 1999. We believe that innovation should be meaningful, beautiful and human. We craft practical, powerful digital experiences that improve lives and transform businesses. Our teams based in New York, Charlotte, Atlanta, London, Bengaluru, and remote work with a wide range of organizations in many industries, including Healthcare, Financial Services, Retail, Automotive, Aviation, and Professional Services.
Method is part of GlobalLogic, a digital product engineering company. GlobalLogic integrates experience design and complex engineering to help our clients imagine what's possible and accelerate their transition into tomorrow's digital businesses. GlobalLogic is a Hitachi Group Company.
Your role is to collaborate with multidisciplinary individuals and support the project lead on data strategy and implementation projects. You will be responsible for data and systems assessment, identifying the critical data and quality gaps required for effective decision support, and contributing to the data platform modernization roadmap.
Responsibilities:
- Work closely with data scientists, data architects, business analysts, and other disciplines to understand data requirements and deliver accurate data solutions.
- Analyze and document existing data system processes to identify areas for improvement.
- Develop detailed process maps that describe data flow and integration across systems.
- Create a data catalog and document data structures across various databases and systems.
- Compare data across systems to identify inconsistencies and discrepancies.
- Contribute towards gap analysis and recommend solutions for standardizing data.
- Recommend data governance best practices to organize and manage data assets effectively.
- Propose database design standards and best practices to suit various downstream systems, applications, and business objectives
- Strong problem-solving abilities with meticulous attention to detail.
- Experience with requirements gathering and methodologies.
- Excellent communication and presentation skills with the ability to clearly articulate technical concepts, methodologies, and business impact to both technical teams and clients.
- A unique point of view. You are trusted to question approaches, processes, and strategy to better serve your team and clients.
Skills Required
Technical skills
- Proven experience (5+ years) in data engineering.
- 5+ years of proven data engineering experience with expertise in data warehousing, data management, and data governance in SQL or NoSQL databases.
- Deep understanding of data modeling, data architecture, and data integration techniques.
- Advanced proficiency in ETL/ELT processes and data pipeline development from raw, structured to business/analytics layers to support BI Analytics and AI/GenAI models.
- Hands-on experience with ETL tools, including: Databricks (preferred), Matillion, Alteryx, or similar platforms.
- Commercial experience with a major cloud platform like Microsoft Azure (e.g., Azure Data Factory, Azure Synapse, Azure Blob Storage).
Core Technology stack
Databases
- Oracle RDBMS (for OLTP): Expert SQL for complex queries, DML, DDL.
- Oracle Exadata (for OLAP/Data Warehouse): Advanced SQL optimized for analytical workloads. Experience with data loading techniques and performance optimization on Exadata.
Storage:
- S3-Compatible Object Storage (On-Prem): Proficiency with S3 APIs for data ingest, retrieval, and management (a short ingest sketch follows this technology-stack section).
Programming & Scripting:
- Python: Core language for ETL/ELT development, automation, and data manipulation.
- Shell Scripting (Linux/Unix): Bash/sh for automation, file system operations, and job control.
Version Control:
- Git: Managing all code artifacts (SQL scripts, Python code, configuration files).
Related Technologies & Concepts:
- Data Pipeline Orchestration Concepts: Understanding of scheduling, dependency management, monitoring, and alerting for data pipelines
- Containerization: Docker, basic understanding of how containerization works
- API Interaction: Understanding of REST APIs for data exchange (as they might need to integrate with the Java Spring Boot microservices).
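For the S3-compatible storage item above, a minimal ingest sketch could look like this; the endpoint, credentials, bucket, and key are placeholders rather than details of the actual on-prem setup.

```python
# Hypothetical sketch: read a CSV object from S3-compatible storage into pandas.
import io

import boto3
import pandas as pd

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.internal.example",  # placeholder on-prem S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

obj = s3.get_object(Bucket="raw-zone", Key="sales/2024-06-01.csv")
frame = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Light cleanup before handing the data to the warehouse loading step.
frame["amount"] = frame["amount"].astype(float)
print(frame.head())
```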
Location
- Remote across Poland
Why Method?
We look for individuals who are smart, kind and brave. Curious people with a natural ability to think on their feet, learn fast, and develop points of view for a constantly changing world find Method an exciting place to work. Our employees are excited to collaborate with dispersed and diverse teams that bring together the best in thinking and making. We champion the ability to listen and believe that critique and dissonance lead to better outcomes. We believe everyone has the capacity to lead and look for proactive individuals who can take and give direction, lead by example, enjoy the making as much as they do the thinking, especially at senior and leadership levels.
Next Steps
If Method sounds like the place for you, please submit an application. Also, let us know if you have a presence online with a portfolio, GitHub, Dribbble, or another platform.
* For information on how we process your personal data, please see Privacy: https://www.method.com/privacy/
· 46 views · 1 application · 27d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
Project Description:
The primary goal of the project is the modernization, maintenance and development of an eCommerce platform for a big US-based retail company, serving millions of omnichannel customers each week.
Solutions are delivered by several Product Teams focused on different domains - Customer, Loyalty, Search and Browse, Data Integration, Cart.
Current overriding priorities are onboarding new brands, re-architecture, database migrations, and migration of microservices to a unified cloud-native solution without any disruption to the business.
Responsibilities:
We are looking for a Data Engineer who will be responsible for designing a solution for a large retail company. The main focus is to support processing of big data volumes and to integrate the solution into the current architecture.
Mandatory Skills Description:
• Recent hands-on experience with Azure Data Factory and Synapse.
• Experience in leading a distributed team.
• Strong expertise in designing and implementing data models, including conceptual, logical, and physical data models, to support efficient data storage and retrieval.
• Strong knowledge of Microsoft Azure, including Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, and Azure Databricks, plus PySpark, for building scalable and reliable data solutions.
• Extensive experience with building robust and scalable ETL/ELT pipelines to extract, transform, and load data from various sources into data lakes or data warehouses.
• Ability to integrate data from disparate sources, including databases, APIs, and external data providers, using appropriate techniques such as API integration or message queuing.
• Proficiency in designing and implementing data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault).
• Proficiency in SQL to perform complex queries, data transformations, and performance tuning on cloud-based data storage.
• Experience integrating metadata and governance processes into cloud-based data platforms.
• Certification in Azure, Databricks, or other relevant technologies is an added advantage.
• Experience with cloud-based analytical databases.
• Experience with Azure MI, Azure Database for Postgres, Azure Cosmos DB, Azure Analysis Services, and Informix.
• Experience with Python and Python-based ETL tools.
• Experience with shell scripting in Bash, Unix, or Windows shell is preferable.
Nice-to-Have Skills Description:
• Experience with Elasticsearch
• Familiarity with containerization and orchestration technologies (Docker, Kubernetes).
• Troubleshooting and Performance Tuning: Ability to identify and resolve performance bottlenecks in data processing workflows and optimize data pipelines for efficient data ingestion and analysis.
• Collaboration and Communication: Strong interpersonal skills to collaborate effectively with stakeholders, data engineers, data scientists, and other cross-functional teams.
Languages:
- English: B2 Upper Intermediate
· 140 views · 10 applications · 27d
Junior AI Data Engineer to $1300
Full Remote · Ukraine · 2 years of experience · Intermediate
Junior AI Data Engineer
Location: Remote
Type: Full-time/Contract
Requirements
- 1–3 years of experience in a technical field (cybernetics, statistics, analytics, etc.)
- Strong knowledge of SQL and Python
- Familiarity with GitHub: branching, pull requests, version control
- Experience with API documentation and external integrations (REST, GraphQL)
- Interest in Machine Learning and proficiency with AI tools (ChatGPT, Copilot, LangChain, etc.)
- Basic frontend understanding (HTML/CSS/JS, JSON, API payloads)
- Understanding of data engineering tools (Airflow, Kafka, ETL/ELT concepts)
- Familiarity with cloud platforms (AWS preferred, GCP is a plus)
- Solid grasp of financial KPIs (LTV, retention, CAC)
- B1/B2 English proficiency
- Responsible, self-motivated, and proactive in learning
Responsibilities:
- Work with APIs to extract, transform, and load data (a small sketch follows this list)
- Build and support data pipelines (ETL/ELT) for analytics and ML workflows
- Connect frontend and backend systems by designing and maintaining pipelines that serve data to dashboards or apps
- Deploy and support ML models in production environments
- Create and modify PDFs using Python for reporting or client deliverables
- Monitor, maintain, and optimize cloud-based databases
- Interpret and implement API documentation for CRM and third-party tools
- Use GitHub for collaborative development and code reviews
- Track KPIs and generate actionable data insights
- Communicate directly with project managers: clear tasks, quick feedback, ownership encouraged
- Work flexibly: no fixed hours, just deadlines
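As a small, self-contained illustration of the API extract-transform-load work listed above: the endpoint, token, field names, and target table below are hypothetical.

```python
# Hypothetical sketch: page through a REST API, reshape the records, load them into Postgres.
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example-crm.com/v1/deals"      # placeholder third-party API
HEADERS = {"Authorization": "Bearer YOUR_TOKEN_HERE"}  # placeholder credentials

# Extract: collect all pages.
rows, page = [], 1
while True:
    resp = requests.get(API_URL, headers=HEADERS, params={"page": page}, timeout=30)
    resp.raise_for_status()
    batch = resp.json().get("data", [])
    if not batch:
        break
    rows.extend(batch)
    page += 1

# Transform: keep the fields analysts need and normalize types.
deals = pd.DataFrame(rows)[["id", "customer_id", "amount", "created_at"]]
deals["created_at"] = pd.to_datetime(deals["created_at"])

# Load: append into the analytics database.
engine = create_engine("postgresql://user:password@localhost:5432/analytics")  # placeholder DSN
deals.to_sql("crm_deals", engine, if_exists="append", index=False)
```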
About data212
data212 is a pragmatic, fast-moving start-up offering full-service analytics, engineering, and AI support. We help businesses turn data into growth: lean, sharp, and fully remote.
· 147 views · 24 applications · 26d
Data Engineer
Full Remote · Worldwide · Product · 3 years of experience · Pre-Intermediate
Responsibilities:
- Design and develop ETL pipelines using Airflow and Apache Spark for Snowflake and Trino
- Optimize existing pipelines and improve the Airflow framework
- Collaborate with analysts, optimize complex SQL queries, and help foster a strong data-driven culture
- Research and implement new data engineering tools and practices
Requirements:
- Experience with Apache Spark
- Experience with Airflow
- Proficiency in Python
- Familiarity with Snowflake and Trino is a plus
- Understanding of data architecture, including logical and physical data layers
- Strong SQL skills for analytical queries
- English proficiency at B1/B2 level
About the Project:
We're a fast-growing tech startup in the B2B marketing space, developing a next-generation platform for identifying and engaging target customers.
Our product combines artificial intelligence, big data, and proprietary de-anonymization tools to detect behavioral signals from potential buyers in real time and convert them into high-quality leads.
The team is building a solution that helps businesses identify "hot" prospects even before they express interest, making marketing and sales efforts highly targeted and personalized.