Jobs

  • 54 views · 3 applications · 30d

    Senior/Lead Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    Job Description

    WHAT WE VALUE

    Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important.

    We expect you to be good at and have had hands-on experience with the following:

    • Expert in T-SQL
    • Proficiency in Python
    • Experience with Microsoft cloud data services, including but not limited to Azure SQL and Azure Data Factory
    • Experience with Snowflake, star schema design, and data modeling; experience with migrations to Snowflake will be an advantage
    • Experience with, or a strong interest in, DBT (data build tool) for transformations, testing, validation, data quality, etc.
    • English: Upper-Intermediate

    On top of that, it would be an advantage to have knowledge of or interest in the following:

    • Some proficiency in C# .NET
    • Security-first mindset, with knowledge of how to implement row-level security, etc.
    • Agile development methodologies and DevOps / DataOps practices such as continuous integration, continuous delivery, and continuous deployment. For example, automated DB validations and deployment of DB schema using DACPAC.

    As a person, you have following traits:

    • Strong collaborator with teammates and stakeholders
    • Clear communicator who speaks up when needed.

    Job Responsibilities

    WHAT YOU WILL BE RESPONSIBLE FOR

    Ensure quality in our data solutions so that we maintain good data quality across multiple customer tenants every time we release.

    Work together with the Product Architect on defining and refining the data architecture and roadmap.

    Facilitate the migration of our current data platform towards a more modern tool stack that can be maintained more easily by both data engineers and software engineers.

    Ensure that new data entities get implemented in the data model using schemas that are appropriate for their use, facilitating good performance and analytics needs.

    Guide and support people in other roles (engineers, testers, etc.) to spread data knowledge and experience more broadly across the team.

    Department/Project Description

    WHO WE ARE

    For over 50 years, we have worked closely with investment and asset managers to become the world's leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educational backgrounds, professional experiences, and ages.

    SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients.

    SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.

     

    WHY THIS ROLE IS IMPORTANT TO US

    You will be joining an innovative application development team within SimCorp's Product Division. As a primary provider of SaaS offerings based on next-generation technologies, our Digital Engagement Platform is a cloud-native data application developed on Azure, utilizing SRE methodologies and continuous delivery. Your contribution to evolving DEP's data platform will be vital in ensuring we can scale to future customer needs and support future analytics requirements. Our future growth as a SaaS product is rooted in a cloud-native strategy that emphasizes adopting a modern data platform tool stack and the application of modern engineering principles as essential components.

    We are looking into a technology shift from Azure SQL to Snowflake in order to meet new client demands for scalability. You will be an important addition to the team in achieving this goal.

  • 48 views · 7 applications · 29d

    Senior Data Engineer (FinTech Project)

    Full Remote · EU · 4.5 years of experience · Upper-Intermediate

    Company Description

    We are looking for a Senior Data Engineer to join our Data Center of Excellence, part of Sigma Software's complex organizational structure, which combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment.

    CUSTOMER

    Our client is one of Europe's fastest-growing FinTech innovators, revolutionizing how businesses manage their financial operations. They offer an all-in-one platform that covers everything from virtual cards and account management to wire transfers and spend tracking. As a licensed payment institution, the client seamlessly integrates core financial services into their product, enabling companies to streamline their financial workflows with speed and security.

    PROJECT

    You will join a dynamic team driving the evolution of a high-performance data platform that supports real-time financial operations and analytics. The project focuses on building scalable data infrastructure that will guarantee accuracy, reliability, and compliance across multiple financial products and services. 

    Job Description

    • Collaborate with stakeholders to identify business requirements and translate them into technical specifications 
    • Design, build, monitor, and maintain data pipelines in production, including complex pipelines (Airflow, Python, event-driven systems) 
    • Develop and maintain ETL processes for ingesting and transforming data from various sources 
    • Monitor and troubleshoot infrastructure (e.g., Kubernetes, Terraform) issues, including data quality, ETL processes, and cost optimization
    • Collaborate closely with analytics engineers on CI and infrastructure management 
    • Drive the establishment and maintenance of the highest coding standards and practices, ensuring the development of efficient, scalable, and reliable data pipelines and systems 
    • Participate in data governance initiatives to ensure data accuracy and integrity 
    • Actively participate in the data team's routines and enhancement plans 
    • Stay up to date with the latest developments in data technology and provide recommendations for improving our analytics capabilities 

    Qualifications

    • At least 5 years of experience in data engineering or software engineering with a strong focus on data infrastructure 
    • Hands-on experience in AWS (or equivalent cloud platforms like GCP) and data analytics services 
    • Strong proficiency in Python and SQL  
    • Good understanding of database design, optimization, and maintenance (using DBT)
    • Strong experience with data modeling, ETL processes, and data warehousing 
    • Familiarity with Terraform and Kubernetes 
    • Expertise in developing and managing large-scale data flows efficiently 
    • Experience with job orchestrators or scheduling tools like Airflow 
    • At least an Upper-Intermediate level of English 

    Would be a plus: 

    • Experience managing RBAC in a data warehouse
    • Experience maintaining data warehouse security (IP whitelisting, masking, sharing data between accounts/clusters, etc.)


  • 54 views · 6 applications · 29d

    Senior Big Data Engineer (Python)

    Full Remote · Ukraine, Poland, Spain, Portugal, Bulgaria · Product · 6 years of experience · Upper-Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product:

    Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.

     

    About the Role:

    As a data engineer, you'll have end-to-end ownership, from system architecture and software development to operational excellence.

     

    Key Responsibilities:

    • Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
    • Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
    • Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
    • Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
    • Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.

     

    Required Competence and Skills:
    To excel in this role, candidates should possess the following qualifications and experiences:

    • A Bachelor's degree or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.
    • At least 5 years of experience as a data engineer, software engineer, or in a similar role, using data to drive business results.
    • At least 5 years of experience with Python, building modular, testable, and production-ready code.
    • Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
    • Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
    • A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
    • Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.

     

    Nice-to-Have:

    • Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
    • Familiarity with API development frameworks (e.g., FastAPI).
    • Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
    • Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

    We provide full accounting and legal support in all countries where we operate.

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 55 views · 1 application · 29d

    Senior Data Engineer (FinTech Project)

    Full Remote · EU · 5 years of experience · Upper-Intermediate

    We are looking for a Senior Data Engineer to join our Data Center of Excellence, part of Sigma Software's complex organizational structure, which combines collaboration with diverse clients, challenging projects, and continuous opportunities to enhance your expertise in a collaborative and innovative environment.
     

    Customer

    Our client is one of Europe's fastest-growing FinTech innovators, revolutionizing how businesses manage their financial operations. They offer an all-in-one platform that covers everything from virtual cards and account management to wire transfers and spend tracking. As a licensed payment institution, the client seamlessly integrates core financial services into their product, enabling companies to streamline their financial workflows with speed and security.
     

    Project

    You will join a dynamic team driving the evolution of a high-performance data platform that supports real-time financial operations and analytics. The project focuses on building scalable data infrastructure that will guarantee accuracy, reliability, and compliance across multiple financial products and services.

     

    Requirements:

    • At least 5 years of experience in data engineering or software engineering with a strong focus on data infrastructure
    • Hands-on experience in AWS (or equivalent cloud platforms like GCP) and data analytics services
    • Strong proficiency in Python and SQL
    • Good understanding of database design, optimization, and maintenance (using DBT)
    • Strong experience with data modeling, ETL processes, and data warehousing
    • Familiarity with Terraform and Kubernetes
    • Expertise in developing and managing large-scale data flows efficiently
    • Experience with job orchestrators or scheduling tools like Airflow
    • At least an Upper-Intermediate level of English
       

    Would be a plus:

    • Experience managing RBAC in a data warehouse
    • Experience maintaining data warehouse security (IP whitelisting, masking, sharing data between accounts/clusters, etc.)

     

    Responsibilities:

    • Collaborate with stakeholders to identify business requirements and translate them into technical specifications
    • Design, build, monitor, and maintain data pipelines in production, including complex pipelines (Airflow, Python, event-driven systems)
    • Develop and maintain ETL processes for ingesting and transforming data from various sources
    • Monitor and troubleshoot infrastructure (e.g., Kubernetes, Terraform) issues, including data quality, ETL processes, and cost optimization
    • Collaborate closely with analytics engineers on CI and infrastructure management
    • Drive the establishment and maintenance of the highest coding standards and practices, ensuring the development of efficient, scalable, and reliable data pipelines and systems
    • Participate in data governance initiatives to ensure data accuracy and integrity
    • Actively participate in the data team's routines and enhancement plans
    • Stay up to date with the latest developments in data technology and provide recommendations for improving our analytics capabilities
  • 84 views · 18 applications · 28d

    Data Engineer

    Full Remote · Worldwide · 4 years of experience · Upper-Intermediate

    We are Uvik Software, a successful software development company with a global presence on the world market, working with some of the world's most successful companies.
     

    We seek a highly skilled and autonomous 🟣 Data Engineer 🟣 to join our dynamic team. This role requires a blend of technical expertise, creative problem-solving, and leadership to drive projects from concept to deployment.
     

    💻 Key Responsibilities:
     

    • Develop and implement robust data models and software architectures.
    • Utilize Python and advanced ML libraries to build and deploy AI systems.
    • Engage in machine learning engineering, particularly in NLP/NLU and language model development using platforms like GPT.
    • Stay abreast of current trends in AI, including MLLMs, AI Agents, and RAG technologies.
    • Lead and guide teams through the project lifecycle to meet strategic business goals.
       

    Qualifications:
     

    • Profound knowledge of data structures, data modelling, and software architecture principles.
    • Expertise in Python.
    • Proven track record in the engineering and deployment of AI systems.
    • Strong interest and experience in NLP/NLU and developing language models.
    • Familiarity with major cloud platforms including GCP, Azure, and AWS.
    • Excellent problem-solving, communication, and leadership skills.
       

    Nice-to-Have:
     

    • Experience in startup environments, ideally scaling new ventures from ground zero.
    • Hands-on experience with major ML libraries.
    • Active engagement with the AI community, whether through research, presentations, or contributions to open-source projects.
    • Experience with innovative interfaces like SMS apps, browser extensions, or interactive modules.
    • Technical proficiency in React/Next.js, FastAPI, MongoDB, and Marqo AI Vector DB.

      We offer:
       

    βœ”οΈ12 sick leaves and 18 paid vacation business days per year

    βœ”οΈComfortable work conditions (including MacBook Pro and Dell monitor in each workplace)

    βœ”οΈSmart environment

    βœ”οΈInteresting projects from renowned clients

    βœ”οΈFlexible work schedule

    βœ”οΈCompetitive salary according to the qualifications

    βœ”οΈGuaranteed full workload during the term of the contract

    βœ”οΈCorporate leisure activities

    βœ”οΈGame, lounge, sports zones.

     

  • 172 views · 45 applications · 27d

    Middle Python / Data Engineer

    Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate

    Involvement: ~15–20 hours/week
    Start Date: ASAP
    Location: Remote
    Client: USA-based
    Project: Legal IT – AI-powered legal advisory platform

     

    About the Project

    Join a growing team behind Legal IT, an intelligent legal advisory platform that simplifies legal support for businesses. The platform features:

    - A robust contract library

    - AI-assisted document generation & guidance

    - Interactive legal questionnaires

    - A dynamic legal blog with curated insights

     

    We’re building out advanced AI-driven proof-of-concepts (PoCs) and are looking for a strong Python/Data Engineer to support the backend logic and data pipelines powering these tools.

     

    Core Responsibility

    - Collaborate directly with the AI Architect to develop and iterate on proof-of-concept features with ongoing development

     

    Being a part of 3asoft means having:
    - High level of flexibility and freedom
    - p2p relationship with worldwide customers
    - Competitive compensation paid in USD
    - Fully remote working

  • 35 views · 1 application · 26d

    Senior Data Engineer

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.

     

    Key Responsibilities:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources.
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
    • Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
    • Contribute to the development of best practices and standards for data engineering processes.
    • Stay updated on emerging technologies and trends in the data engineering landscape.

     

    Required Skills and Qualifications:

    • Bachelor's Degree in Computer Science or related field.
    • Minimum of 5 years of experience in tech lead data engineering or architecture roles.
    • Proficiency in Python and PySpark for ETL development and data processing.
    • At least 2 years of experience with the AWS cloud
    • Extensive experience with cloud-based data platforms, particularly EMR.
    • Must have knowledge of Spark.
    • Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
    • Leadership experience, with a proven track record of leading data engineering teams.

     

    Benefits

     

    • 20 days of paid vacation, 5 sick leave days
    • National holidays observed
    • Company-provided laptop

     

     

  • 62 views · 2 applications · 26d

    Middle Data Support Engineer (Python, SQL)

    Ukraine · 3 years of experience · Upper-Intermediate

    N-iX is looking for a Middle Data Support Engineer to join our team. Our customer is the leading school transportation solutions provider in North America. Every day, the company completes 5 million student journeys, moving more passengers than all U.S. airlines combined, and delivers reliable, quality services, including full-service transportation and management, special-needs transportation, route optimization and scheduling, maintenance, and charter services for 1,100 school district contracts.

     

    Responsibilities:

    • Provide support in production and non-production environments (Azure cloud);
    • Install, configure, and provide day-to-day support after implementation, including off-hours as needed;
    • Troubleshoot defects and errors, and resolve arising problems;
    • Plan, test, and implement server upgrades, maintenance fixes, and vendor-supplied patches;
    • Help in resolving incidents;
    • Monitor ETL jobs;
    • Perform small enhancements (Azure/SQL).

       

    Requirements:

    • Proven knowledge and 3+ years of experience in Python;
    • Proficiency in RDBMS systems (MS SQL experience is a plus);
    • Experience with Azure cloud services;
    • Understanding of Azure Data Lake / Storage Accounts;
    • Experience in creation and managing data pipelines in Azure Data Factory;
    • Upper Intermediate/Advanced English level.

       

    Nice to have:

    • Experience with administration of Windows Server 2012 and higher;
    • Experience with AWS, Snowflake, Power BI;
    • Experience with technical support;
    • Experience in .Net.

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 44 views · 5 applications · 26d

    Data engineer (relocation to Berlin)

    Office Work · Germany · 5 years of experience · Upper-Intermediate

    At TechBiz Global, we provide recruitment services to our TOP clients from our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

     

    About the Data Solution Team

    As a Data Engineer, you will join our Data Solution Team, which drives our data-driven innovation. The team is pivotal to powering our business processes and enhancing customer experiences through effective data utilization. Our focus areas include:
     

    ● Developing integrations between systems.

    ● Analyzing customer data to derive actionable insights.

    ● Improving customer experience by leveraging statistical and machine learning models.

    Our tech stack includes:

    ● Cloud & Infrastructure: AWS (S3, EKS, Quicksight, and monitoring tools).

    ● Data Engineering & Analytics: Apache Spark (Scala and PySpark on Databricks), Apache Kafka (Confluent Cloud).

    ● Infrastructure as Code: Terraform.

    ● Development & Collaboration: BitBucket, Jira.

    ● Integration Tools & APIs: Segment.io, Blueshift, Zendesk, Google Maps API, and other external systems

     

    Job requirements

    As A Data Engineer, You Will:

    ● Design, build, and own near-time and batch data processing workflows (a minimal illustrative sketch follows this list).

    ● Develop efficient, low-latency data pipelines and systems.

    ● Maintain high data quality while ensuring GDPR compliance.

    ● Analyze customer data and extract insights to drive business decisions.

    ● Collaborate with Product, Backend, Marketing, and other teams to deliver impactful features.

    ● Help data scientists deliver ML/AI solutions.
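
    To make the first bullet above concrete, here is a minimal, purely illustrative PySpark batch sketch; the bucket paths, column names, and aggregation are hypothetical placeholders rather than details of the client's stack:

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical locations; the team's stack reads from AWS S3.
    INPUT_PATH = "s3a://example-bucket/events/date=2024-01-01/"
    OUTPUT_PATH = "s3a://example-bucket/aggregates/daily/"

    spark = SparkSession.builder.appName("daily_batch_example").getOrCreate()

    # Read raw events, drop rows without a customer id, and aggregate per customer.
    events = spark.read.json(INPUT_PATH)
    daily = (
        events.filter(F.col("customer_id").isNotNull())
        .groupBy("customer_id")
        .agg(
            F.count("*").alias("event_count"),
            F.countDistinct("session_id").alias("sessions"),
        )
    )

    # Write partitioned Parquet for downstream analytics and BI tools.
    daily.write.mode("overwrite").parquet(OUTPUT_PATH)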

     

    Requirements:

    ● 5+ years of experience as a Data Engineer, with expertise in Apache Spark using Python and Scala.

    ● 3+ years of experience with Apache Kafka.

    ● Management experience or Tech Lead experience

    ● Strong proficiency in SQL.

    ● Experience with CI/CD processes and platforms.

    ● Hands-on experience with cloud technologies such as AWS, GCP or Azure.

    ● Familiarity with Terraform.

    ● Comfortable working in an agile environment.

    ● Excellent problem-solving and self-learning skills, with the ability to operate both independently and as part of a team.

     

    Nice to have:

    ● Hands-on experience with Databricks.

    ● Experience with document databases, particularly Amazon DocumentDB.

    ● Familiarity with handling high-risk data.

    ● Exposure to BI tools such as AWS Quicksight or Redash.

    ● Work experience in a Software B2C company, especially in the FinTech industry.

     

    What we offer:

    Our goal is to set up a great working environment. Become part of the process and:

    ● Shape the future of our organization as part of the international founding team.

    ● Take on responsibility from day one.

    ● Benefit from various coaching and training opportunities, including a Sports Subscription, German classes, and a €1,000 yearly self-development budget.

    ● Work in a hybrid working model from the comfortable Berlin office

    ● Enjoy a modern workplace in the heart of Berlin with drinks, fresh fruit, kicker and ping pong

  • 44 views · 1 application · 25d

    Data Engineer

    Hybrid Remote · Slovakia · 4 years of experience · Upper-Intermediate

    Now is an amazing time to join our company as we continue to empower innovators to change the world. We provide top-tier technology consulting, R&D, design, and software development services across the USA, UK, and EU markets. And this is where you come in!

    We are looking for a Skilled Data Engineer to join our team.

    About the Project

    We're launching a Snowflake Proof of Concept (PoC) for a leading football organization in Germany. The project aims to demonstrate how structured and well-managed data can support strategic decision-making in the sports domain.

    Key Responsibilities

    • Define data scope and identify data sources
    • Design and build the data architecture
    • Implement ETL pipelines into a data lake (a minimal illustrative load sketch follows this list)
    • Ensure data quality and consistency
    • Collaborate with stakeholders to define analytics needs
    • Deliver data visualizations using Power BI
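
    As a purely illustrative aside, one very simplified Snowflake load step of the kind this PoC's ETL work might involve could look roughly like the snowflake-connector-python sketch below; every account, table, and file name is a hypothetical placeholder:

    import snowflake.connector

    # All identifiers below are made up for illustration.
    conn = snowflake.connector.connect(
        account="xy12345.eu-central-1",
        user="POC_USER",
        password="***",
        warehouse="POC_WH",
        database="FOOTBALL_POC",
        schema="RAW",
    )

    cur = conn.cursor()
    try:
        # Stage a local CSV extract into the table stage, then load it (simplified ELT step).
        cur.execute("PUT file:///tmp/match_stats.csv @%MATCH_STATS OVERWRITE = TRUE")
        cur.execute(
            "COPY INTO MATCH_STATS FROM @%MATCH_STATS "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
        # Basic row-count check, the kind of data-quality signal the PoC would report on.
        cur.execute("SELECT COUNT(*) FROM MATCH_STATS")
        print("rows loaded:", cur.fetchone()[0])
    finally:
        cur.close()
        conn.close()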

    Required Skills

    • Strong experience with Snowflake, ETL pipelines, and data lakes
    • Power BI proficiency
    • Knowledge of data architecture and modeling
    • Data quality assurance expertise
    • Solid communication in English (B2+)

    Nice to Have

    • Familiarity with GDPR
    • Experience in sports or media-related data projects
    • Experience with short-term PoCs and agile delivery

    What We Offer

    • Contract for the PoC phase with potential long-term involvement
    • All cloud resources and licenses provided by the client
    • Hybrid/onsite work in Bratislava
    • Opportunity to join a meaningful data-driven sports project with European visibility

    📬 Interested? Send us your CV and hourly rate (EUR).

    We're prioritizing candidates based in Bratislava or elsewhere in Europe.

    Interview Process:

    1️⃣ Internal technical interview
    2️⃣ Interview with the client

  • 63 views · 6 applications · 24d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate Ukrainian Product 🇺🇦

    We are Boosta, an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.

    We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.

    Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.

     

    WHAT YOU’LL DO

    • Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
    • Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
    • Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place (a minimal illustrative sketch follows this list).
    • Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection using tools like Great Expectations.
    • Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
    • Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
    • Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
    • Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
    • Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
    • Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
    • Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
    • Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
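
    A minimal, purely illustrative Airflow sketch of the retry-and-alerting pattern mentioned in the list above; the DAG id, schedule, and task bodies are hypothetical placeholders, not details of Boosta's pipelines (assumes Airflow 2.x):

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest_affiliate_data(**context):
        # Hypothetical placeholder: pull partner postback/API data and stage it in BigQuery.
        pass

    def validate_affiliate_data(**context):
        # Hypothetical placeholder: fail if null or duplicate counts exceed a threshold,
        # which triggers the retry and alerting policy configured below.
        pass

    default_args = {
        "retries": 2,                         # retry mechanism
        "retry_delay": timedelta(minutes=10),
        "email_on_failure": True,             # minimal alerting hook
    }

    with DAG(
        dag_id="affiliate_ingestion_example",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        ingest = PythonOperator(task_id="ingest", python_callable=ingest_affiliate_data)
        validate = PythonOperator(task_id="validate", python_callable=validate_affiliate_data)
        ingest >> validate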

       

    WHAT WE EXPECT FROM YOU

    • Strong proficiency in SQL and Python.
    • Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
    • Proven ability to design, deploy, and scale ETL/ELT pipelines.
    • Hands-on experience integrating and automating data from various platforms.
    • Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
    • Skilled in orchestration tools like Airflow, Prefect, or Dagster.
    • Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
    • Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
    • Experience with Docker to build isolated and reproducible environments for data workflows.
    • Exposure to iGaming data structures and KPIs is a strong advantage.
    • Strong sense of data ownership, documentation, and operational excellence.

       

    HOW IT WORKS

    • Stage 1: pre-screen with a recruiter.
    • Stage 2: test task.
    • Stage 3: interview.
    • Stage 4: bar-raising.
    • Stage 5: reference check.
    • Stage 6: job offer!

    The trial period for this position is 3 months, during which we will get used to working together.

     

    WHAT WE OFFER

    • 28 business days of paid time off
    • Flexible hours and the possibility to work remotely
    • Medical insurance and mental health care
    • Compensation for courses and trainings
    • English classes and speaking clubs
    • Internal library, educational events
    • Outstanding corporate parties and team buildings

     

  • 49 views · 1 application · 24d

    Data Engineer 2070/06 to $5500

    Office Work · Poland · 3 years of experience · Upper-Intermediate

    Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such as Walmart, Barclaycard, and Ford.

     

    The company has expanded to over 700 employees, with 15 global offices spanning four continents. With the imminent opening of a new office in Warsaw, we are seeking experienced Data Engineers to join their expanding team.

     

    The Data Engineer will be responsible for developing, designing, and maintaining end-to-end optimized, scalable Big Data pipelines for our products and applications. In this role, you will collaborate closely with team leads across various departments and receive support from peers and experts across multiple fields.

     

    Opportunities:

     

    • Possibility to work in a successful company
    • Career and professional growth
    • Competitive salary
    • Hybrid work model (3 days per week work from office space in the heart of Warsaw city)
    • Long-term employment with 20 working days of paid vacation, sick leaves, and national holidays

     

    Responsibilities:

     

    • Follow and promote best practices and design principles for Big Data ETL jobs
    • Help in technological decision-making for the business's future data management and analysis needs by conducting POCs
    • Monitor and troubleshoot performance issues on data warehouse/lakehouse systems
    • Provide day-to-day support of data warehouse management
    • Assist in improving data organization and accuracy
    • Collaborate with data analysts, scientists, and engineers to ensure best practices in terms of technology, coding, data processing, and storage technologies
    • Ensure that all deliverables adhere to our world-class standards

     

    Skills:

     

    • 3+ years of overall experience in Data Warehouse development and database design
    • Deep understanding of distributed computing principles
    • Experience with AWS cloud platform, and big data platforms like EMR, Databricks, EC2, S3, Redshift
    • Experience with Spark, PySpark, Hive, Yarn, etc.
    • Experience in SQL and NoSQL databases, as well as experience with data modeling and schema design
    • Proficiency in programming languages such as Python for implementing data processing algorithms and workflows
    • Experience with Presto and Kafka is a plus
    • Experience with DevOps practices and tools for automating deployment, monitoring, and management of big data applications is a plus
    • Excellent communication, analytical, and problem-solving skills
    • Knowledge of scalable service architecture
    • Experience in scalable data processing jobs on high-volume data
    • Self-starter, proactive, and able to work to deadlines
    • Nice to have: Experience with Scala

     

    If you are looking for an environment where you can grow professionally, learn from the best in the field, balance work and life, and enjoy a pleasant and enthusiastic atmosphere, submit your CV today and become part of our team!

    Everything you do will help us lead the programmatic industry and make it better.

  • 54 views · 6 applications · 23d

    Consultant Data Engineer (Python/Databricks)

    Part-time · Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Softermii is looking for a part-time Data Engineering Consultant / Tech Lead who will do technical interviews, assist with upcoming projects, and occasionally be hands-on with complex development tasks, including data pipeline design and solution optimization on Databricks.

     


    Type of cooperation: Part-time

     

    ⚡️ Your responsibilities on the project will be:

    • Interview and hire Data Engineers
    • Supervise the work of other engineers and be hands-on with the most complicated tasks from the backlog, focusing on unblocking other data engineers in case of technical difficulties
    • Develop and maintain scalable data pipelines using Databricks (Apache Spark) for batch and streaming use cases.
    • Work with data scientists and analysts to provide reliable, performant, and well-modeled data sets for analytics and machine learning.
    • Optimize and manage data workflows using Databricks Workflows and orchestrate jobs for complex data transformation tasks.
    • Design and implement data ingestion frameworks to bring data from various sources (files, APIs, databases) into Delta Lake.
    • Ensure data quality, lineage, and governance using tools such as Unity Catalog, Delta Live Tables, and built-in monitoring features.
    • Collaborate with cross-functional teams to understand data needs and support production-grade machine learning workflows.
    • Apply data engineering best practices: versioning, testing (e.g., with pytest or dbx), documentation, and CI/CD pipelines



     

    🕹 Tools we use: Jira, Confluence, Git, Figma

     

    🗞 Our requirements for you:

    • 5+ years of experience in data engineering or big data development, with production-level work.
    • Architect and develop scalable data solutions on the Databricks platform, leveraging Apache Spark, Delta Lake, and the lakehouse architecture to support advanced analytics and machine learning initiatives.
    • Design, build, and maintain production-grade data pipelines using Python (or Scala) and SQL, ensuring efficient data ingestion, transformation, and delivery across distributed systems.
    • Lead the implementation of Databricks features such as Delta Live Tables, Unity Catalog, and Workflows to ensure secure, reliable, and automated data operations.
    • Optimize Spark performance and resource utilization, applying best practices in distributed computing, caching, and tuning for large-scale data processing.
    • Integrate data from cloud-based sources (e.g., AWS S3), ensuring data quality, lineage, and consistency throughout the pipeline lifecycle.
    • Manage orchestration and automation of data workflows using tools like Airflow or Databricks Jobs, while implementing robust CI/CD pipelines for code deployment and testing.
    • Collaborate cross-functionally with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights through robust data infrastructure.
    • Mentor and guide junior engineers, promoting engineering best practices, code quality, and continuous learning within the team.
    • Ensure adherence to data governance and security policies, utilizing tools such as Unity Catalog for access control and compliance.
    • Continuously evaluate new technologies and practices, driving innovation and improvements in data engineering strategy and execution.
    • Experience in designing, building, and maintaining data pipelines using Apache Airflow, including DAG creation, task orchestration, and workflow optimization for scalable data processing.
    • Upper-Intermediate English level.

     

     

    πŸ‘¨β€πŸ’»Who will you have the opportunity to meet during the hiring process (stages):
    Call, HR, Tech interview, PM interview.

     

    🥯 What we can offer you:

    • We have stable and highly functioning processes: everyone has their own role and clear responsibilities, so decisions are made quickly and without unnecessary approvals.
    • You will have enough independence to make decisions that can affect not only the project but also the work of the company.
    • We are a team of like-minded experts who create interesting products during working hours and enjoy spending free time together.
    • Do you like to learn something new in your profession or do you want to improve your English? We will be happy to pay 50% of the cost of courses/conferences/speaking clubs.
    • Do you want an individual development plan? We will form one especially for you + you can count on mentoring from our seniors and leaders.
    • Do you have a friend who is currently looking for new job opportunities? Recommend them to us and get a bonus.
    • And what if you want to relax? Then we have 21 working days off.
    • What if you are feeling bad? You can take 5 sick leaves a year.
    • Do you want to volunteer? We will add you to a chat, where we can get a bulletproof vest, buy a pickup truck or send children's drawings to the front.
    • And we have the most empathetic HRs (who also volunteer!). So we are ready to support your well-being in various ways.

     

    πŸ‘¨β€πŸ«A little more information that you may find useful:

    - our adaptation period lasts 3 months, which is enough time for us to understand each other better;

    - there is a performance review after each year of our collaboration where we use a skills map to track your growth;

    - we really have no boundaries in the truest sense of the word: our working day is flexible and up to you.

     

    Of course, we have a referral bonus system.

  • 132 views · 36 applications · 19d

    Middle+ Data Engineer

    Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate

    Start Date: ASAP
    Weekly Hours: ~15–20 hours
    Location: Remote
    Client: USA-based LegalTech Platform

     

    About the Project

    Join a growing team working on an AI-powered legal advisory platform designed to simplify and streamline legal support for businesses. The platform includes:

    • A robust contract library
    • AI-assisted document generation and guidance
    • Interactive legal questionnaires
    • A dynamic legal insights blog

       

    We're currently developing a Proof of Concept (PoC) for an advanced AI agent and are looking for a skilled Python/Data Engineer to support core backend logic and data workflows.

     

    Your Core Responsibilities

    • Design and implement ETL/ELT pipelines in the context of LLMs and AI agents (a minimal illustrative sketch follows this list)
    • Collaborate directly with the AI Architect on PoC features and architecture
    • Contribute to scalable, production-ready backend systems for AI components
    • Handle structured and unstructured data processing
    • Support data integrations with vector databases and AI model inputs
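
    A purely illustrative sketch of the kind of pipeline meant in the first bullet above; the embed() stub and the pandas DataFrame are hypothetical stand-ins for a real embedding model and a vector database from the must-have list below:

    import pandas as pd

    def embed(text: str) -> list[float]:
        # Hypothetical stand-in for a real embedding model call.
        return [len(text) / 1000.0]

    def chunk(text: str, size: int = 500) -> list[str]:
        # Naive fixed-size chunking of a contract or other legal document.
        return [text[i:i + size] for i in range(0, len(text), size)]

    def build_index(documents: dict[str, str]) -> pd.DataFrame:
        # ETL step: split each document into chunks and attach a vector to every chunk.
        rows = []
        for doc_id, text in documents.items():
            for n, piece in enumerate(chunk(text)):
                rows.append({"doc_id": doc_id, "chunk": n, "text": piece, "vector": embed(piece)})
        return pd.DataFrame(rows)  # in production this would be an upsert into the vector DB

    if __name__ == "__main__":
        print(build_index({"nda-template": "This Non-Disclosure Agreement ... " * 50}).head())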

     

    Must-have experience with:

    • Python (3+ years)
    • FastAPI
    • ETL / ELT pipelines
    • Vector Databases (e.g., Pinecone, Weaviate, Qdrant)
    • pandas, numpy, unstructured.io
    • Working with transformers and LLM-adjacent tools

     

    Being a part of 3asoft means having:
    - High level of flexibility and freedom
    - p2p relationship with worldwide customers
    - Competitive compensation paid in USD
    - Fully remote working

  • 37 views · 5 applications · 19d

    Data Engineer (Azure stack)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate

    Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client's platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

    Key Responsibilities:
    - Create and manage scalable data pipelines with Azure SQL and other databases.
    - Use Azure Data Factory to automate data workflows.
    - Write efficient Python code for data analysis and processing.
    - Use Docker for application containerization and deployment streamlining.
    - Manage code quality and version control with Git.

    Skills Requirements:
    - 3+ years of experience with Python.
    - 2+ years of experience as a Data Engineer.
    - Strong SQL knowledge, preferably with Azure SQL experience.
    - Python skills for data manipulation.
    - Expertise in Docker for app containerization.
    - Familiarity with Git for managing code versions and collaboration.
    - Upper-intermediate level of English.

    Optional Skills (as a plus):
    - Experience with Azure Data Factory for orchestrating data processes.
    - Experience developing APIs with FastAPI or Flask.
    - Proficiency in Databricks for big data tasks.
    - Experience in a dynamic, agile work environment.
    - Ability to manage multiple projects independently.
    - Proactive attitude toward continuous learning and improvement.

    We offer:

    - Great networking opportunities with international clients, challenging tasks;

    - Building interesting projects from scratch using new technologies;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities and corporate events.
