Jobs

109
  • · 28 views · 7 applications · 5d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate

    We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

    Does this relate to you?

    • 5+ years of experience in Data Engineering or a related field
    • Strong expertise in SQL and data modeling concepts
    • Hands-on experience with Airflow
    • Experience working with Redshift
    • Proficiency in Python for data processing
    • Strong understanding of data governance, security, and compliance
    • Experience in implementing CI/CD pipelines for data workflows
    • Ability to work independently and collaboratively in an agile environment
    • Excellent problem-solving and analytical skills

     

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions
    • Build and optimize ETL/ELT pipelines for efficient data integration
    • Design and implement data models to support analytical and reporting needs
    • Ensure data integrity, quality, and security across all pipelines
    • Optimize data performance and scalability using best practices
    • Work with big data technologies such as Redshift (a minimal orchestration sketch follows this list)
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
    • Implement CI/CD pipelines for data workflows
    • Monitor, troubleshoot, and improve data processes and system performance
    • Stay updated with industry trends and emerging technologies in data engineering
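
    As a rough illustration of the Airflow-plus-Redshift workflow described in the list above (a hedged sketch with placeholder connection details, paths, and table names, not code from this employer):

    from datetime import datetime, timedelta
    import psycopg2
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def copy_events_to_redshift(**context):
        # Placeholder connection and COPY statement; a real pipeline would pull
        # credentials from a secrets backend or use the Amazon provider operators.
        conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                                port=5439, dbname="analytics", user="etl", password="***")
        with conn, conn.cursor() as cur:
            cur.execute("""
                COPY staging.events
                FROM 's3://example-bucket/events/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/etl-role'
                FORMAT AS PARQUET;
            """)

    with DAG(
        dag_id="daily_events_elt",                 # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                         # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        PythonOperator(task_id="copy_events", python_callable=copy_events_to_redshift)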

     

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program with attractive bonuses
    • External & internal training and IT certifications
  • · 11 views · 0 applications · 4d

    Senior/Tech Lead Data Engineer

    Hybrid Remote · Poland, Ukraine (Kyiv, Lviv) · 5 years of experience · Upper-Intermediate

    Quantum is a global technology partner delivering high-end software products that address real-world problems. 

    We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.

    Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.

    We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn't yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!

     

    About the position

    Quantum is expanding the team and has brilliant opportunities for a Data Engineer. As a Senior/Tech Lead Data Engineer, you will be pivotal in designing, implementing, and optimizing data platforms. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.

    The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become "super-investors."

    Through our generative AI technology implemented into brokerage platforms and other financial institutions' infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.

     

    Must have skills:

    • Bachelor's Degree in Computer Science or related field
    • At least 5 years of experience in Data Engineering
    • Proven experience as a Tech Lead or Architect in data-focused projects, leadership skills, and experience managing or mentoring data engineering teams
    • Strong proficiency in Python and PySpark for building ETL pipelines and large-scale data processing
    • Deep understanding of Apache Spark, including performance tuning and optimization (job execution plans, broadcast joins, partitioning, skew handling, lazy evaluation)
    • Hands-on experience with AWS Cloud (minimum 2 years), including EMR and Glue
    • Familiarity with PySpark internals and concepts (Window functions, Broadcast joins, Sort & merge joins, Watermarking, UDFs, Lazy computation, Partition skew)
    • Practical experience with performance optimization of Spark jobs (MUST; a short tuning sketch follows this list)
    • Strong understanding of OOD principles and familiarity with SOLID (MUST)
    • Experience with cloud-native data platforms and lakehouse architectures
    • Comfortable with SQL & NoSQL databases
    • Experience with testing practices such as TDD, unit testing, and integration testing
    • Strong problem-solving skills and a collaborative mindset
    • Upper-Intermediate or higher level of English (spoken and written)
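
    To make the Spark tuning topics above concrete, here is a small, hedged PySpark fragment (dataset paths and column names are invented for the example) showing a broadcast join and key salting for partition skew:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/events/")        # large fact table (placeholder path)
    countries = spark.read.parquet("s3://example-bucket/countries/")  # small dimension table

    # Broadcast join: ship the small dimension to every executor instead of shuffling the large table.
    enriched = events.join(F.broadcast(countries), on="country_code", how="left")

    # Skew handling by salting: split hot keys across N buckets, aggregate, then re-aggregate.
    N = 16
    partial = (enriched
               .withColumn("salt", (F.rand() * N).cast("int"))
               .groupBy("country_code", "salt")
               .agg(F.count("*").alias("cnt")))
    totals = partial.groupBy("country_code").agg(F.sum("cnt").alias("events"))

    totals.explain()  # inspect the physical plan (job execution plan) before running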

     

    Your tasks will include:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency
    • Provide expertise in data modeling and database design to ensure the scalability and reliability of data platforms
    • Contribute to the development of best practices and standards for data engineering processes
    • Stay updated on emerging technologies and trends in the data engineering landscape

     

    We offer:

    • Delivering high-end software projects that address real-world problems
    • Surrounding experts who are ready to move forward professionally
    • Professional growth plan and team leader support
    • Taking ownership of R&D and socially significant projects
    • Participation in worldwide tech conferences and competitions
    • Taking part in regular educational activities
    • Being a part of a multicultural company with a fun and lighthearted atmosphere
    • Working from anywhere with flexible working hours
    • Paid vacation and sick leave days

       

    Join Quantum and take a step toward your data-driven future.

  • · 20 views · 1 application · 1d

    Senior\Lead Data Engineer

    Full Remote · Ukraine · 4 years of experience · Upper-Intermediate

    Job Description

    WHAT WE VALUE

    Most importantly, you can see yourself contributing and thriving in the position described above. How you gained the skills needed for doing that is less important.

    We expect you to be good at and have had hands-on experience with the following:

    • Expert in T-SQL
    • Proficiency in Python
    • Experience with Microsoft cloud data services, including but not limited to Azure SQL and Azure Data Factory
    • Experience with Snowflake, star schema, and data modeling; experience with migrations to Snowflake will be an advantage
    • Experience with, or strong interest in, dbt (data build tool) for transformations, testing, validation, data quality, etc.
    • English - Upper Intermediate

    On top of that, it would be an advantage to have knowledge of or interest in the following:

    • Some proficiency in C# .NET
    • Security-first mindset, with knowledge of how to implement row-level security, etc.
    • Agile development methodologies and DevOps / DataOps practices such as continuous integration, continuous delivery, and continuous deployment. For example, automated DB validations and deployment of DB schema using DACPAC.

    As a person, you have following traits:

    • Strong collaborator with teammates and stakeholders
    • Clear communicator who speaks up when needed.

    Job Responsibilities

    WHAT YOU WILL BE RESPONSIBLE FOR

    Ensure quality in our data solutions so that we deliver good data quality across multiple customer tenants every time we release.

    Work together with the Product Architect on defining and refining the data architecture and roadmap.

    Facilitate the migration of our current data platform towards a more modern tool stack that can be maintained more easily by both data engineers and software engineers.

    Ensure that new data entities get implemented in the data model using schemas that are appropriate for their use, facilitating good performance and analytics needs.

    Guide and support people in other roles (engineers, testers, etc.) to spread data knowledge and experience more broadly across the team.

    Department/Project Description

    WHO WE ARE

    For over 50 years, we have worked closely with investment and asset managers to become the world's leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educations, professional experiences, ages, and backgrounds in general.

    SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients.

    SimCorp is an equal-opportunity employer. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients.

     

    WHY THIS ROLE IS IMPORTANT TO US

    You will be joining an innovative application development team within SimCorp's Product Division. As a primary provider of SaaS offerings based on next-generation technologies, our Digital Engagement Platform is a cloud-native data application developed on Azure, utilizing SRE methodologies and continuous delivery. Your contribution to evolving DEP's data platform will be vital in ensuring we can scale to future customer needs and support future analytics requirements. Our future growth as a SaaS product is rooted in a cloud-native strategy that emphasizes adopting a modern data platform tool stack and the application of modern engineering principles as essential components.

    We are looking into a technology shift from Azure SQL to Snowflake in order to meet new client demands for scalability. You will be an important addition to the team for achieving this goal.

  • · 85 views · 19 applications · 10 May

    Data Engineer

    Full Remote · Worldwide · 4 years of experience · Upper-Intermediate

    We are Uvik Software, a successful software development company with a global presence on the world market, and we work with the world's most successful companies.
     

    We seek a highly skilled and autonomous 🟣 Data Engineer 🟣 to join our dynamic team. This role requires a blend of technical expertise, creative problem-solving, and leadership to drive projects from concept to deployment.
     

    💻 Key Responsibilities:
     

    • Develop and implement robust data models and software architectures.
    • Utilize Python and advanced ML libraries to build and deploy AI systems.
    • Engage in machine learning engineering, particularly in NLP/NLU and language model development using platforms like GPT (a brief generation example follows this list).
    • Stay abreast of current trends in AI, including MLLMs, AI Agents, and RAG technologies.
    • Lead and guide teams through the project lifecycle to meet strategic business goals.
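
    As a tiny, hedged illustration of the language-model tooling this role touches (the model and prompt are arbitrary examples, not the company's stack):

    from transformers import pipeline

    # Generate a continuation with a small GPT-style model from the Hugging Face hub.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("Reliable data pipelines matter because", max_new_tokens=30)
    print(result[0]["generated_text"])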
       

    Qualifications:
     

    • Profound knowledge of data structures, data modelling, and software architecture principles.
    • Expertise in Python.
    • Proven track record in the engineering and deployment of AI systems.
    • Strong interest and experience in NLP/NLU and developing language models.
    • Familiarity with major cloud platforms including GCP, Azure, and AWS.
    • Excellent problem-solving, communication, and leadership skills.
       

    Nice-to-Have:
     

    • Experience in startup environments, ideally scaling new ventures from ground zero.
    • Hands-on experience with major ML libraries.
    • Active engagement with the AI community, whether through research, presentations, or contributions to open-source projects.
    • Experience with innovative interfaces like SMS apps, browser extensions, or interactive modules.
    • Technical proficiency in React/Next.js, FastAPI, MongoDB, and Marqo AI Vector DB.

      We offer:
       

    βœ”οΈ12 sick leaves and 18 paid vacation business days per year

    βœ”οΈComfortable work conditions (including MacBook Pro and Dell monitor in each workplace)

    βœ”οΈSmart environment

    βœ”οΈInteresting projects from renowned clients

    βœ”οΈFlexible work schedule

    βœ”οΈCompetitive salary according to the qualifications

    βœ”οΈGuaranteed full workload during the term of the contract

    βœ”οΈCorporate leisure activities

    βœ”οΈGame, lounge, sports zones.

     

  • · 178 views · 46 applications · 30d

    Middle Python / Data Engineer

    Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate

    Involvement: ~15–20 hours/week
    Start Date: ASAP
    Location: Remote
    Client: USA-based
    Project: Legal IT – AI-powered legal advisory platform

     

    About the Project

    Join a growing team behind Legal IT, an intelligent legal advisory platform that simplifies legal support for businesses. The platform features:

    - A robust contract library

    - AI-assisted document generation & guidance

    - Interactive legal questionnaires

    - A dynamic legal blog with curated insights

     

    We're building out advanced AI-driven proofs of concept (PoCs) and are looking for a strong Python/Data Engineer to support the backend logic and data pipelines powering these tools.

     

    Core Responsibility

    - Collaborate directly with the AI Architect to develop and iterate on proof-of-concept features with ongoing development

     

    Being a part of 3asoft means having:
    - High level of flexibility and freedom
    - p2p relationship with worldwide customers
    - Competitive compensation paid in USD
    - Fully remote working

  • · 36 views · 1 application · 29d

    Senior Data Engineer

    Full Remote · Poland · 5 years of experience · Upper-Intermediate

    As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, Spark, using Python and various cloud-based solutions.

     

    Key Responsibilities:

    • Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources.
    • Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
    • Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
    • Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
    • Contribute to the development of best practices and standards for data engineering processes.
    • Stay updated on emerging technologies and trends in the data engineering landscape.

     

    Required Skills and Qualifications:

    • Bachelor's Degree in Computer Science or related field.
    • Minimum of 5 years of experience in tech lead data engineering or architecture roles.
    • Proficiency in Python and PySpark for ETL development and data processing.
    • At least 2 years of experience with AWS cloud
    • Extensive experience with cloud-based data platforms, particularly EMR.
    • Strong knowledge of Spark (must have).
    • Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
    • Leadership experience, with a proven track record of leading data engineering teams.

     

    Benefits

     

    • 20 days of paid vacation, 5 sick leave days
    • National holidays observed
    • Company-provided laptop

     

     

  • · 62 views · 2 applications · 29d

    Middle Data Support Engineer (Python, SQL)

    Ukraine · 3 years of experience · Upper-Intermediate

    N-iX is looking for a Middle Data Support Engineer to join our team. Our customer is the leading school transportation solutions provider in North America. Every day, the company completes 5 million student journeys, moving more passengers than all U.S. airlines combined, and delivers reliable, quality services, including full-service transportation and management, special-needs transportation, route optimization and scheduling, maintenance, and charter services for 1,100 school district contracts.

     

    Responsibilities:

    • Provide support in production and non-production environments (Azure cloud)
    • Install, configure and provide day-to-day support after implementation, including off hours as needed;
    • Troubleshoot defects and errors, and resolve arising problems;
    • Plan, test, and implement server upgrades, maintenance fixes, and vendor-supplied patches;
    • Help in resolving incidents;
    • Monitor ETL jobs;
    • Perform small enhancements (Azure/SQL). 

       

    Requirements:

    • Proven knowledge of Python and 3+ years of hands-on experience with it
    • Proficiency in RDBMS systems (MS SQL experience as a plus);
    • Experience with Azure cloud services;
    • Understanding of Azure Data Lake / Storage Accounts;
    • Experience creating and managing data pipelines in Azure Data Factory;
    • Upper Intermediate/Advanced English level.

       

    Nice to have:

    • Experience with administration of Windows Server 2012 and higher;
    • Experience with AWS, Snowflake, Power BI;
    • Experience with technical support;
    • Experience in .Net.

       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • · 44 views · 5 applications · 29d

    Data engineer (relocation to Berlin)

    Office Work · Germany · 5 years of experience · Upper-Intermediate

    At TechBiz Global, we provide recruitment services to our top clients from our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

     

    About the Data Solution Team

    As a Data Engineer, you will join our Data Solution Team, which drives our data-driven innovation. The team is pivotal to powering our business processes and enhancing customer experiences through effective data utilization. Our focus areas include:
     

    ● Developing integrations between systems.

    ● Analyzing customer data to derive actionable insights.

    ● Improving customer experience by leveraging statistical and machine learning models.

    Our tech stack includes:

    ● Cloud & Infrastructure: AWS (S3, EKS, Quicksight, and monitoring tools).

    ● Data Engineering & Analytics: Apache Spark (Scala and PySpark on Databricks), Apache Kafka (Confluent Cloud).

    ● Infrastructure as Code: Terraform.

    ● Development & Collaboration: BitBucket, Jira.

    ● Integration Tools & APIs: Segment.io, Blueshift, Zendesk, Google Maps API, and other external systems

     

    Job requirements

    As A Data Engineer, You Will:

    ● Design, build, and own near-real-time and batch data processing workflows (a streaming sketch follows this list).

    ● Develop efficient, low-latency data pipelines and systems.

    ● Maintain high data quality while ensuring GDPR compliance.

    ● Analyze customer data and extract insights to drive business decisions.

    ● Collaborate with Product, Backend, Marketing, and other teams to deliver impactful features.

    ● Help Data scientists deliver ML/AI solutions.
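
    A hedged sketch of the kind of near-real-time workflow listed above, using PySpark Structured Streaming with Kafka (broker, topic, schema, and sink paths are placeholders):

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("near-real-time-sketch").getOrCreate()

    schema = StructType([
        StructField("user_id", StringType()),
        StructField("event", StringType()),
        StructField("ts", TimestampType()),
    ])

    # Read a Kafka topic as a stream and parse the JSON payload.
    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "customer-events")
           .load())
    events = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")

    # Watermarked five-minute counts per event type, written to a parquet sink with checkpointing.
    counts = (events.withWatermark("ts", "10 minutes")
              .groupBy(F.window("ts", "5 minutes"), "event")
              .count())
    (counts.writeStream.outputMode("append")
     .format("parquet")
     .option("path", "s3://example-bucket/event-counts/")
     .option("checkpointLocation", "s3://example-bucket/checkpoints/event-counts/")
     .start())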

     

    Requirements:

    ● 5+ years of experience as a Data Engineer, with expertise in Apache Spark using Python and Scala.

    ● 3+ years of experience with Apache Kafka.

    ● Management experience or Tech Lead experience

    ● Strong proficiency in SQL.

    ● Experience with CI/CD processes and platforms.

    ● Hands-on experience with cloud technologies such as AWS, GCP or Azure.

    ● Familiarity with Terraform.

    ● Comfortable working in an agile environment.

    ● Excellent problem-solving and self-learning skills, with the ability to operate both independently and as part of a team.

     

    Nice to have:

    ● Hands-on experience with Databricks.

    ● Experience with document databases, particularly Amazon DocumentDB.

    ● Familiarity with handling high-risk data.

    ● Exposure to BI tools such as AWS Quicksight or Redash.

    ● Work experience in a Software B2C company, especially in the FinTech industry.

     

    What we offer:

    Our goal is to set up a great working environment. Become part of the process and:

    ● Shape the future of our organization as part of the international founding team.

    ● Take on responsibility from day one.

    ● Benefit from various coaching and training opportunities, including a Sports Subscription, German classes, and a €1,000 yearly self-development budget.

    ● Work in a hybrid working model from the comfortable Berlin office

    ● Enjoy a modern workplace in the heart of Berlin with drinks, fresh fruit, kicker and ping pong

  • · 44 views · 1 application · 28d

    Data Engineer

    Hybrid Remote · Slovakia · 4 years of experience · Upper-Intermediate

    Now is an amazing time to join our company as we continue to empower innovators to change the world. We provide top-tier technology consulting, R&D, design, and software development services across the USA, UK, and EU markets. And this is where you come in!

    We are looking for a Skilled Data Engineer to join our team.

    About the Project

    We're launching a Snowflake Proof of Concept (PoC) for a leading football organization in Germany. The project aims to demonstrate how structured and well-managed data can support strategic decision-making in the sports domain.

    Key Responsibilities

    • Define data scope and identify data sources
    • Design and build the data architecture
    • Implement ETL pipelines into a data lake (a minimal loading sketch follows this list)
    • Ensure data quality and consistency
    • Collaborate with stakeholders to define analytics needs
    • Deliver data visualizations using Power BI
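
    As a hedged sketch of the PoC loading step described above (account, credentials, and table names are placeholders, not client details):

    import pandas as pd
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas

    # Placeholder credentials; in practice these would come from a secrets manager.
    conn = snowflake.connector.connect(
        account="xy12345.eu-central-1", user="ETL_USER", password="***",
        warehouse="POC_WH", database="SPORTS_POC", schema="RAW",
    )

    # Load one source extract into a raw table for the PoC.
    df = pd.read_csv("match_events.csv")  # hypothetical source file
    write_pandas(conn, df, table_name="MATCH_EVENTS", auto_create_table=True)

    # A simple quality check before the table is exposed to Power BI.
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM MATCH_EVENTS WHERE MATCH_ID IS NULL")
    print("rows with missing MATCH_ID:", cur.fetchone()[0])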

    Required Skills

    • Strong experience with Snowflake, ETL pipelines, and data lakes
    • Power BI proficiency
    • Knowledge of data architecture and modeling
    • Data quality assurance expertise
    • Solid communication in English (B2+)

    Nice to Have

    • Familiarity with GDPR
    • Experience in sports or media-related data projects
    • Experience with short-term PoCs and agile delivery

    What We Offer

    • Contract for the PoC phase with potential long-term involvement
    • All cloud resources and licenses provided by the client
    • Hybrid/onsite work in Bratislava
    • Opportunity to join a meaningful data-driven sports project with European visibility

    📬 Interested? Send us your CV and hourly rate (EUR).

    We're prioritizing candidates based in Bratislava or elsewhere in Europe.

    Interview Process:

    1️⃣ internal technical interview
    2️⃣ interview with the client

  • · 65 views · 6 applications · 27d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦

    We are Boosta, an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.

    We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.

    Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.

     

    WHAT YOU’LL DO

    • Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
    • Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the related stakeholders.
    • Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place.
    • Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection, using tools like Great Expectations or similar (an illustration of these checks follows this list).
    • Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
    • Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
    • Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
    • Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
    • Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
    • Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
    • Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
    • Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
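
    A plain-Python illustration of the validation checks mentioned above (the column contract is invented; a production setup would typically express these as Great Expectations suites or dbt tests):

    import pandas as pd

    EXPECTED_COLUMNS = {"partner_id", "click_id", "payout", "event_time"}  # assumed data contract

    def validate_affiliate_batch(df: pd.DataFrame) -> list:
        """Return a list of data-quality issues found in one ingested batch."""
        issues = []

        # Schema drift: columns added or removed relative to the agreed contract.
        drift = EXPECTED_COLUMNS.symmetric_difference(df.columns)
        if drift:
            issues.append(f"schema drift: {sorted(drift)}")

        # Null / missing value tracking on key fields.
        for col in ("partner_id", "click_id"):
            if col in df.columns and df[col].isna().any():
                issues.append(f"{col}: {int(df[col].isna().sum())} missing values")

        # Duplicate detection on the business key.
        if {"partner_id", "click_id"} <= set(df.columns):
            dupes = int(df.duplicated(subset=["partner_id", "click_id"]).sum())
            if dupes:
                issues.append(f"{dupes} duplicate (partner_id, click_id) rows")

        return issues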

       

    WHAT WE EXPECT FROM YOU

    • Strong proficiency in SQL and Python.
    • Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
    • Proven ability to design, deploy, and scale ETL/ELT pipelines.
    • Hands-on experience integrating and automating data from various platforms.
    • Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
    • Skilled in orchestration tools like Airflow, Prefect, or Dagster.
    • Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
    • Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
    • Experience with Docker to build isolated and reproducible environments for data workflows.
    • Exposure to iGaming data structures and KPIs is a strong advantage.
    • Strong sense of data ownership, documentation, and operational excellence.

       

    HOW IT WORKS

    • Stage 1: pre-screen with a recruiter.
    • Stage 2: test task.
    • Stage 3: interview.
    • Stage 4: bar-raising.
    • Stage 5: reference check.
    • Stage 6: job offer!

    The trial period for this position is 3 months, during which we will get used to working together.

     

    WHAT WE OFFER

    • 28 business days of paid time off
    • Flexible hours and the possibility to work remotely
    • Medical insurance and mental health care
    • Compensation for courses, trainings
    • English classes and speaking clubs
    • Internal library, educational events
    • Outstanding corporate parties, teambuildings

     

  • · 49 views · 1 application · 27d

    Data Engineer 2070/06 to $5500

    Office Work · Poland · 3 years of experience · Upper-Intermediate

    Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such as Walmart, Barclaycard, and Ford.

     

    The company has expanded to over 700 employees, with 15 global offices spanning four continents. With the imminent opening of a new office in Warsaw, we are seeking experienced Data Engineers to join their expanding team.

     

    The Data Engineer will be responsible for developing, designing, and maintaining end-to-end optimized, scalable Big Data pipelines for our products and applications. In this role, you will collaborate closely with team leads across various departments and receive support from peers and experts across multiple fields.

     

    Opportunities:

     

    • Possibility to work in a successful company
    • Career and professional growth
    • Competitive salary
    • Hybrid work model (3 days per week work from office space in the heart of Warsaw city)
    • Long-term employment with 20 working days of paid vacation, sick leaves, and national holidays

     

    Responsibilities:

     

    • Follow and promote best practices and design principles for Big Data ETL jobs
    • Help in technological decision-making for the business's future data management and analysis needs by conducting POCs
    • Monitor and troubleshoot performance issues on data warehouse/lakehouse systems
    • Provide day-to-day support of data warehouse management
    • Assist in improving data organization and accuracy
    • Collaborate with data analysts, scientists, and engineers to ensure best practices in terms of technology, coding, data processing, and storage technologies
    • Ensure that all deliverables adhere to our world-class standards

     

    Skills:

     

    • 3+ years of overall experience in Data Warehouse development and database design
    • Deep understanding of distributed computing principles
    • Experience with AWS cloud platform, and big data platforms like EMR, Databricks, EC2, S3, Redshift
    • Experience with Spark, PySpark, Hive, Yarn, etc.
    • Experience in SQL and NoSQL databases, as well as experience with data modeling and schema design
    • Proficiency in programming languages such as Python for implementing data processing algorithms and workflows
    • Experience with Presto and Kafka is a plus
    • Experience with DevOps practices and tools for automating deployment, monitoring, and management of big data applications is a plus
    • Excellent communication, analytical, and problem-solving skills
    • Knowledge of scalable service architecture
    • Experience in scalable data processing jobs on high-volume data
    • Self-starter, proactive, and able to work to deadlines
    • Nice to have: Experience with Scala

     

    If you are looking for an environment where you can grow professionally, learn from the best in the field, balance work and life, and enjoy a pleasant and enthusiastic atmosphere, submit your CV today and become part of our team!

    Everything you do will help us lead the programmatic industry and make it better.

  • · 56 views · 6 applications · 26d

    Consultant Data Engineer (Python/Databricks)

    Part-time · Full Remote · Worldwide · 5 years of experience · Upper-Intermediate

    Softermii is looking for a part-time Data Engineering Consultant / Tech Lead who will do technical interviews, assist with upcoming projects, and occasionally be hands-on with complex development tasks, including data pipeline design and solution optimization on Databricks.

     


    Type of cooperation: Part-time

     

    ⚡️ Your responsibilities on the project will be:

    • Interview and hire Data Engineers
    • Supervise the work of other engineers and be hands-on with the most complicated tasks from the backlog, focusing on unblocking other data engineers in case of technical difficulties
    • Develop and maintain scalable data pipelines using Databricks (Apache Spark) for batch and streaming use cases.
    • Work with data scientists and analysts to provide reliable, performant, and well-modeled data sets for analytics and machine learning.
    • Optimize and manage data workflows using Databricks Workflows and orchestrate jobs for complex data transformation tasks.
    • Design and implement data ingestion frameworks to bring data from various sources (files, APIs, databases) into Delta Lake (a minimal upsert sketch follows this list).
    • Ensure data quality, lineage, and governance using tools such as Unity Catalog, Delta Live Tables, and built-in monitoring features.
    • Collaborate with cross-functional teams to understand data needs and support production-grade machine learning workflows.
    • Apply data engineering best practices: versioning, testing (e.g., with pytest or dbx), documentation, and CI/CD pipelines
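
    A hedged sketch of the Delta Lake ingestion/upsert pattern referenced above (paths, keys, and table layout are placeholders):

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()  # on Databricks a session already exists

    # A new batch of raw records landed by an ingestion job (placeholder path).
    updates = spark.read.json("/mnt/raw/customers/2024-06-01/")

    target = DeltaTable.forPath(spark, "/mnt/lake/silver/customers")

    # Idempotent upsert into the silver table, keyed on customer_id.
    (target.alias("t")
     .merge(updates.alias("s"), "t.customer_id = s.customer_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())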



     

    🕹 Tools we use: Jira, Confluence, Git, Figma

     

    🗞 Our requirements for you:

    • 5+ years of experience in data engineering or big data development, with production-level work.
    • Architect and develop scalable data solutions on the Databricks platform, leveraging Apache Spark, Delta Lake, and the lakehouse architecture to support advanced analytics and machine learning initiatives.
    • Design, build, and maintain production-grade data pipelines using Python (or Scala) and SQL, ensuring efficient data ingestion, transformation, and delivery across distributed systems.
    • Lead the implementation of Databricks features such as Delta Live Tables, Unity Catalog, and Workflows to ensure secure, reliable, and automated data operations.
    • Optimize Spark performance and resource utilization, applying best practices in distributed computing, caching, and tuning for large-scale data processing.
    • Integrate data from cloud-based sources (e.g., AWS S3), ensuring data quality, lineage, and consistency throughout the pipeline lifecycle.
    • Manage orchestration and automation of data workflows using tools like Airflow or Databricks Jobs, while implementing robust CI/CD pipelines for code deployment and testing.
    • Collaborate cross-functionally with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights through robust data infrastructure.
    • Mentor and guide junior engineers, promoting engineering best practices, code quality, and continuous learning within the team.
    • Ensure adherence to data governance and security policies, utilizing tools such as Unity Catalog for access control and compliance.
    • Continuously evaluate new technologies and practices, driving innovation and improvements in data engineering strategy and execution.
    • Experience in designing, building, and maintaining data pipelines using Apache Airflow, including DAG creation, task orchestration, and workflow optimization for scalable data processing.
    • Upper-Intermediate English level.

     

     

    πŸ‘¨β€πŸ’»Who will you have the opportunity to meet during the hiring process (stages):
    Call, HR, Tech interview, PM interview.

     

    🥯 What we can offer you:

    • We have stable and highly-functioning processes – everyone has their own role and clear responsibilities, so decisions are made quickly and without unnecessary approvals.
    • You will have enough independence to make decisions that can affect not only the project but also the work of the company.
    • We are a team of like-minded experts who create interesting products during working hours and enjoy spending free time together.
    • Do you like to learn something new in your profession or do you want to improve your English? We will be happy to pay 50% of the cost of courses/conferences/speaking clubs.
    • Do you want an individual development plan? We will form one especially for you + you can count on mentoring from our seniors and leaders.
    • Do you have a friend who is currently looking for new job opportunities? Recommend them to us and get a bonus.
    • And what if you want to relax? Then we have 21 working days off.
    • What if you are feeling bad? You can take 5 sick leaves a year.
    • Do you want to volunteer? We will add you to a chat, where we can get a bulletproof vest, buy a pickup truck or send children's drawings to the front.
    • And we have the most empathetic HRs (who also volunteer!). So we are ready to support your well-being in various ways.

     

    πŸ‘¨β€πŸ«A little more information that you may find useful:

    - our adaptation period lasts 3 months; this period of time is enough for us to understand each other better;

    - there is a performance review after each year of our collaboration where we use a skills map to track your growth;

    - we really have no boundaries in the truest sense of the word – we have a flexible working day, and how you organize it is up to you.

     

    Of course, we have a referral bonus system.

  • · 135 views · 37 applications · 22d

    Middle+ Data Engineer

    Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate

    Start Date: ASAP
    Weekly Hours: ~15–20 hours
    Location: Remote
    Client: USA-based LegalTech Platform

     

    About the Project

    Join a growing team working on an AI-powered legal advisory platform designed to simplify and streamline legal support for businesses. The platform includes:

    • A robust contract library
    • AI-assisted document generation and guidance
    • Interactive legal questionnaires
    • A dynamic legal insights blog

       

    We're currently developing a Proof of Concept (PoC) for an advanced AI agent and are looking for a skilled Python/Data Engineer to support core backend logic and data workflows.

     

    Your Core Responsibilities

    • Design and implement ETL/ELT pipelines in the context of LLMs and AI agents
    • Collaborate directly with the AI Architect on PoC features and architecture
    • Contribute to scalable, production-ready backend systems for AI components
    • Handle structured and unstructured data processing
    • Support data integrations with vector databases and AI model inputs

     

    Must-have experience with:

    • Python (3+ years)
    • FastAPI
    • ETL / ELT pipelines
    • Vector Databases (e.g., Pinecone, Weaviate, Qdrant; a small ingestion-and-search sketch follows this list)
    • pandas, numpy, unstructured.io
    • Working with transformers and LLM-adjacent tools
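
    A small, hedged sketch of the vector-database workflow these requirements point at, using Qdrant's in-memory mode and an open embedding model as stand-ins (not necessarily the project's actual stack):

    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, VectorParams, PointStruct
    from sentence_transformers import SentenceTransformer

    client = QdrantClient(":memory:")                    # local stand-in for a hosted vector DB
    model = SentenceTransformer("all-MiniLM-L6-v2")      # 384-dimensional embeddings

    client.recreate_collection(
        collection_name="contract_clauses",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    clauses = [
        "The supplier shall indemnify the customer against third-party claims.",
        "Either party may terminate this agreement with 30 days' written notice.",
    ]
    vectors = model.encode(clauses)

    client.upsert(
        collection_name="contract_clauses",
        points=[PointStruct(id=i, vector=v.tolist(), payload={"text": t})
                for i, (v, t) in enumerate(zip(vectors, clauses))],
    )

    # Retrieve the clause most relevant to a question, e.g. as context for an LLM.
    hits = client.search(
        collection_name="contract_clauses",
        query_vector=model.encode("How can the contract be terminated?").tolist(),
        limit=1,
    )
    print(hits[0].payload["text"])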

     

    Being a part of 3asoft means having:
    - High level of flexibility and freedom
    - p2p relationship with worldwide customers
    - Competitive compensation paid in USD
    - Fully remote working

  • · 39 views · 6 applications · 22d

    Data Engineer (Azure stack)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate

    Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client's platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

    Key Responsibilities:
    - Create and manage scalable data pipelines with Azure SQL and other databases (a short read/transform/write sketch follows this list).
    - Use Azure Data Factory to automate data workflows.
    - Write efficient Python code for data analysis and processing.
    - Use Docker for application containerization and deployment streamlining.
    - Manage code quality and version control with Git.
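
    A hedged sketch of the day-to-day pattern implied above (server, credentials, and table names are placeholders), reading from Azure SQL with pandas/SQLAlchemy and writing a derived table back:

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder Azure SQL connection string (SQL authentication, ODBC Driver 18).
    engine = create_engine(
        "mssql+pyodbc://etl_user:***@example-server.database.windows.net:1433/water_monitoring"
        "?driver=ODBC+Driver+18+for+SQL+Server"
    )

    # Pull raw sensor readings, derive a daily water-quality summary, and write it back.
    readings = pd.read_sql("SELECT sensor_id, measured_at, ph, turbidity FROM raw_readings", engine)
    daily = (readings
             .assign(day=pd.to_datetime(readings["measured_at"]).dt.date)
             .groupby(["sensor_id", "day"], as_index=False)
             .agg(avg_ph=("ph", "mean"), max_turbidity=("turbidity", "max")))
    daily.to_sql("daily_quality", engine, if_exists="replace", index=False)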

    Skills Requirements:
    - 3+ years of experience with Python.
    - 2+ years of experience as a Data Engineer.
    - Strong SQL knowledge, preferably with Azure SQL experience.
    - Python skills for data manipulation.
    - Expertise in Docker for app containerization.
    - Familiarity with Git for managing code versions and collaboration.
    - Upper-intermediate level of English.

    Optional Skills (as a plus):
    - Experience with Azure Data Factory for orchestrating data processes.
    - Experience developing APIs with FastAPI or Flask.
    - Proficiency in Databricks for big data tasks.
    - Experience in a dynamic, agile work environment.
    - Ability to manage multiple projects independently.
    - Proactive attitude toward continuous learning and improvement.

    We offer:

    - Great networking opportunities with international clients, challenging tasks;

    - Building interesting projects from scratch using new technologies;

    - Personal and professional development opportunities;

    - Competitive salary fixed in USD;

    - Paid vacation and sick leaves;

    - Flexible work schedule;

    - Friendly working environment with minimal hierarchy;

    - Team building activities and corporate events.

  • · 106 views · 2 applications · 21d

    Middle/Senior Data Engineer (3445)

    Full Remote · Ukraine · 3 years of experience · Intermediate

    General information:
    We're ultimately looking for someone who understands data flows well, has strong analytical thinking, and can grasp the bigger picture. If you're the kind of person who asks the right questions and brings smart ideas to the table, some specific requirements can be flexible; we're more interested in finding "our person" :)
     

    Responsibilities:
    Implementation of business logic in the Data Warehouse in accordance with the specifications
    Some business analysis is required to enable providing the relevant data in a suitable manner
    Conversion of business requirements into data models
    Pipeline management (ETL pipelines in Data Factory)
    Load and query performance tuning
    Working with senior staff on the customer's side who will provide requirements, while the engineer may propose their own ideas
     

    Requirements:
    Experience with Azure and readiness to work (up to 80% of time) with SQL is a must
    Development of database systems (MS SQL/T-SQL, SQL)
    Writing well-performing SQL code and investigating & implementing performance measures
    Data warehousing / dimensional modeling
    Working within an Agile project setup
    Creation and maintenance of Azure DevOps & Data Factory pipelines
    Developing robust data pipelines with DBT

    Experience with Databricks (optional)
    Work in Supply Chain & Logistics and awareness of the SAP MM data structure (optional).

     
