Jobs: Data Engineer (162)

  • 25 views · 2 applications · 2d

    Middle Data Engineer

    Full Remote · Croatia, Poland, Romania, Slovakia, Ukraine · 4 years of experience · English - B2

    Description:

    Our Client is a Fortune 500 company and is one of the biggest global manufacturing companies operating in the fields of industrial systems, worker safety, health care, and consumer goods. The company is dedicated to creating the technology and products that advance every business, improve every home, and enhance every life.

    Minimum Requirements:

    • Minimum of 4 years of experience in SQL and Python programming languages, specifically for data engineering tasks.
    • Proficiency in working with cloud technologies such as Azure or AWS.
    • Experience with Spark and Databricks or similar big data processing and analytics platforms
    • Experience working with large data environments, including data processing, data integration, and data warehousing.
    • Experience with data quality assessment and improvement techniques, including data profiling, data cleansing, and data validation.
    • Familiarity with data lakes and their associated technologies, such as Azure Data Lake Storage, AWS S3, or Delta Lake, for scalable and cost-effective data storage and management.
    • Experience with NoSQL databases, such as MongoDB or Cosmos, for handling unstructured and semi-structured data.
    • Fluent English

    Additional Skillset (Nice to Have):

    • Familiarity with Agile and Scrum methodologies, including working with Azure DevOps and Jira for project management.
    • Knowledge of DevOps methodologies and practices, including continuous integration and continuous deployment (CI/CD).
    • Experience with Azure Data Factory or similar data integration tools for orchestrating and automating data pipelines.
    • Ability to build and maintain APIs for data integration and consumption.
    • Experience with data backends for software platforms, including database design, optimization, and performance tuning.

    Job responsibilities:

    • Responsible for the building, deployment, and maintenance of mission-critical analytics solutions that process data quickly at big-data scales
    • Responsible for the design and implementation of data integration pipelines
    • Contributes design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and loading across multiple data storages
    • Take part in the full cycle of feature development (requirements analysis, decomposition, design, etc)
    • Contribute to the overall quality of development services through brainstorming, unit testing, and proactive offering of different improvements and innovations.
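    For candidates gauging the expected level, here is a minimal PySpark sketch of the kind of batch pipeline described above (raw ingestion, cleansing, load into Delta Lake). The storage paths, column names, and schema are illustrative assumptions, not details from the client.

```python
# Minimal batch ETL sketch (illustrative only).
# Assumes a Databricks-style runtime with Delta Lake available;
# all paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

# Extract: raw CSV files landed in the data lake.
raw = (
    spark.read
    .option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/2024-06-01/")
)

# Transform: type casting, basic cleansing, deduplication.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)

# Load: append into a partitioned Delta table in the curated zone.
(
    clean.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```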
  • 20 views · 2 applications · 2d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · English - B2

    Project Description

    A new long-term project for an Energy client, where we will create an AI-integrated application for comprehensive data analysis. You will work closely with the customer's stakeholders as part of the Scrum team.

     

     

    Technical Requirements (Must Have):
    Python – 5+ years, production code (not just notebooks)
    SQL / PostgreSQL – 5+ years, complex queries, optimization
    Apache Kafka – event streaming, consumers, producers
    pandas / numpy – expert level, large datasets (1M+ rows)
    scikit-learn – clustering algorithms, metrics, hyperparameter tuning
    ETL Pipelines – 4+ years building production data pipelines
    Text Processing – tokenization, cleaning, encoding handling
    Git – branching, PRs, code reviews
    English – B2+ written and verbal
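    A compressed sketch of how the listed pieces typically fit together on a project like this: a Kafka consumer feeding pandas and a scikit-learn clustering step. The topic name, broker address, and message shape are assumptions, not project specifics.

```python
# Illustrative sketch only: consume text events from Kafka, then cluster them.
# Assumes kafka-python and scikit-learn are installed; the topic name,
# bootstrap server, and JSON message shape are hypothetical.
import json

import pandas as pd
from kafka import KafkaConsumer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

consumer = KafkaConsumer(
    "events.text",                      # hypothetical topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,           # stop polling after 5 s of silence
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Collect a batch of messages into a DataFrame (expects a "text" field).
records = [msg.value for msg in consumer]
if not records:
    raise SystemExit("no messages consumed")
df = pd.DataFrame(records)

# Vectorize and cluster the texts; k is a tuning choice, not a given.
vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
vectors = vectorizer.fit_transform(df["text"])
model = KMeans(n_clusters=8, n_init=10, random_state=42).fit(vectors)
df["cluster"] = model.labels_

print("silhouette:", silhouette_score(vectors, model.labels_))
```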

     

    Would Be a Plus
    Sentence-BERT / Transformers (HuggingFace ecosystem)
    MLflow or similar ML experiment tracking
    Topic Modeling (LDA, NMF)
    DBSCAN / Hierarchical Clustering
    FastAPI / Flask
    Azure DevOps
    Kafka Streams / ksqlDB
    BI & Visualization tools (Power BI, Tableau, Grafana, Apache Superset, Plotly/Dash, or similar)

    Nice to Have
    Energy / Utility / SCADA domain experience
    Time-series analysis
    Prometheus / Grafana monitoring
    On-premise ML infrastructure (no cloud APIs)
    Data modeling / dimensional modeling
    dbt (data build tool)

     

     

    Job Responsibilities

    Strong problem-solving and follow-up skills; must be proactive and take initiative
    Professionalism and ability to maintain the highest level of confidentiality
    Create robust code and translate business logic into project requirements
    Develop code using development best practices, and an emphasis on security best practices
    Leverage technologies to support business needs to attain high reusability and maintainability of current and newly developed systems
    Provide system design recommendations based on technical requirements
    Work independently on development tasks with a minimal amount of supervision

  • 61 views · 9 applications · 2d

    Data Engineer

    Full Remote · Worldwide · Product · 2 years of experience · English - None

    For our Partner, we are looking for a Data Engineer.

     

    Location: Remote / Flexible
    Format: Full-time

     

    Role Overview

    We are seeking a proactive and motivated Data Engineer who will design, develop, and maintain scalable data pipelines, build our DWH, integrate with external APIs, and support the data needs of our product, affiliate, and analytics teams.

    You will join a dynamic environment where datasets refresh hourly or daily, and data accuracy directly influences business results.

    This is a role for someone who likes autonomy, fast execution, and building systems from scratch.

     

    What You'll Do

    • Build and maintain our internal DWH based on PostgreSQL.
    • Work with medium and large datasets that require hourly/daily updates.
    • Work with DWH components: AWS (S3, Athena), PostgreSQL, BigQuery.
    • Collect data from Kafka, Google Analytics, partner APIs, and third-party systems.
    • Implement data quality and integrity automation, ensuring stable pipelines.
    • Develop and maintain ETL/ELT processes.
    • Support saving and productionizing ML models developed by other teams.
    • Create and support technical documentation for data workflows and integrations.
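    A minimal sketch of one of the integrations described above: pulling records from a partner API and loading them into the PostgreSQL DWH idempotently. The endpoint, DSN, table, and column names are hypothetical placeholders.

```python
# Illustrative sketch: pull records from a partner API and upsert them into
# a PostgreSQL staging table. Endpoint, credentials, table, and columns are
# hypothetical placeholders.
import requests
import psycopg2
from psycopg2.extras import execute_values

API_URL = "https://partner.example.com/api/v1/conversions"   # assumption
DSN = "postgresql://etl_user:secret@localhost:5432/dwh"       # assumption

resp = requests.get(API_URL, params={"date": "2024-06-01"}, timeout=30)
resp.raise_for_status()
rows = [(r["id"], r["partner"], r["amount"]) for r in resp.json()]

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    # Idempotent load: re-running for the same date must not duplicate rows.
    execute_values(
        cur,
        """
        INSERT INTO staging.conversions (id, partner, amount)
        VALUES %s
        ON CONFLICT (id) DO UPDATE
        SET partner = EXCLUDED.partner,
            amount  = EXCLUDED.amount
        """,
        rows,
    )
```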

       

    What We Expect From You

    Professional Experience

    • 2+ years of experience as a Data Engineer.
    • Strong Python development skills (clean, maintainable, high-performance code).
    • Advanced SQL expertise and deep understanding of PostgreSQL or similar databases.
    • Hands-on experience designing & implementing RESTful APIs (Aiohttp, Flask, FastAPI).
    • Practical work with relational databases: PostgreSQL, MS SQL, MySQL.
    • Experience with Docker/Kubernetes or similar containerization tools.
    • Basic knowledge of DBT, Databricks, Snowflake, Kubernetes, Airflow.
    • Solid understanding of OOP, data structures, algorithms, and computational complexity.

       

    What We Appreciate

    • Strong ownership mentality – you can independently drive tasks to completion.
    • Excellent analytical and problem-solving skills.
    • Attention to detail and reliability.
    • Clear communication and ability to collaborate with cross-functional teams.

       

    What We Offer

    • Competitive compensation package (details discussed individually).
    • Work in a fast-scaling iGaming company with strong technical and marketing teams.
    • Ability to influence architecture, tools, and internal processes from early stages.
    • Career growth opportunities 
    • Flexibility: remote-first culture and supportive environment.
    • Continuous learning: access to internal know-how, mentorship, and new tools.

       

    About Company

    Our partner is a fast-growing iGaming operator with a strong portfolio of high-performing casino & betting brands across LATAM, Tier-1, and global GEOs.
    We work with large-scale data, high-load systems, and complex tracking environments that power our marketing, product, and analytics teams.

    Our team values ownership, transparency, speed, and smart decision-making. We are building a modern data infrastructure that supports the entire business, and we're looking for a Data Engineer eager to make a real impact.

     

    Waiting for your CV 😉

  • 21 views · 3 applications · 2d

    Energy System Analyst (Market) / Software Developer

    Full Remote · Worldwide · 2 years of experience · English - B2

    About the Role

    We are seeking a skilled Energy System Analyst / Software Developer to join our team and contribute to the full lifecycle of energy system modelling and simulation projects. You will focus primarily on market modelling of European energy markets, power system optimization, capacity expansion planning, cost-benefit analysis, and economic assessment of infrastructure investments, while also engaging in data handling, visualization, and automation development.

    This role combines technical expertise in energy market fundamentals, analytical thinking, and software development skills to deliver high-quality modelling results and insights for our clients across the European energy sector, including TSOs, regional coordination bodies, and European institutions.

    Key Responsibilities

    • Lead and participate in the design, implementation, testing, and analysis of energy market models using tools such as Antares Simulator (including Antares Xpansion for capacity expansion studies).
    • Develop and maintain data management, automation, and market modelling pipelines to support large-scale European energy system studies (e.g., TYNDP, IoSN, regional adequacy assessments).
    • Conduct economic evaluations, cost-benefit analyses, and scenario-based simulations to support infrastructure investment decisions and cross-border interconnector assessments.
    • Prepare and maintain input datasets for market models, including demand profiles, generation portfolios, fuel prices, renewable time series, and transmission capacities.
    • Collaborate with client teams (TSOs, regional groups, European bodies) to ensure accuracy and consistency in data collection, validation, and central dataset maintenance.
    • Communicate modelling results, methodologies, and assumptions clearly to both technical and non-technical stakeholders through reports, presentations, and steering group meetings.
    • Contribute to technical documentation, methodology descriptions, and quality assurance of project deliverables.
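    As an illustration of the data-preparation work described above, a short pandas sketch that turns raw quarter-hourly load measurements into the hourly demand series a market model would consume. The file layout and column names are assumptions.

```python
# Illustrative sketch: prepare an hourly demand time series as model input.
# The CSV layout, zone column, and output path are assumptions.
import pandas as pd

# Raw quarter-hourly load measurements, one row per timestamp and zone.
load = pd.read_csv(
    "raw_load_2023.csv",                      # hypothetical file
    parse_dates=["timestamp"],
    usecols=["timestamp", "zone", "load_mw"],
)

# Resample to the hourly resolution most market models expect,
# averaging the quarter-hourly values within each hour and zone.
hourly = (
    load.set_index("timestamp")
        .groupby("zone")["load_mw"]
        .resample("1h")
        .mean()
        .unstack("zone")            # one column per bidding zone
        .interpolate(limit=3)       # patch short metering gaps only
)

# Quick sanity checks before the series goes into the model dataset.
print(hourly.isna().sum())          # remaining gaps per zone
print(hourly.max())                 # annual peak per zone

hourly.to_csv("demand_hourly_2023.csv")
```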

    Required Qualifications & Experience

    • 2–7 years of relevant professional experience in energy system modelling, market analysis, or power system planning.
    • 2+ years of proven experience in market modelling and economic analysis of energy systems.
    • Proficiency in Python for modelling, simulation, data processing, and automation.
    • Solid understanding of European energy market fundamentals: merit order dispatch, zonal pricing, cross-border exchanges, adequacy assessment, and capacity mechanisms.
    • Strong skills in data handling, processing, and visualization of large-scale energy datasets.
    • English language proficiency at B2 level or higher.
    • MSc degree in Electrical Engineering, Power Engineering, Energy Economics, Computer Science, or related field.

    Desirable Skills & Assets

    • Experience with PLEXOS for energy market modelling, production cost simulation, or capacity expansion planning.
    • Knowledge of open-source energy modelling tools such as PyPSA, Calliope, or OSeMOSYS, and the ability to benchmark or cross-validate results across platforms.
    • Familiarity with Antares Simulator and Antares Xpansion for adequacy studies and optimal investment modelling.
    • Knowledge of optimization theory and mathematical programming (LP, MILP) as applied to energy system planning.
    • Understanding of European regulatory frameworks: ENTSO-E methodologies (TYNDP, ERAA, IoSN), CBAM, EU Green Deal targets, and network development processes.
    • Proficiency in R for statistical analysis or simulation post-processing.
    • Familiarity with Git or other version control systems.
    • Ability to translate customer requirements into actionable modelling tasks and deliverables.
    • Experience in applying AI tools to support programming and analytical work.
    • Strong communication and teamwork skills, with the ability to meet deadlines and work effectively in multi-stakeholder project environments.
  • 17 views · 2 applications · 2d

    Power System Modelling Engineer / Software Developer

    Full Remote · Worldwide · 2 years of experience · English - B2

    About the Role

    We are seeking a skilled Power System Modelling Engineer / Software Developer to join our team and contribute to the full lifecycle of power system modelling, simulation, and analysis projects. You will focus primarily on steady-state and dynamic power system simulations, network analysis, grid integration studies, and power flow calculations, while also engaging in data handling, scripting, automation, and toolchain development.

    This role combines deep power systems expertise, analytical thinking, and software development skills to deliver high-quality modelling results and technical insights for our clients across the European energy sector.

    Key Responsibilities

    • Lead and participate in the design, implementation, testing, and analysis of power system models using commercial tools such as PSS/E (Siemens) or DigSilent PowerFactory, and open-source tools such as PyPSA, PyPowSyBl, or pandapower.
    • Perform load flow, contingency analysis, short-circuit calculations, dynamic simulations, and grid stability assessments to support transmission and distribution system planning.
    • Develop and maintain automated workflows, data pipelines, and scripting tools (Python) to support power system modelling and simulation processes.
    • Conduct grid integration studies for renewable energy sources, including hosting capacity analysis, voltage regulation, and protection coordination.
    • Collaborate with client teams (TSOs, DSOs) to ensure accuracy and consistency in network data collection, validation, and model maintenance.
    • Communicate modelling results, methodologies, and assumptions clearly to both technical and non-technical stakeholders.
    • Contribute to technical documentation, reports, and presentations.
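    For orientation, a toy pandapower load-flow case in the spirit of the studies described above; real projects use full client network models, and all parameters here are arbitrary.

```python
# Illustrative sketch: a toy two-bus load-flow case in pandapower.
# Network parameters are arbitrary; real studies use full grid models.
import pandapower as pp

net = pp.create_empty_network()

# Two 110 kV buses connected by a 10 km overhead line.
b1 = pp.create_bus(net, vn_kv=110.0, name="Substation A")
b2 = pp.create_bus(net, vn_kv=110.0, name="Substation B")
pp.create_ext_grid(net, bus=b1, vm_pu=1.02)           # slack / external grid
pp.create_line(net, from_bus=b1, to_bus=b2, length_km=10.0,
               std_type="149-AL1/24-ST1A 110.0")
pp.create_load(net, bus=b2, p_mw=20.0, q_mvar=4.0)    # aggregated demand

# Run an AC load flow and inspect voltages and line loading.
pp.runpp(net)
print(net.res_bus[["vm_pu", "va_degree"]])
print(net.res_line[["loading_percent", "pl_mw"]])
```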

    Required Qualifications & Experience

    • 2–7 years of relevant professional experience in power system modelling and simulation.
    • 2+ years of hands-on experience with at least one commercial power system simulation tool (PSS/E, PowerFactory, ETAP, PowerWorld) or open-source equivalent (PyPSA, PyPowSyBl, pandapower).
    • Proficiency in Python for scripting, automation, and power system analysis workflows.
    • Solid understanding of power system fundamentals: load flow, fault analysis, dynamic stability, voltage regulation, and protection systems.
    • Strong skills in data handling, processing, and visualization of network and simulation data.
    • English language proficiency at B2 level or higher.
    • MSc degree in Electrical Engineering, Power Engineering, Power Systems, or related field.

    Desirable Skills & Assets

    • Experience with PSS/E automation (Python/IPLAN) or PowerFactory DPL/Python scripting.
    • Knowledge of CIM/CGMES data exchange standards and network model interoperability.
    • Familiarity with European grid codes, ENTSO-E methodologies, and TSO/DSO operational frameworks.
    • Experience with market modelling tools such as PLEXOS or Antares is a plus.
    • Knowledge of optimization theory and mathematical programming (LP, MILP).
    • Proficiency in R for statistical analysis or simulation post-processing.
    • Familiarity with Git or other version control systems.
    • Ability to translate customer requirements into actionable modelling tasks and deliverables.
    • Experience in applying AI tools to support programming and analytical work.
    • Strong communication and teamwork skills, with the ability to meet deadlines.
  • 33 views · 6 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Intrao Tech is a London-based dev & design studio delivering web, mobile, and brand work for startups and SMBs across UK/EU. Three founders, no middle management, direct client access. We ship fast because we're not drowning in process. We're looking for a Senior Data Engineer to join our team.

    Requirements:

    • 5+ years of commercial Data Engineering experience
    • Data warehousing at scale (Snowflake, Google BigQuery, or Amazon Redshift)
    • ETL/ELT architecture with DBT (advanced patterns: incremental models, macros, packages)
    • Data ingestion and CDC pipelines (Airbyte)
    • Workflow orchestration (Apache Airflow)
    • Data visualization (Tableau, Metabase)
    • Advanced SQL and data modeling (star schema, Data Vault, dimensional modeling)
    • Python for data pipelines and tooling
    • Containerization (Docker) and IaC (Terraform) for data platform components
    • English: B2+
       

    Nice to have:

    • Multi-warehouse experience (Snowflake + BigQuery + Redshift)
    • CI/CD for data infrastructure (GitHub Actions, GitLab CI)
    • Infrastructure as Code (Terraform)
    • Cloud platform expertise (AWS, GCP) at production scale
    • Data quality and testing frameworks (Great Expectations, DBT tests)
    • Cost optimization for cloud data platforms
    • Kubernetes for data workloads


    What you'll do:

    • Design and implement data ingestion pipelines using Airbyte, including CDC and incremental syncs
    • Architect DBT projects: modeling strategy, testing frameworks, documentation
    • Design and optimize data warehouse structures across Snowflake, BigQuery, or Redshift
    • Build and manage complex Airflow DAGs for end-to-end orchestration
    • Set up dashboards, data layers, and self-service analytics in Tableau and Metabase
    • Tune warehouse performance and optimize cloud data costs
    • Collaborate directly with clients on data architecture and requirements
    • Maintain CI/CD pipelines for data infrastructure deployment
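    A minimal sketch of the orchestration pattern implied by this stack: a daily Airflow DAG chaining ingestion, dbt transformations, and dbt tests. Assumes Airflow 2.4+; the commands, paths, and trigger script are hypothetical placeholders (a real setup might use the Airbyte provider's operator instead).

```python
# Illustrative daily ELT orchestration sketch. Assumes Airflow 2.4+.
# Commands, connection names, and paths are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2},
) as dag:

    # Ingestion step; in practice this might call the Airbyte API or a
    # provider operator rather than a local helper script.
    ingest = BashOperator(
        task_id="trigger_airbyte_sync",
        bash_command="python /opt/pipelines/trigger_airbyte_sync.py",  # placeholder
    )

    # Transformations: run the dbt project against the warehouse.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )

    # Post-load tests defined in the dbt project.
    test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )

    ingest >> transform >> test
```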


    We offer:

    • Full remote
    • Paid vacation and sick leave
    • Competitive rate
    • Direct communication, no bureaucracy
  • 44 views · 9 applications · 3d

    Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Intrao Tech is a London-based dev & design studio delivering web, mobile, and brand work for startups and SMBs across UK/EU. Three founders, no middle management, direct client access. We ship fast because we're not drowning in process. We're looking for a Middle Data Engineer to join our team.

    Requirements:

    • 3+ years of commercial Data Engineering experience
    • Data warehousing (Snowflake, Google BigQuery, or Amazon Redshift)
    • ETL/ELT with DBT
    • Data ingestion tools (Airbyte)
    • Workflow orchestration (Apache Airflow)
    • Data visualization (Tableau, Metabase)
    • Strong SQL skills
    • Python for data pipelines and tooling
    • English: B2+
       

    Nice to have:

    • CI/CD for data infrastructure (GitHub Actions, GitLab CI)
    • Containerization (Docker) and IaC (Terraform) for data platform components
    • Infrastructure as Code (Terraform)
    • Cloud platforms (AWS, GCP)
    • Data quality and testing frameworks (Great Expectations, DBT tests)


    What you'll do:

    • Ingest data from various sources using Airbyte, configure connectors and sync schedules
    • Build and maintain DBT models, transformations, and tests
    • Write and optimize SQL across Snowflake, BigQuery, or Redshift
    • Create and manage Airflow DAGs for pipeline orchestration
    • Build dashboards and reporting layers in Tableau and Metabase
    • Monitor data quality and troubleshoot pipeline failures
    • Support CI/CD and containerization for data workflows
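    As a small illustration of the data-quality monitoring mentioned above, a lightweight pandas pre-load check of the kind a pipeline might run before publishing a table; thresholds and column names are assumptions.

```python
# Lightweight pre-load data-quality checks (illustrative sketch).
# Column names and thresholds are assumptions for the example.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Collect simple health metrics for a batch before it is loaded."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_rates": df.isna().mean().round(4).to_dict(),
    }

def assert_batch_ok(report: dict, max_null_rate: float = 0.05) -> None:
    """Fail the run loudly instead of loading a bad batch downstream."""
    if report["rows"] == 0:
        raise ValueError("empty batch")
    if report["duplicate_keys"] > 0:
        raise ValueError(f"{report['duplicate_keys']} duplicate keys found")
    worst = max(report["null_rates"].values())
    if worst > max_null_rate:
        raise ValueError(f"null rate {worst:.2%} exceeds {max_null_rate:.2%}")

if __name__ == "__main__":
    batch = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 5.5, 7.5]})
    report = quality_report(batch, key="order_id")
    assert_batch_ok(report)       # raises if the batch fails any check
    print("batch OK:", report)
```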


    We offer:

    • Full remote
    • Paid vacation and sick leave
    • Competitive rate
    • Direct communication, no bureaucracy
  • 39 views · 4 applications · 3d

    Data Engineer

    Full Remote · Worldwide · 4 years of experience · English - B2

    We have two positions: one in the US, the other in the UK.

     

    The first one is more focused on reporting.

     

    The project is a cloud-based analytics platform designed for commercial real estate. It provides tools for data analysis, portfolio management, financial insights, and lease tracking, helping owners, property managers, and brokers make informed, data-driven decisions.

    Position Requirements
    PowerBI skills:
    • Able to understand the data sources and relevant data for analysis
    • Design and refine data models, familiarity with a dimensional model
    • Develop interactive reports and dashboards
    • Knowledge of DAX

    Azure and DB skills:
    • Proficiency in ETL/ELT design, development and support
    • Strong hands-on experience in Azure Data Factory
    • Experience in Azure Functions
    • Stored Procedures writing and optimization
    • Telerik .NET Reporting experience (Nice to have)

    Responsibilities
    Continue improving existing data reporting tools. List of existing integrations (where data comes from):
    - Procore
    - DealPath
    - Yardi
    - MRI
    - JDE
    - VTS
    - OneSite
    - CoStar
    - Argus
    - Salesforce
    - RealPage

    Nice to have
    Basic Python skills

     

    The second one is more focused on data modelling.

     

    Requirements
    Data Integration/ETL (Azure Data Factory, Databricks)
    SQL Server (T-SQL, stored procedures)
    Dimensional Data Modelling (Kimball/star schema)
    Power BI (DAX, data visualisation) 

  • 22 views · 0 applications · 3d

    Big Data Engineer to $8000

    Full Remote · Bulgaria, Poland, Romania · 6 years of experience · English - B2

    Who We Are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.  

     

    About the Product:

    The product is an enterprise-grade digital experience platform that provides real-time visibility into system performance, application stability, and end-user experience across on-premises, virtual, and cloud environments. It ingests large volumes of telemetry from distributed agents on employee devices and infrastructure, processes and enriches data through streaming pipelines, detects anomalies, and stores analytical data for monitoring and reporting. The platform serves a global customer base with high throughput and strict requirements for security, correctness, and availability. Rapid adoption has driven significant year-over-year growth and demand from large, distributed teams seeking to secure and stabilize digital environments without added complexity.

     

    About the Role:

    This is a true Big Data engineering role focused on designing and building real-time data pipelines that operate at scale in production environments serving real customers. You will join a senior, cross-functional platform team responsible for the end-to-end data flow: ingestion, processing, enrichment, anomaly detection, and storage. You will own both architecture and delivery, collaborating with Product Managers to translate requirements into robust, scalable solutions and defining guardrails for data usage, cost control, and tenant isolation. The platform is evolving from distributed, product-specific flows to a centralized, multi-region, highly observable system designed for rapid growth, advanced analytics, and future AI-driven capabilities. Strong ownership, deep technical expertise, and a clean-code mindset are essential.

     

    Key Responsibilities: 

    • Design, build, and maintain high-throughput, low-latency data pipelines handling large volumes of telemetry.
    • Develop real-time streaming solutions using Kafka and modern stream-processing frameworks (Flink, Spark, Beam, etc.).
    • Contribute to the architecture and evolution of a large-scale, distributed, multi-region data platform.
    • Ensure data reliability, fault tolerance, observability, and performance in production environments.
    • Collaborate with Product Managers to define requirements and translate them into scalable, safe technical solutions.
    • Define and enforce guardrails for data usage, cost optimization, and tenant isolation within a shared platform.
    • Participate actively in system monitoring, troubleshooting incidents, and optimizing pipeline performance.
    • Own end-to-end delivery: design, implementation, testing, deployment, and monitoring of data platform components.
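    A condensed sketch of the kind of streaming pipeline described here: Spark Structured Streaming reading telemetry from Kafka and producing per-device aggregates. Broker, topic, schema, and sink are assumptions; the production system may well use a different framework (e.g. Flink).

```python
# Illustrative sketch: read telemetry events from Kafka with Spark Structured
# Streaming, parse them, and aggregate per device per minute.
# Broker address, topic, schema, and checkpoint path are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("telemetry_stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")    # placeholder broker
    .option("subscribe", "telemetry.events")            # placeholder topic
    .load()
)

# Kafka delivers bytes; decode the value column and apply the JSON schema.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
)

# One-minute aggregates with a watermark to bound state for late events.
per_minute = (
    events.withWatermark("event_time", "5 minutes")
          .groupBy(F.window("event_time", "1 minute"), "device_id", "metric")
          .agg(F.avg("value").alias("avg_value"), F.count("*").alias("events"))
)

query = (
    per_minute.writeStream
    .outputMode("update")
    .format("console")                                  # real jobs write to a sink
    .option("checkpointLocation", "/tmp/checkpoints/telemetry")
    .start()
)
query.awaitTermination()
```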

     

    Required Competence and Skills:

    • 5+ years of hands-on experience in Big Data or large-scale data engineering roles.
    • Strong programming skills in Java or Python, with willingness to adopt Java and frameworks like Vert.x or Spring.
    • Proven track record of building and operating production-grade data pipelines at scale.
    • Solid knowledge of streaming technologies such as Kafka, Kafka Streams, Flink, Spark, or Apache Beam.
    • Experience with cloud platforms (AWS, Azure, or GCP) and designing distributed, multi-region systems.
    • Deep understanding of production concerns: availability, data loss prevention, latency, and observability.
    • Hands-on experience with data stores such as ClickHouse, PostgreSQL, MySQL, Redis, or equivalents.
    • Strong system design skills, able to reason about trade-offs, scalability challenges, and cost efficiency.
    • Clean code mindset, solid OOP principles, and familiarity with design patterns.
    • Experience with AI-first development tools (e.g., GitHub Copilot, Cursor) is a plus.

     

    Nice to have:

    • Experience designing and operating globally distributed, multi-region data platforms.
    • Background in real-time analytics, enrichment, or anomaly detection pipelines.
    • Exposure to cost-aware data architectures and usage guardrails.
    • Experience in platform or infrastructure teams serving multiple products.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of the country you are based in).

    We provide full accounting and legal support in all countries in which we operate.

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 24 views · 2 applications · 3d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · English - B2

    Description

    We are a global audience and location intelligence company that helps marketers connect the digital and physical world. We provide data-driven solutions to enhance marketing campaigns by leveraging location and audience data to reveal consumer behavior and enable more precise targeting and measurement. We work on high-end, high-performance, high-throughput systems for in-time analysis of data for autonomous driving and other big data applications, e.g. e-commerce.

     

    Requirements

    • You have 4+ years of experience in a similar position.
    • You have significant experience with Python. Familiarity with Java or Scala is a plus.
    • Hands-on experience building scalable solutions in AWS.
    • Proficiency in NoSQL and SQL databases and in high-throughput data-related architecture and technologies (e.g. Kafka, Spark, Hadoop, MongoDB, AWS Batch, AWS Glue, Athena, Airflow, dbt).
    • Excellent SQL and data transformation skills.
    • Excellent written and verbal communication skills with an ability to simplify complex technical information.
    • Experience guiding and mentoring junior team members in a collaborative environment.

     

    Job responsibilities

    • Work in a self-organised agile team with a high level of autonomy, where you will actively shape your team's culture.
    • Design, build, and standardise privacy-first big data architectures, large-scale data pipelines, and advanced analytics solutions in AWS.
    • Develop complex integrations with third-party partners, transferring terabytes of data.
    • Align with other Data experts on data (analytics) engineering best practices and standards, and introduce those standards and data engineering expertise to the team in order to enhance existing data pipelines and build new ones.
    • Successfully partner up with the Product team to constantly develop further and improve our platform features.
  • 103 views · 9 applications · 3d

    Junior Data Engineer (Python)

    Full Remote · Ukraine · 1 year of experience · English - B2

    Description

    Customer is one of the biggest companies on the home entertainment consumer electronics market and strives to provide their clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer's SVOD portfolio.

     

    Requirements

    – 1+ years of intermediate to advanced SQL

    – 1+ years of Python development (intermediate level is fine: Pandas, Numpy, boto3, seaborn, requests, unittest)

    – Experience building ETLs

    – Experience with data tools (e.g. Airflow, Grafana, AWS Glue, AWS Athena)

    – Excellent understanding of database design

    – Cloud experience (AWS S3, Lambda, or alternatives)

    – Agile SDLC knowledge
    – Detail oriented
    – Data-focused
    – Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
    – An ability and interest in working in a fast-paced and rapidly changing environment
    – Self-driven, with the ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:
    – Understanding of basic SVOD store purchase workflows
    – Background in supporting data scientists in conducting data analysis / modelling to support business decision making

    – Experience in supervising subordinate staff

     

    Job responsibilities

    – Data analysis, auditing, statistical analysis
    – ETL buildouts for data reconciliation
    – Creation of automatically-running audit tools
    – Interactive log auditing to look for potential data problems
    – Help in troubleshooting customer support team cases
    – Troubleshooting and analyzing subscriber reporting issues:
          Answer management questions related to subscriber count trends
          App purchase workflow issues
          Audit/reconcile store subscriptions vs userdb
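    As a small illustration of the audit/reconciliation work listed above, a pandas sketch that compares store subscription records with the internal user database; the table shapes and column names are invented for the example.

```python
# Illustrative sketch: reconcile store subscription records against the
# internal user database. Table shapes and column names are assumptions.
import pandas as pd

# In practice these would come from Athena/S3 exports and the user DB.
store = pd.DataFrame({
    "subscription_id": ["s1", "s2", "s3"],
    "user_id": ["u1", "u2", "u4"],
    "status": ["active", "active", "cancelled"],
})
userdb = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "subscription_status": ["active", "expired", "active"],
})

merged = store.merge(userdb, on="user_id", how="outer", indicator=True)

# Rows present on only one side, or with conflicting statuses, need auditing.
missing_in_userdb = merged[merged["_merge"] == "left_only"]
missing_in_store = merged[merged["_merge"] == "right_only"]
status_mismatch = merged[
    (merged["_merge"] == "both")
    & (merged["status"] != merged["subscription_status"])
]

print(f"missing in userdb: {len(missing_in_userdb)}")
print(f"missing in store:  {len(missing_in_store)}")
print(f"status mismatches: {len(status_mismatch)}")
```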

  • 22 views · 3 applications · 3d

    Middle Data Engineer (Python)

    Full Remote · Ukraine · 3 years of experience · English - B2

    Description

    Customer is one of the biggest companies on the home entertainment consumer electronics market and strives to provide their clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer's SVOD portfolio.

     

    Requirements

    – 3+ years of intermediate to advanced SQL

    – 3+ years of Python development (intermediate level is fine: Pandas, Numpy, boto3, seaborn, requests, unittest)

    – Experience building ETLs

    – Experience with data tools (e.g. Airflow, Grafana, AWS Glue, AWS Athena)

    – Excellent understanding of database design

    – Cloud experience (AWS S3, Lambda, or alternatives)

    – Agile SDLC knowledge
    – Detail oriented
    – Data-focused
    – Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
    – An ability and interest in working in a fast-paced and rapidly changing environment
    – Self-driven, with the ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:
    – Understanding of basic SVOD store purchase workflows
    – Background in supporting data scientists in conducting data analysis / modelling to support business decision making

    – Experience in supervising subordinate staff

     

    Job responsibilities

    – Data analysis, auditing, statistical analysis
    – ETL buildouts for data reconciliation
    – Creation of automatically-running audit tools
    – Interactive log auditing to look for potential data problems
    – Help in troubleshooting customer support team cases
    – Troubleshooting and analyzing subscriber reporting issues:
          Answer management questions related to subscriber count trends
          App purchase workflow issues
          Audit/reconcile store subscriptions vs userdb

  • 47 views · 5 applications · 3d

    Data Engineer

    Ukraine · Product · 3 years of experience · English - B1 · Ukrainian Product 🇺🇦

    Ready to level up your career?

     

    Playtech's Live unit is looking for an experienced Data Engineer with great communication and creative skills.

     

    Job Description

    Your influential mission. You will...

    • Design, implement and support ETL/ELT processes to move and transform data between various systems
    • Contribute to the development of the analytics platform
    • Collaborate with software engineers, business analysts, and other stakeholders to translate requirements into effective data solutions
    • Implement and support ad-hoc data extraction requests and business reporting
    • Extract and transform raw data to structured data marts

     

    Qualifications

    Components for success. You...

    • Have an understanding of data lake/DWH architecture, OLAP/OLTP
    • Have experience with Airflow or similar orchestrators
    • Have strong understanding of RDBMS, with proven proficiency in SQL, data modeling, and performance tuning for OLAP/OLTP systems
    • Have practical knowledge of Python (1–2 years)

     

    You'll get extra points for...

    • Computer science degree
    • Practical experience with reporting and analytics tools such as Superset, Tableau, Power BI, etc
    • Knowledge of streaming platforms and tools like Kafka, Kafka-Connect, Kafka-Mirror
    • Understanding of database replication, clustering, and distributed systems
    • Experience working with large volumes of data and databases
    • Experience with monitoring, quality control, and data validation

     

    Thrive in a culture that values...

    • Possibility to work with a product company
    • Personalised professional growth
    • Warm and friendly attitude to every specialist
    • Educational possibilities
    • Competitive salary and benefits
    • Medical insurance
    • Fully-equipped cosy office space located in the city centre (Gulliver, “Palats Sportu” metro station)
    • Paid vacation days, sick leaves and national holidays
    • Corporate events and team buildings

     

    LIVE TEAM

    Hi, we are a live casino, and we invite you to join us!

    Our team consists of highly skilled professionals who are focused on high product quality and short feedback cycles. 
    With us, you will be able to contribute to our product, make architectural and technical decisions, and grow as a specialist in your main field of expertise as well as in related areas. 

     

    Playtech is an equal opportunities employer. Our mission is to welcome everyone and create inclusive teams. We celebrate differences and encourage everyone to join us and be themselves at work.

  • 59 views · 1 application · 3d

    Trainee/Junior BI/DB Developer

    Office Work · Ukraine (Lviv) · Product · English - B2

    About us:

    EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.

    But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.

    EveryMatrix is a member of the World Lottery Association (WLA) and European Lotteries Association. In September 2023 it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market leading gaming solutions.

    Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
     

    We are looking for a passionate and dedicated Trainee/Junior BI/DB Developer to join our team in Lviv!
     

    What You'll get to do:

    • Data Processing: Develop real-time data processing and aggregations.
    • Data Warehousing: Create and modify data marts to enhance the data warehouse.
    • Integrations: Manage both internal and external integrations.
    • Reporting: Build and maintain various types of reports to support business decisions.

    Our main stack:

    • Databases: BigQuery, PostgreSQL, SQL
    • ETL: Apache Airflow, Apache NiFi
    • Streaming: Apache Kafka

    What You need to know:
    • Education: Bachelor's or Master's degree in a STEM field.
    • Fundamentals: Solid understanding of Computer Science (Software Engineering, Algorithms, Operating Systems, Networking, etc.).
    • RDBMS Mastery: Strong knowledge of at least one RDBMS (PostgreSQL, MSSQL, Oracle, MySQL), specifically:
      • Database internals
      • Query optimization
      • Indexing
      • Partitioning
    • Data Engineering Experience:
      • Practical experience with ETL/ELT processes.
      • Experience in creating or supporting Data Warehouses.
      • Hands-on experience with at least one enterprise Business Intelligence (BI) platform.
    • Language: English proficiency at an Intermediate level or higher (reading, writing, speaking).

      Nice to have:
    • Cloud Warehousing: Knowledge of Google BigQuery, Azure Synapse Analytics, AWS Redshift, or Snowflake.
    • Coding: Programming skills in Python or Java.
    • Tooling: Experience with Apache Airflow and Apache NiFi.

    Here's what we offer:

    • Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
    • 3 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
    • Hybrid work schedule is available after the first three months of employment, with up to 50 days of work from home per year.
    • Benefit from two Free Fridays each year, limited to one per quarter.
    • Daily catered lunch or monthly lunch allowance.
    • Private Medical Subscription.
    • Access online learning platforms like Udemy for Business, LinkedIn Learning or O'Reilly, and a budget for external training.
    • Gym allowance.
    • Corporate English lessons.
    • Support for New Parents:
      • 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
      • 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.

    Our office perks include on-site massages and frequent team-building activities in various locations.

    At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!

  • 362 views · 17 applications · 3d

    Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · English - B2

    Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project, a cutting-edge data intelligence platform for e-commerce analytics.
    You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, processing, analysis, and integrations.

    If you are passionate about data optimization, system performance, and architecture, we're waiting for your CV!
     

         Requirements:

    • 2+ years of commercial experience with Python.
    • Advanced experience with SQL DBs (optimisations, monitoring, etc.);
    • PostgreSQL β€” must have;
    • Solid understanding of ETL principles (architecture, monitoring, alerting, finding and resolving bottlenecks);
    • Experience with Message brokers: Kafka/ Redis;
    • Experience with Pandas;
    • Familiar with AWS infrastructure (boto3, S3 buckets, etc);
    • Experience working with large volumes of data;
    • Understanding the principles of medallion architecture.
       

         Will Be a Plus:

    • Understanding of NoSQL DBs (Elastic);
    • TimeScaleDB;
    • PySpark;
    • Experience with e-commerce or fin-tech.
       

         Key Responsibilities:

    • Develop and maintain a robust and scalable data processing architecture using Python.
    • Design, optimize, and monitor data pipelines using Kafka and AWS SQS.
    • Implement and optimize ETL processes for various data sources.
    • Manage and optimize SQL and NoSQL databases (PostgreSQL, TimeScaleDB, Elasticsearch).
    • Work with AWS infrastructure to ensure reliability, scalability, and cost efficiency.
    • Proactively identify bottlenecks and suggest technical improvements.
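    A minimal sketch of one hop in the medallion layering mentioned above: reading a raw (bronze) JSON-lines object from S3 with boto3, cleaning it with pandas, and writing a parquet (silver) object back. Bucket and key names are placeholders.

```python
# Illustrative bronze-to-silver sketch. Bucket and key names are placeholders.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
BUCKET = "example-dropship-data"                       # placeholder bucket

# Bronze: raw scraped products as JSON lines.
obj = s3.get_object(Bucket=BUCKET, Key="bronze/products/2024-06-01.jsonl")
raw = pd.read_json(io.BytesIO(obj["Body"].read()), lines=True)

# Silver: typed, deduplicated, with obviously broken rows dropped.
clean = (
    raw.dropna(subset=["product_id", "price"])
       .drop_duplicates(subset=["product_id"])
       .assign(price=lambda d: pd.to_numeric(d["price"], errors="coerce"))
       .dropna(subset=["price"])
)

# Write parquet back to the silver layer (requires pyarrow or fastparquet).
buffer = io.BytesIO()
clean.to_parquet(buffer, index=False)
s3.put_object(Bucket=BUCKET, Key="silver/products/2024-06-01.parquet",
              Body=buffer.getvalue())
```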
       

      We offer:

    • Working in a fast growing company;
    • Great networking opportunities with international clients, challenging tasks;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.