Data Engineer Jobs

  • 18 views · 0 applications · 9d

    Physical Security Infrastructure, Systems Engineer

    Full Remote · EU · Product · 2 years of experience · English - B2

    RISK inc: An International iGaming Company Pushing the Boundaries of Entertainment
    Who We Are:
    An international iGaming company specializing in identifying and fostering the growth of high-potential entertainment markets. With 600+ professionals in 20+ locations, we operate in 10 countries, serving over 300,000 customers.
    Always Pushing the Boundaries? You Already Belong at RISK!
    Our global-scale operations are based on strong internal expertise, analytics, and data research. We have expertise in iGaming operations (sports betting, online casino), digital and affiliate marketing, tech solutions, and data analytics.

    We are looking for a Physical Security Infrastructure & Systems Engineer to join our team.

    Responsibilities:

    • Ensure continuous operation and maintenance of security systems, including CCTV, access control, alarms, and network infrastructure;
    • Participate in the design and implementation of AI video analytics solutions (e.g., BriefCam), PSIM platforms (e.g., IMMIX), and integrations with InCoreSoft;
    • Develop and maintain response scenarios and risk-based logic;
    • Provide technical support for the SOC;
    • Perform quality control of external system integrators;
    • Configure and support Mikrotik routers and VPNs for secure remote site connectivity;
    • Manage server infrastructure and video storage systems, including monitoring and backups;
    • Administer Active Directory for the security team and manage user access across systems;
    • Monitor network performance 24/7 and respond to technical issues or security incidents;
    • Participate in the planning, deployment, and testing of security systems at local and remote sites;
    • Maintain equipment inventory and configure Ajax alarm systems as part of integrated security solutions.


    Requirements:

    • 2+ years of experience as a system or network administrator in security or technical infrastructure;
    • Hands-on experience with CCTV / VMS platforms (specific vendors are not mandatory);
    • Solid understanding of SOC / PSIM logic, including incidents, escalations, and operator roles;
    • Practical experience with, or strong understanding of, PSIM platforms (IMMIX experience is a strong advantage but not mandatory);
    • Practical experience with, or strong understanding of, video analytics, behavior detection, and object tracking;
    • Clear understanding of AI as an assistive tool, not an autonomous "autopilot" system;
    • Understanding of REST APIs / Webhooks at an integration level;
    • Experience working with external vendors and system integrators;
    • Linux (basic level: services, logs, networking);
    • Experience in network diagnostics and coordination with ISPs;
    • Strong problem-solving skills and a proactive, security-oriented mindset;
    • Good written and verbal communication skills in English;
    • Willingness to travel occasionally for system installations and technical support.

      Our Benefit Cafeteria is Packed with Goodies:
      - Children Allowance
      - Mental Health Support
      - Sport Activities
      - Language Courses
      - Automotive Services
      - Veterinary Services
      - Home Office Setup Assistance
      - Dental Services
      - Books and Stationery
      - Training Compensation
      - And yes, even Massage!
  • 27 views · 1 application · 9d

    Senior Data Engineer

    Full Remote · Ukraine · Product · 5 years of experience · English - B2

    About the job:

    We are an innovative AI-driven construction intelligence startup, committed to transforming the construction industry with cutting-edge technology. Our mission is to enhance the efficiency, safety, and productivity of construction projects through intelligent solutions.
     

    We're hiring a hands-on Senior Data Engineer who wants to build data products that move the needle in the physical world. Your work will help construction professionals make better, data-backed decisions every day. You'll be part of a high-performing engineering team based in Tel Aviv.

    Responsibilities:

    • Lead the design, development, and ownership of scalable data pipelines (ETL/ELT) that power analytics, product features, and downstream consumption.
    • Collaborate closely with Product, Data Science, Data Analytics, and full-stack/platform teams to deliver data solutions that serve product and business needs.
    • Build and optimize data workflows using Databricks, Spark (PySpark, SQL), Kafka, and AWS-based tooling (see the sketch after this list).
    • Implement and manage data architectures that support both real-time and batch processing, including streaming, storage, and processing layers.
    • Develop, integrate, and maintain data connectors and ingestion pipelines from multiple sources.
    • Manage the deployment, scaling, and performance of data infrastructure and clusters, including Databricks, Spark on Kubernetes, Kafka, and AWS services.
    • Use Terraform (and similar tools) to manage infrastructure-as-code for data platforms.
    • Model and prepare data for analytics, BI, and product-facing use cases, ensuring high performance and reliability.
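
    To make the Databricks/Spark/Kafka item above concrete, here is a minimal PySpark Structured Streaming sketch of a Kafka-to-Delta pipeline. It is illustrative only: the broker address, topic, schema, and storage paths are placeholders rather than details from this posting, and it assumes the Kafka and Delta Lake connectors available on a Databricks cluster.

    # Minimal sketch: read JSON events from Kafka, aggregate hourly, write to a Delta table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("site-telemetry-ingest").getOrCreate()

    schema = StructType([
        StructField("site_id", StringType()),
        StructField("metric", StringType()),
        StructField("value", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
           .option("subscribe", "site-telemetry")               # placeholder topic
           .load())

    # Kafka delivers bytes; cast to string and parse the JSON payload.
    events = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(F.from_json("json", schema).alias("e"))
              .select("e.*"))

    # Hourly averages per site and metric, tolerating two hours of late data.
    hourly = (events
              .withWatermark("event_time", "2 hours")
              .groupBy(F.window("event_time", "1 hour"), "site_id", "metric")
              .agg(F.avg("value").alias("avg_value")))

    query = (hourly.writeStream
             .format("delta")
             .outputMode("append")
             .option("checkpointLocation", "s3://example-bucket/checkpoints/hourly")  # placeholder path
             .start("s3://example-bucket/tables/telemetry_hourly"))                   # placeholder path
    query.awaitTermination()  # keep the streaming job running when launched as a standalone script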


    Requirements:
     

    • 8+ years of hands-on experience working with large-scale data systems in production environments.
    • Proven experience designing, deploying, and integrating big data frameworks - PySpark, Kafka, Databricks. 
    • Strong expertise in Python and SQL, with experience building and optimizing batch and streaming data pipelines.
    • Experience with AWS cloud services and Linux-based environments.
    • Background in building ETL/ELT pipelines and orchestrating workflows end-to-end.
    • Proven experience designing, deploying, and operating data infrastructure / data platforms.
    • Mandatory hands-on experience with Apache Spark in production environments. 
    • Mandatory experience running Spark on Kubernetes.
    • Mandatory hands-on experience with Apache Kafka, including Kafka connectors.
    • Understanding of event-driven and domain-driven design principles in modern data architectures.
    • Familiarity with infrastructure-as-code tools (e.g., Terraform) is an advantage.
    • Experience supporting machine learning or algorithmic applications is an advantage.
    • BSc or higher in Computer Science, Engineering, Mathematics, or another quantitative field.
  • 93 views · 7 applications · 9d

    Data Engineer

    Full Remote · Worldwide · 6 years of experience · English - B2

    Business

    Digital marketing agency that specializes in enhancing the digital presence and growth of automotive dealerships nationwide. Initially, they focused on supporting their service teams through automation tools and have since expanded to develop and sell white-label, productized services to their partners. Their mission is to deliver high-quality tools that meet the needs of their partners, leveraging their expertise in digital marketing to drive significant improvements and efficiencies in the automotive sector. The agency places a strong emphasis on building scalable, functional tools and fostering team growth through continuous improvement and collaboration.
     

     

    Requirements

    The client is seeking a skilled Data Engineer with experience in architecting scalable data solutions using Amazon Web Services (AWS). This role will focus on building and maintaining data infrastructure to support their customer-facing tools, ensuring data is efficiently stored, processed, and available for analytics.

    Overlap with EST is required
     

    Key Responsibilities:

    • Data Architecture Design: Architect, develop, and maintain scalable and robust data architectures using AWS services such as Amazon S3, RDS, Redshift, and EMR.
    • ETL Processes: Design and implement ETL processes using AWS Glue, ensuring data is clean, reliable, and available for analysis.
    • Data Pipeline Management: Develop and manage data pipelines using tools like Amazon Kinesis and AWS Lambda for real-time and batch processing (see the sketch after this list).
    • Database Management: Optimize databases using Amazon RDS and Redshift, ensuring data integrity, security, and performance.
    • Data Lake Management: Utilize Amazon S3 for building and managing data lakes to store structured and unstructured data.
    • Collaboration: Work closely with software engineers, data scientists, and product managers to ensure data solutions meet business requirements.
    • Security and Compliance: Implement security best practices and ensure compliance with industry standards and regulations.
    • Monitoring and Optimization: Implement monitoring tools and strategies to optimize the performance of data systems.
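
    As a small illustration of the Kinesis/Lambda item above, here is a hedged sketch of a Lambda handler that lands streaming records in S3. The event structure follows the standard Kinesis trigger payload; the bucket name, key prefix, and record contents are hypothetical.

    # Minimal sketch: decode Kinesis records and write them to S3 as newline-delimited JSON.
    import base64
    import json

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        rows = []
        for record in event["Records"]:
            # Kinesis delivers payloads base64-encoded inside the trigger event.
            payload = base64.b64decode(record["kinesis"]["data"])
            rows.append(json.loads(payload))

        if rows:
            key = f"raw/dealership-events/{context.aws_request_id}.json"  # placeholder prefix
            body = "\n".join(json.dumps(r) for r in rows)
            s3.put_object(Bucket="example-data-lake", Key=key, Body=body.encode("utf-8"))

        return {"processed": len(rows)}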

       

    Qualifications:

    • Education: Bachelor's degree in Computer Science, Engineering, or a related field.
    • Experience: Minimum of 5 years in data engineering roles with experience in AWS services. Proven track record in building scalable data architectures.
    • Technical Skills: Proficiency in AWS tools (e.g., S3, RDS, Redshift, Glue), data processing frameworks (e.g., Apache Spark, Hadoop), and programming languages (e.g., Python, SQL).
    • Analytical Skills: Strong problem-solving abilities and a data-driven mindset. Experience with performance tuning and data optimization.
    • Communication Skills: Excellent verbal and written communication skills. Ability to collaborate effectively with cross-functional teams.
    • Attention to Detail: High attention to detail and a commitment to delivering high-quality solutions.

     

    Additional requirements

    • Certifications: AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect, or similar certifications.
    • Domain Knowledge: Experience in the automotive industry or digital marketing sector.

    • Additional Skills: Familiarity with machine learning models and integration with customer-facing applications.

     

  • 38 views · 2 applications · 9d

    Middle Data Engineer

    Full Remote · Croatia, Poland, Romania, Slovakia, Ukraine · 4 years of experience · English - B2

    Description:

    Our Client is a Fortune 500 company and is one of the biggest global manufacturing companies operating in the fields of industrial systems, worker safety, health care, and consumer goods. The company is dedicated to creating the technology and products that advance every business, improve every home, and enhance every life.

    Minimum Requirements:

    • Minimum of 4 years of experience in SQL and Python programming languages, specifically for data engineering tasks.
    • Proficiency in working with cloud technologies such as Azure or AWS.
    • Experience with Spark and Databricks or similar big data processing and analytics platforms
    • Experience working with large data environments, including data processing, data integration, and data warehousing.
    • Experience with data quality assessment and improvement techniques, including data profiling, data cleansing, and data validation.
    • Familiarity with data lakes and their associated technologies, such as Azure Data Lake Storage, AWS S3, or Delta Lake, for scalable and cost-effective data storage and management.
    • Experience with NoSQL databases, such as MongoDB or Cosmos, for handling unstructured and semi-structured data.
    • Fluent English

    Additional Skillset (Nice to Have):

    • Familiarity with Agile and Scrum methodologies, including working with Azure DevOps and Jira for project management.
    • Knowledge of DevOps methodologies and practices, including continuous integration and continuous deployment (CI/CD).
    • Experience with Azure Data Factory or similar data integration tools for orchestrating and automating data pipelines.
    • Ability to build and maintain APIs for data integration and consumption.
    • Experience with data backends for software platforms, including database design, optimization, and performance tuning.

    Job responsibilities:

    • Responsible for the building, deployment, and maintenance of mission-critical analytics solutions that process data quickly at big-data scales
    • Responsible for the design and implementation of data integration pipelines
    • Contributes design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and loading across multiple data storages
    • Take part in the full cycle of feature development (requirements analysis, decomposition, design, etc)
    • Contribute to the overall quality of development services through brainstorming, unit testing, and proactive offering of different improvements and innovations.
  • 23 views · 2 applications · 9d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · English - B2

    Project Description

    A new long-term project for an Energy client, where we will create an application with integrated AI for comprehensive data analysis. You will be working closely with the customer's stakeholders as part of the Scrum team.

     

     

    Technical Requirements (Must Have):
    Python – 5+ years, production code (not just notebooks)
    SQL / PostgreSQL – 5+ years, complex queries, optimization
    Apache Kafka – event streaming, consumers, producers
    pandas / numpy – expert level, large datasets (1M+ rows)
    scikit-learn – clustering algorithms, metrics, hyperparameter tuning (see the sketch after this list)
    ETL Pipelines – 4+ years building production data pipelines
    Text Processing – tokenization, cleaning, encoding handling
    Git – branching, PRs, code reviews
    English – B2+ written and verbal
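
    As a minimal sketch of the text-processing and clustering skills listed above (illustrative only: the sample strings and the choice of k are hypothetical, not project data):

    # Minimal sketch: clean short text records, vectorise with TF-IDF, cluster, and score.
    import re
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    df = pd.DataFrame({"text": [
        "transformer overheating alarm at substation A",
        "substation A transformer temperature high",
        "scheduled maintenance completed on feeder 12",
        "feeder 12 maintenance report filed",
    ]})

    def clean(s: str) -> str:
        # Lowercase and strip non-alphanumeric characters (simple normalisation).
        return re.sub(r"[^a-z0-9\s]", " ", s.lower()).strip()

    X = TfidfVectorizer(preprocessor=clean, stop_words="english").fit_transform(df["text"])

    km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
    df["cluster"] = km.labels_

    print(df)
    print("silhouette:", silhouette_score(X, km.labels_))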

     

    Would Be a Plus
    Sentence-BERT / Transformers (HuggingFace ecosystem)
    MLflow or similar ML experiment tracking
    Topic Modeling (LDA, NMF)
    DBSCAN / Hierarchical Clustering
    FastAPI / Flask
    Azure DevOps
    Kafka Streams / ksqlDB
    BI & Visualization tools (Power BI, Tableau, Grafana, Apache Superset, Plotly/Dash, or similar)

    Nice to Have
    Energy / Utility / SCADA domain experience
    Time-series analysis
    Prometheus / Grafana monitoring
    On-premise ML infrastructure (no cloud APIs)
    Data modeling / dimensional modeling
    dbt (data build tool)

     

     

    Job Responsibilities

    Strong problem-solving and follow-up skills; must be proactive and take initiative
    Professionalism and ability to maintain the highest level of confidentiality
    Create robust code and translate business logic into project requirements
    Develop code using development best practices, and an emphasis on security best practices
    Leverage technologies to support business needs to attain high reusability and maintainability of current and newly developed systems
    Provide system design recommendations based on technical requirements
    Work independently on development tasks with a minimal amount of supervision

  • 92 views · 11 applications · 9d

    Data Engineer

    Full Remote · Worldwide · Product · 2 years of experience · English - None

    For our Partner, we are looking for a Data Engineer.

     

    Location: Remote / Flexible
    Format: Full-time

     

    Role Overview

    We are seeking a proactive and motivated Data Engineer who will design, develop, and maintain scalable data pipelines, build our DWH, integrate with external APIs, and support the data needs of our product, affiliate, and analytics teams.

    You will join a dynamic environment where datasets refresh hourly or daily, and data accuracy directly influences business results.

    This is a role for someone who likes autonomy, fast execution, and building systems from scratch.

     

    What You'll Do

    • Build and maintain our internal DWH based on PostgreSQL.
    • Work with medium and large datasets that require hourly/daily updates.
    • Work with DWH components: AWS (S3, Athena), PostgreSQL, BigQuery.
    • Collect data from Kafka, Google Analytics, partner APIs, and third-party systems.
    • Implement data quality and integrity automation, ensuring stable pipelines.
    • Develop and maintain ETL/ELT processes.
    • Support saving and productionizing ML models developed by other teams.
    • Create and support technical documentation for data workflows and integrations.

       

    What We Expect From You

    Professional Experience

    • 2+ years of experience as a Data Engineer.
    • Strong Python development skills (clean, maintainable, high-performance code).
    • Advanced SQL expertise and deep understanding of PostgreSQL or similar databases.
    • Hands-on experience designing & implementing RESTful APIs (Aiohttp, Flask, FastAPI).
    • Practical work with relational databases: PostgreSQL, MS SQL, MySQL.
    • Experience with Docker/Kubernetes or similar containerization tools.
    • Basic knowledge of DBT, Databricks, Snowflake, Kubernetes, Airflow.
    • Solid understanding of OOP, data structures, algorithms, and computational complexity.

       

    What We Appreciate

    • Strong ownership mentality β€” you can independently drive tasks to completion.
    • Excellent analytical and problem-solving skills.
    • Attention to detail and reliability.
    • Clear communication and ability to collaborate with cross-functional teams.

       

    What We Offer

    • Competitive compensation package (details discussed individually).
    • Work in a fast-scaling iGaming company with strong technical and marketing teams.
    • Ability to influence architecture, tools, and internal processes from early stages.
    • Career growth opportunities 
    • Flexibility: remote-first culture and supportive environment.
    • Continuous learning: access to internal know-how, mentorship, and new tools.

       

    About Company

    Our partner is a fast-growing iGaming operator with a strong portfolio of high-performing casino & betting brands across LATAM, Tier-1, and global GEOs.
    We work with large-scale data, high-load systems, and complex tracking environments that power our marketing, product, and analytics teams.

    Our team values ownership, transparency, speed, and smart decision-making. We are building a modern data infrastructure that supports the entire business, and we're looking for a Data Engineer eager to make a real impact.

     

    Waiting for your CV 😉

  • 31 views · 7 applications · 9d

    Energy System Analyst (Market) / Software Developer

    Full Remote · Worldwide · 2 years of experience · English - B2

    About the Role

    We are seeking a skilled Energy System Analyst / Software Developer to join our team and contribute to the full lifecycle of energy system modelling and simulation projects. You will focus primarily on market modelling of European energy markets, power system optimization, capacity expansion planning, cost-benefit analysis, and economic assessment of infrastructure investments, while also engaging in data handling, visualization, and automation development.

    This role combines technical expertise in energy market fundamentals, analytical thinking, and software development skills to deliver high-quality modelling results and insights for our clients across the European energy sector, including TSOs, regional coordination bodies, and European institutions.

    Key Responsibilities

    • Lead and participate in the design, implementation, testing, and analysis of energy market models using tools such as Antares Simulator (including Antares Xpansion for capacity expansion studies).
    • Develop and maintain data management, automation, and market modelling pipelines to support large-scale European energy system studies (e.g., TYNDP, IoSN, regional adequacy assessments).
    • Conduct economic evaluations, cost-benefit analyses, and scenario-based simulations to support infrastructure investment decisions and cross-border interconnector assessments.
    • Prepare and maintain input datasets for market models, including demand profiles, generation portfolios, fuel prices, renewable time series, and transmission capacities.
    • Collaborate with client teams (TSOs, regional groups, European bodies) to ensure accuracy and consistency in data collection, validation, and central dataset maintenance.
    • Communicate modelling results, methodologies, and assumptions clearly to both technical and non-technical stakeholders through reports, presentations, and steering group meetings.
    • Contribute to technical documentation, methodology descriptions, and quality assurance of project deliverables.

    Required Qualifications & Experience

    • 2–7 years of relevant professional experience in energy system modelling, market analysis, or power system planning.
    • 2+ years of proven experience in market modelling and economic analysis of energy systems.
    • Proficiency in Python for modelling, simulation, data processing, and automation.
    • Solid understanding of European energy market fundamentals: merit order dispatch, zonal pricing, cross-border exchanges, adequacy assessment, and capacity mechanisms.
    • Strong skills in data handling, processing, and visualization of large-scale energy datasets.
    • English language proficiency at B2 level or higher.
    • MSc degree in Electrical Engineering, Power Engineering, Energy Economics, Computer Science, or related field.

    Desirable Skills & Assets

    • Experience with PLEXOS for energy market modelling, production cost simulation, or capacity expansion planning.
    • Knowledge of open-source energy modelling tools such as PyPSA, Calliope, or OSeMOSYS, and the ability to benchmark or cross-validate results across platforms.
    • Familiarity with Antares Simulator and Antares Xpansion for adequacy studies and optimal investment modelling.
    • Knowledge of optimization theory and mathematical programming (LP, MILP) as applied to energy system planning.
    • Understanding of European regulatory frameworks: ENTSO-E methodologies (TYNDP, ERAA, IoSN), CBAM, EU Green Deal targets, and network development processes.
    • Proficiency in R for statistical analysis or simulation post-processing.
    • Familiarity with Git or other version control systems.
    • Ability to translate customer requirements into actionable modelling tasks and deliverables.
    • Experience in applying AI tools to support programming and analytical work.
    • Strong communication and teamwork skills, with the ability to meet deadlines and work effectively in multi-stakeholder project environments.
  • 27 views · 4 applications · 9d

    Power System Modelling Engineer / Software Developer

    Full Remote · Worldwide · 2 years of experience · English - B2

    About the Role

    We are seeking a skilled Power System Modelling Engineer / Software Developer to join our team and contribute to the full lifecycle of power system modelling, simulation, and analysis projects. You will focus primarily on steady-state and dynamic power system simulations, network analysis, grid integration studies, and power flow calculations, while also engaging in data handling, scripting, automation, and toolchain development.

    This role combines deep power systems expertise, analytical thinking, and software development skills to deliver high-quality modelling results and technical insights for our clients across the European energy sector.

    Key Responsibilities

    • Lead and participate in the design, implementation, testing, and analysis of power system models using commercial tools such as PSS/E (Siemens) or DIgSILENT PowerFactory, and open-source tools such as PyPSA, PyPowSyBl, or pandapower.
    • Perform load flow, contingency analysis, short-circuit calculations, dynamic simulations, and grid stability assessments to support transmission and distribution system planning.
    • Develop and maintain automated workflows, data pipelines, and scripting tools (Python) to support power system modelling and simulation processes.
    • Conduct grid integration studies for renewable energy sources, including hosting capacity analysis, voltage regulation, and protection coordination.
    • Collaborate with client teams (TSOs, DSOs) to ensure accuracy and consistency in network data collection, validation, and model maintenance.
    • Communicate modelling results, methodologies, and assumptions clearly to both technical and non-technical stakeholders.
    • Contribute to technical documentation, reports, and presentations.

    Required Qualifications & Experience

    • 2–7 years of relevant professional experience in power system modelling and simulation.
    • 2+ years of hands-on experience with at least one commercial power system simulation tool (PSS/E, PowerFactory, ETAP, PowerWorld) or open-source equivalent (PyPSA, PyPowSyBl, pandapower).
    • Proficiency in Python for scripting, automation, and power system analysis workflows.
    • Solid understanding of power system fundamentals: load flow, fault analysis, dynamic stability, voltage regulation, and protection systems.
    • Strong skills in data handling, processing, and visualization of network and simulation data.
    • English language proficiency at B2 level or higher.
    • MSc degree in Electrical Engineering, Power Engineering, Power Systems, or related field.

    Desirable Skills & Assets

    • Experience with PSS/E automation (Python/IPLAN) or PowerFactory DPL/Python scripting.
    • Knowledge of CIM/CGMES data exchange standards and network model interoperability.
    • Familiarity with European grid codes, ENTSO-E methodologies, and TSO/DSO operational frameworks.
    • Experience with market modelling tools such as PLEXOS or Antares is a plus.
    • Knowledge of optimization theory and mathematical programming (LP, MILP).
    • Proficiency in R for statistical analysis or simulation post-processing.
    • Familiarity with Git or other version control systems.
    • Ability to translate customer requirements into actionable modelling tasks and deliverables.
    • Experience in applying AI tools to support programming and analytical work.
    • Strong communication and teamwork skills, with the ability to meet deadlines.
  • 39 views · 8 applications · 10d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Intrao Tech is a London-based dev & design studio delivering web, mobile, and brand work for startups and SMBs across UK/EU. Three founders, no middle management, direct client access. We ship fast because we're not drowning in process. We're looking for a Senior Data Engineer to join our team.

    Requirements:

    • 5+ years of commercial Data Engineering experience
    • Data warehousing at scale (Snowflake, Google BigQuery, or Amazon Redshift)
    • ETL/ELT architecture with DBT (advanced patterns: incremental models, macros, packages)
    • Data ingestion and CDC pipelines (Airbyte)
    • Workflow orchestration (Apache Airflow)
    • Data visualization (Tableau, Metabase)
    • Advanced SQL and data modeling (star schema, Data Vault, dimensional modeling)
    • Python for data pipelines and tooling
    • Containerization (Docker) and IaC (Terraform) for data platform components
    • English: B2+
       

    Nice to have:

    • Multi-warehouse experience (Snowflake + BigQuery + Redshift)
    • CI/CD for data infrastructure (GitHub Actions, GitLab CI)
    • Infrastructure as Code (Terraform)
    • Cloud platform expertise (AWS, GCP) at production scale
    • Data quality and testing frameworks (Great Expectations, DBT tests)
    • Cost optimization for cloud data platforms
    • Kubernetes for data workloads


    What you'll do:

    • Design and implement data ingestion pipelines using Airbyte, including CDC and incremental syncs
    • Architect DBT projects: modeling strategy, testing frameworks, documentation
    • Design and optimize data warehouse structures across Snowflake, BigQuery, or Redshift
    • Build and manage complex Airflow DAGs for end-to-end orchestration (see the sketch after this list)
    • Set up dashboards, data layers, and self-service analytics in Tableau and Metabase
    • Tune warehouse performance and optimize cloud data costs
    • Collaborate directly with clients on data architecture and requirements
    • Maintain CI/CD pipelines for data infrastructure deployment
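
    As a minimal sketch of the Airbyte-to-DBT orchestration described above (assuming Airflow 2.x with the apache-airflow-providers-airbyte package; the connection IDs, project path, and schedule are placeholders):

    # Minimal sketch: trigger an Airbyte sync, then run and test DBT models.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.providers.airbyte.operators.airbyte import AirbyteTriggerSyncOperator

    with DAG(
        dag_id="daily_elt",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Trigger an existing Airbyte connection (incremental / CDC sync).
        ingest = AirbyteTriggerSyncOperator(
            task_id="airbyte_sync",
            airbyte_conn_id="airbyte_default",
            connection_id="REPLACE_WITH_CONNECTION_ID",  # placeholder
        )

        # Run DBT models and tests against the warehouse.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/project && dbt run --target prod",  # placeholder path
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt/project && dbt test --target prod",  # placeholder path
        )

        ingest >> dbt_run >> dbt_test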


    We offer:

    • Full remote
    • Paid vacation and sick leave
    • Competitive rate
    • Direct communication, no bureaucracy
  • 54 views · 14 applications · 10d

    Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Intrao Tech is a London-based dev & design studio delivering web, mobile, and brand work for startups and SMBs across UK/EU. Three founders, no middle management, direct client access. We ship fast because we're not drowning in process. We're looking for a Middle Data Engineer to join our team.

    Requirements:

    • 3+ years of commercial Data Engineering experience
    • Data warehousing (Snowflake, Google BigQuery, or Amazon Redshift)
    • ETL/ELT with DBT
    • Data ingestion tools (Airbyte)
    • Workflow orchestration (Apache Airflow)
    • Data visualization (Tableau, Metabase)
    • Strong SQL skills
    • Python for data pipelines and tooling
    • English: B2+
       

    Nice to have:

    • CI/CD for data infrastructure (GitHub Actions, GitLab CI)
    • Containerization (Docker) and IaC (Terraform) for data platform components
    • Infrastructure as Code (Terraform)
    • Cloud platforms (AWS, GCP)
    • Data quality and testing frameworks (Great Expectations, DBT tests)


    What you'll do:

    • Ingest data from various sources using Airbyte, configure connectors and sync schedules
    • Build and maintain DBT models, transformations, and tests
    • Write and optimize SQL across Snowflake, BigQuery, or Redshift
    • Create and manage Airflow DAGs for pipeline orchestration
    • Build dashboards and reporting layers in Tableau and Metabase
    • Monitor data quality and troubleshoot pipeline failures
    • Support CI/CD and containerization for data workflows


    We offer:

    • Full remote
    • Paid vacation and sick leave
    • Competitive rate
    • Direct communication, no bureaucracy
  • 51 views · 6 applications · 10d

    Data Engineer

    Full Remote · Worldwide · 4 years of experience · English - B2

    We have two positions: one in the US, the other in the UK.

     

    The first is focused more on reporting.

     

    The project is a cloud-based analytics platform designed for commercial real estate. It provides tools for data analysis, portfolio management, financial insights, and lease tracking, helping owners, property managers, and brokers make informed, data-driven decisions.

    Position Requirements
    Power BI skills:
    • Able to understand the data sources and relevant data for analysis
    • Design and refine data models, familiarity with a dimensional model
    • Develop interactive reports and dashboards
    • Knowledge of DAX

    Azure and DB skills:
    • Proficiency in ETL/ELT design, development and support
    • Strong hands-on experience in Azure Data Factory
    • Experience in Azure Functions
    • Stored Procedures writing and optimization
    • Telerik .NET Reporting experience (Nice to have)

    Responsibilities
    Continue improving existing data reporting tools. List of existing integrations (where data comes from):
    - Procore
    - DealPath
    - Yardi
    - MRI
    - JDE
    - VTS
    - OneSite
    - CoStar
    - Argus
    - Salesforce
    - RealPage

    Nice to have
    Basic Python skills

     

    The second is focused more on data modelling.

     

    Requirements
    Data Integration/ETL (Azure Data Factory, Databricks)
    SQL Server (T-SQL, stored procedures)
    Dimensional Data Modelling (Kimball/star schema)
    Power BI (DAX, data visualisation) 

  • 35 views · 1 application · 10d

    Big Data Engineer to $8000

    Full Remote · Bulgaria, Poland, Romania · 6 years of experience · English - B2

    Who We Are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.  

     

    About the Product:

    The product is an enterprise-grade digital experience platform that provides real-time visibility into system performance, application stability, and end-user experience across on-premises, virtual, and cloud environments. It ingests large volumes of telemetry from distributed agents on employee devices and infrastructure, processes and enriches data through streaming pipelines, detects anomalies, and stores analytical data for monitoring and reporting. The platform serves a global customer base with high throughput and strict requirements for security, correctness, and availability. Rapid adoption has driven significant year-over-year growth and demand from large, distributed teams seeking to secure and stabilize digital environments without added complexity.

     

    About the Role:

    This is a true Big Data engineering role focused on designing and building real-time data pipelines that operate at scale in production environments serving real customers. You will join a senior, cross-functional platform team responsible for the end-to-end data flow: ingestion, processing, enrichment, anomaly detection, and storage. You will own both architecture and delivery, collaborating with Product Managers to translate requirements into robust, scalable solutions and defining guardrails for data usage, cost control, and tenant isolation. The platform is evolving from distributed, product-specific flows to a centralized, multi-region, highly observable system designed for rapid growth, advanced analytics, and future AI-driven capabilities. Strong ownership, deep technical expertise, and a clean-code mindset are essential.

     

    Key Responsibilities: 

    • Design, build, and maintain high-throughput, low-latency data pipelines handling large volumes of telemetry.
    • Develop real-time streaming solutions using Kafka and modern stream-processing frameworks (Flink, Spark, Beam, etc.).
    • Contribute to the architecture and evolution of a large-scale, distributed, multi-region data platform.
    • Ensure data reliability, fault tolerance, observability, and performance in production environments.
    • Collaborate with Product Managers to define requirements and translate them into scalable, safe technical solutions.
    • Define and enforce guardrails for data usage, cost optimization, and tenant isolation within a shared platform.
    • Participate actively in system monitoring, troubleshooting incidents, and optimizing pipeline performance.
    • Own end-to-end delivery: design, implementation, testing, deployment, and monitoring of data platform components.

     

    Required Competence and Skills:

    • 5+ years of hands-on experience in Big Data or large-scale data engineering roles.
    • Strong programming skills in Java or Python, with willingness to adopt Java and frameworks like Vert.x or Spring.
    • Proven track record of building and operating production-grade data pipelines at scale.
    • Solid knowledge of streaming technologies such as Kafka, Kafka Streams, Flink, Spark, or Apache Beam.
    • Experience with cloud platforms (AWS, Azure, or GCP) and designing distributed, multi-region systems.
    • Deep understanding of production concerns: availability, data loss prevention, latency, and observability.
    • Hands-on experience with data stores such as ClickHouse, PostgreSQL, MySQL, Redis, or equivalents.
    • Strong system design skills, able to reason about trade-offs, scalability challenges, and cost efficiency.
    • Clean code mindset, solid OOP principles, and familiarity with design patterns.
    • Experience with AI-first development tools (e.g., GitHub Copilot, Cursor) is a plus.

     

    Nice to have:

    • Experience designing and operating globally distributed, multi-region data platforms.
    • Background in real-time analytics, enrichment, or anomaly detection pipelines.
    • Exposure to cost-aware data architectures and usage guardrails.
    • Experience in platform or infrastructure teams serving multiple products.

     

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus official national holidays of the country you are based in).

    We provide full accounting and legal support in all countries in which we operate.

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 30 views · 3 applications · 10d

    Senior Data Engineer (Python + AWS)

    Full Remote · Ukraine · 4 years of experience · English - B2

    Description

    We are a global audience and location intelligence company that helps marketers connect the digital and physical worlds. We provide data-driven solutions to enhance marketing campaigns by leveraging location and audience data to reveal consumer behavior and enable more precise targeting and measurement. We work on high-end, high-performance, high-throughput systems for the timely analysis of data for autonomous driving and other big data applications, e.g., e-commerce.

     

    Requirements

    • You have 4+ years of experience in a similar position.
    • You have significant experience with Python. Familiarity with Java or Scala is a plus.
    • Hands-on experience building scalable solutions in AWS.
    • Proficiency in NoSQL and SQL databases and in high-throughput data-related architecture and technologies (e.g. Kafka, Spark, Hadoop, MongoDB, AWS Batch, AWS Glue, Athena, Airflow, dbt).
    • Excellent SQL and data transformation skills.
    • Excellent written and verbal communication skills with an ability to simplify complex technical information.
    • Experience guiding and mentoring junior team members in a collaborative environment.

     

    Job responsibilities

    • Work in a self-organised agile team with a high level of autonomy, actively shaping your team's culture.
    • Design, build, and standardise privacy-first big data architectures, large-scale data pipelines, and advanced analytics solutions in AWS.
    • Develop complex integrations with third-party partners, transferring terabytes of data.
    • Align with other Data experts on data (analytics) engineering best practices and standards, and introduce those standards and data engineering expertise to the team in order to enhance existing data pipelines and build new ones.
    • Successfully partner up with the Product team to constantly develop further and improve our platform features.
  • 149 views · 14 applications · 10d

    Junior Data Engineer (Python)

    Full Remote · Ukraine · 1 year of experience · English - B2

    Description

    The Customer is one of the biggest companies in the home entertainment consumer electronics device market and strives to provide their clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer's SVOD portfolio.

     

    Requirements

    – 1+ years of intermediate to advanced SQL

    – 1+ years of Python development (intermediate level is fine: Pandas, Numpy, boto3, seaborn, requests, unittest)

    – Experience building ETLs

    – Experience with data tools (ex.: Airflow, Grafana, AWS Glue, AWS Athena)

    – Excellent understanding of database design

    – Cloud experience (AWS S3, Lambda, or alternatives)

    – Agile SDLC knowledge
    – Detail oriented
    – Data-focused
    – Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
    – An ability and interest in working in a fast-paced and rapidly changing environment
    – Be self-driven and show ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:
    – Understanding of basic SVOD store purchase workflows
    – Background in supporting data scientists in conducting data analysis / modelling to support business decision making

    – Experience in supervising subordinate staff

     

    Job responsibilities

    – Data analysis, auditing, statistical analysis
    – ETL buildouts for data reconciliation (see the sketch after this list)
    – Creation of automatically-running audit tools
    – Interactive log auditing to look for potential data problems
    – Help in troubleshooting customer support team cases
    – Troubleshooting and analyzing subscriber reporting issues:
          Answer management questions related to subscriber count trends
          App purchase workflow issues
          Audit/reconcile store subscriptions vs userdb
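
    As a minimal sketch of the reconciliation-style ETL mentioned above (illustrative only: the bucket, file keys, and column names are hypothetical):

    # Minimal sketch: pull two subscription exports from S3 and flag mismatches.
    from io import BytesIO

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def read_csv_from_s3(bucket: str, key: str) -> pd.DataFrame:
        obj = s3.get_object(Bucket=bucket, Key=key)
        return pd.read_csv(BytesIO(obj["Body"].read()))

    store = read_csv_from_s3("example-bucket", "exports/store_subscriptions.csv")   # placeholder
    userdb = read_csv_from_s3("example-bucket", "exports/userdb_subscriptions.csv")  # placeholder

    # Outer-join on subscription id and keep rows present on only one side.
    merged = store.merge(userdb, on="subscription_id", how="outer", indicator=True)
    mismatches = merged[merged["_merge"] != "both"]

    print(f"{len(mismatches)} subscriptions need reconciliation")
    mismatches.to_csv("reconciliation_report.csv", index=False)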

  • 32 views · 5 applications · 10d

    Middle Data Engineer (Python)

    Full Remote · Ukraine · 3 years of experience · English - B2

    Description

    The Customer is one of the biggest companies in the home entertainment consumer electronics device market and strives to provide their clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer's SVOD portfolio.

     

    Requirements

    – 3+ years of intermediate to advanced SQL

    – 3+ years of Python development (intermediate level is fine: Pandas, Numpy, boto3, seaborn, requests, unittest)

    – Experience building ETLs

    – Experience with data tools (ex.: Airflow, Grafana, AWS Glue, AWS Athena)

    – Excellent understanding of database design

    – Cloud experience (AWS S3, Lambda, or alternatives)

    – Agile SDLC knowledge
    – Detail oriented
    – Data-focused
    – Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams
    – An ability and interest in working in a fast-paced and rapidly changing environment
    – Be self-driven and show ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:
    – Understanding of basic SVOD store purchase workflows
    – Background in supporting data scientists in conducting data analysis / modelling to support business decision making

    – Experience in supervising subordinate staff

     

    Job responsibilities

    – Data analysis, auditing, statistical analysis
    – ETL buildouts for data reconciliation
    – Creation of automatically-running audit tools
    – Interactive log auditing to look for potential data problems
    – Help in troubleshooting customer support team cases
    – Troubleshooting and analyzing subscriber reporting issues:
          Answer management questions related to subscriber count trends
          App purchase workflow issues
          Audit/reconcile store subscriptions vs userdb
