Jobs
· 26 views · 5 applications · 13d
Python Cloud Engineer
Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate
Our partners are building cloud-native backend services, APIs, and background systems designed for scalability, reliability, and high performance. Projects span consumer devices, energy, healthcare, and beyond, combining regulated requirements with rapid time-to-market and often bringing together a variety of technologies in a single project. You will develop services that process real-time device data, integrate multiple systems, handle high-volume cloud workloads, and power applications across diverse use cases. Make a direct impact by contributing to complex systems that drive innovation across industries.
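For illustration only, the sketch below shows the kind of cloud-native API this role describes, assuming the FastAPI framework listed in the requirements; the service name, endpoint, and payload fields are hypothetical, not part of the actual project.

```python
# Hypothetical sketch: a minimal FastAPI service that accepts real-time device
# telemetry and acknowledges it for later background processing.
from datetime import datetime

from fastapi import FastAPI, status
from pydantic import BaseModel

app = FastAPI(title="device-telemetry-api")


class DeviceReading(BaseModel):
    device_id: str
    metric: str
    value: float
    recorded_at: datetime


@app.post("/v1/readings", status_code=status.HTTP_202_ACCEPTED)
async def ingest_reading(reading: DeviceReading) -> dict:
    # In a real service this would publish to a queue (e.g. SQS or Service Bus)
    # instead of returning immediately.
    return {"accepted": True, "device_id": reading.device_id}
```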
Necessary skills and qualifications
- At least 3 years of commercial experience with Python frameworks (FastAPI, Django REST, etc.)
- Experience with relational/non-relational databases
- Strong knowledge of the Object-Oriented Analysis and Design (OOAD) principles
- Hands-on experience with application performance optimization and low-level debugging
- Practical AWS/Azure engineering experience: creating and securing resources, not just consuming them from code
- Experience with containers and orchestration (Docker, Kubernetes)
- Good knowledge of the HTTP protocol
- Proactive approach to solution development and process improvements
- Ability to cooperate with customers and teammates
- Upper-Intermediate English level
Will be a plus
- Experience with any other back-end technologies
- Knowledge of communication protocols: MQTT/XMPP/AMQP/RabbitMQ/WebSockets
- Ability to research new technological areas and understand them in depth through self-directed learning
- Skilled in IoT data collection, managing device fleets, and implementing OTA updates
- Familiarity with healthcare data standards (e.g., FHIR, HL7) and HIPAA/GDPR compliance
- Expertise in documenting technical solutions in different formats
· 31 views · 3 applications · 13d
Senior Data Engineer
Full Remote · EU · 5 years of experience · B2 - Upper Intermediate
Our client is a global jewelry manufacturer undergoing a major transformation, moving from IaaS-based solutions to a modern Azure PaaS data platform. As part of this journey, you will design and implement scalable, reusable, and high-quality data products using technologies such as Data Factory, Data Lake, Synapse, and Databricks. These solutions will enable advanced analytics, reporting, and data-driven decision-making across the organization. By collaborating with product owners, architects, and business stakeholders, you will play a key role in maximizing the value of data and driving measurable commercial impact worldwide.
Responsibilities:
- Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform.
- Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models.
- Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies.
- Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance.
- Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions.
- Monitor and own the stability of delivered data products, ensuring continuous improvements and measurable business benefits.
- Promote a "build-once, consume-many" approach to maximize reuse and value creation across business verticals.
- Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering.
Skills Required:
- Must-Have Skills
- 5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server.
- Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes.
- Practical experience with Azure Data Factory, Databricks, and PySpark.
- Track record of designing, building, and delivering production-ready data products at enterprise scale.
- Strong analytical skills and ability to translate business requirements into technical solutions.
- Excellent communication skills in English, with the ability to adapt technical details for different audiences.
- Experience working in Agile/Scrum teams.
- Nice-to-Have Skills
- Familiarity with infrastructure tools such as Kubernetes and Helm.
- Experience with Kafka.
- Experience with DevOps and CI/CD pipelines.
· 48 views · 7 applications · 13d
Data Solutions Architect
Full Remote · Colombia, Poland, Ukraine · 5 years of experience · C1 - Advanced
We are looking for you!
We are seeking a Data Solutions Architect with deep expertise in data platform design, AdTech systems integration, and data pipeline development for the advertising and media industry. This role requires strong technical knowledge in both real-time and batch data processing, with hands-on experience in building scalable, high-performance data architectures across demand-side and sell-side platforms.
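For context, here is a minimal sketch of the kind of real-time ingestion described above, assuming Spark Structured Streaming and Kafka (both listed under the qualifications below); the topic, schema, and storage paths are hypothetical.

```python
# Illustrative only: read an ad-event stream from Kafka with Spark Structured
# Streaming and land it as Parquet for batch consumers. Requires the
# spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("adtech-stream-ingest").getOrCreate()

event_schema = StructType([
    StructField("campaign_id", StringType()),
    StructField("impression_id", StringType()),
    StructField("ts", LongType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "ad-events")                    # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/bronze/ad_events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/ad_events/")
    .start()
)
query.awaitTermination()
```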
As a client-facing technical expert, you will play a key role in project delivery, presales, technical workshops, and project kick-offs, ensuring that our clients receive best-in-class solutions tailored to their business needs.
Contract type: Gig contract.
Skills and experience you can bring to this role
Qualifications & experience:
- 5+ years of experience in designing and implementing data architectures and pipelines for the media and advertising industries that align with business goals and ensure scalability, security, and performance;
- Hands-on expertise with cloud-native and enterprise data platforms, including Snowflake, Databricks, and cloud-native warehousing solutions like AWS Redshift, Azure Synapse, or Google BigQuery;
- Proficiency in Python, Scala, or Java for building data pipelines and ETL workflows;
- Hands-on experience with data engineering tools and frameworks such as Apache Kafka, Apache Spark, Airflow, dbt, or Flink. Batch and stream processing architecture;
- Experience working with, and a good understanding of, relational and non-relational databases (SQL; NoSQL: document-oriented, key-value, columnar stores, etc.);
- Experience in data modelling: Ability to create conceptual, logical, and physical data models;
- Experience designing solutions for one or more cloud providers (AWS, GCP, Azure) and their data engineering services;
- Experience in client-facing technical roles, including presales, workshops, and solutioning discussions;
- Strong ability to communicate complex technical concepts to both technical and non-technical stakeholders.
Nice to have:
- Experience working with AI and machine learning teams and integrating ML models into enterprise data pipelines: model fine-tuning, RAG, MLOps, LLMOps;
- Knowledge of privacy-first architectures and data compliance standards in advertising (e.g., GDPR, CCPA);
- Knowledge of data integration tools such as Apache Airflow, Talend, Informatica, and MuleSoft for connecting disparate systems;
- Exposure to real-time bidding (RTB) systems and audience segmentation strategies.
What impact youβll make
- Architect and implement end-to-end data solutions for advertising and media clients, integrating with DSPs, SSPs, DMPs, CDPs, and other AdTech systems;
- Design and optimize data platforms, ensuring efficient data ingestion, transformation, and storage for both batch and real-time processing;
- Build scalable, secure, and high-performance data pipelines that handle large-scale structured and unstructured data from multiple sources;
- Work closely with client stakeholders to define technical requirements, guide solution designs, and align data strategies with business goals;
- Lead technical discovery sessions, workshops, and presales engagements, acting as a trusted technical advisor to clients;
- Ensure data governance, security, and compliance best practices are implemented within the data architecture;
- Collaborate with data science and machine learning teams, designing data pipelines that support model training, feature engineering, and analytics workflows.
· 17 views · 0 applications · 12d
Senior Data Engineer Azure (IRC278989)
Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate
Job Description
- Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
- Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
- Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
- Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
- Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
- Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
- Strong understanding of data modeling, schema design, and database performance optimization
- Practical experience working with various file formats, including JSON, Parquet, and ORC
- Familiarity with machine learning and AI integration within the data platform context
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
- Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
- Strong analytical and problem-solving skills with attention to detail
- Excellent teamwork and communication skills
- Upper-Intermediate English (spoken and written)
Job Responsibilities
- Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
- Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
- Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
- Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
- Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
- Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
- Design and maintain data models and schemas optimized for analytical and operational workloads
- Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
- Participate in architecture discussions, backlog refinement, estimation, and sprint planning
- Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
- Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
- Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
Department/Project Description
GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.
· 54 views · 10 applications · 12d
Lead Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
Digis is looking for an experienced, proactive, and self-driven Lead Data Engineer to join our fully remote team.
About the Project
You'll be part of a large-scale platform focused on performance management and revenue optimization, powered by predictive analytics and machine learning.
The product helps businesses boost employee engagement, customer satisfaction, and overall revenue across industries such as hospitality, automotive, car rentals, and theme parks.
Project Details
- Location: USA, India (work hours aligned with the Kyiv timezone)
- Team Composition: CTO (US), 6 Data Engineers, 2 DevOps, and 1 Backend Engineer from Digis.
In total, around 50 professionals are involved in the project.
- Engagement: Long-term collaboration
What We're Looking For
- 5+ years of experience in Data Engineering
- Proven background in a Team Lead or Tech Lead role
- Experience with Spark / PySpark and AWS
- Upper-Intermediate (B2+) or higher level of spoken English
Why Join
- Contribute to a stable, industry-leading product
- Take ownership and lead a talented team
- Work with cutting-edge data technologies
- Influence technical decisions and product evolution
- Enjoy long-term professional growth within a supportive and innovative environment
If this opportunity resonates with you, let's connect; we'll be glad to share more details.
· 36 views · 2 applications · 12d
Senior Data Architect
Full Remote · Poland, Turkey · 10 years of experience · B2 - Upper Intermediate
Project Description
Senior Data Architect responsible for enterprise data architecture, designing conceptual and logical data models that guide Manufacturing, Supply Chain, Merchandising, and consumer data transformation at a leading jewellery company.
In this strategic data architecture role, you will define enterprise-wide data models, create conceptual data architectures, and establish information blueprints that engineering teams implement. You will ensure our data architecture makes information accessible and trustworthy for both analytical insights and AI innovations, enabling Pandora's journey to becoming the world's leading jewellery company.
Responsibilities
- Leading enterprise information architecture design, creating semantic, conceptual, and logical models that engineering teams translate into Azure- and Databricks-based physical implementations.
- Architecting data product interfaces and semantic contracts that define how business entities flow through bronze-to-gold transformations, enabling interoperability between domain-owned data products.
- Developing semantic layer architectures that provide business-oriented abstractions, hiding technical complexity while enabling self-service analytics.
- Creating data architecture blueprints for core application transformation (SAP, Salesforce, o9, workforce management, etc.), ensuring a smooth transitional data architecture.
- Reviewing and approving physical model proposals from engineering teams, ensuring alignment with enterprise information strategy.
- Defining modeling standards for data products: operational schemas in bronze, aggregated and dimensional entities in silver, and consumption-ready models in gold.
- Designing information value chains and data product specifications with clear contracts for downstream consumption.
- Mapping business capabilities to information domains, creating bounded contexts aligned with Domain-Driven Design principles.
- Providing architectural oversight through design reviews and decision records to ensure adherence to data standards.
- Creating business glossaries and semantic models that bridge business language and technical implementation.
- Guiding data solution design sessions, translating complex business requirements into clear information structures.
- Mentoring teams on data architectural best practices while documenting anti-patterns to avoid common pitfalls.
Skills Required
- 8+ years as a practicing data architect with proven experience designing enterprise-scale information architectures that successfully went into production.
- Demonstrated experience architecting shared data models in modern lakehouse patterns, including defining silver-layer conformance standards and gold-layer consumption contracts.
- Demonstrated ability to create semantic layers and logical models that achieved actual business adoption.
- Practical experience with multiple modeling paradigms (dimensional, Data Vault, graph, document), knowing when and why to apply each.
- Strong understanding of Azure data platform capabilities (Synapse, Databricks, Unity Catalog) to guide architectural decisions in daily implementations.
- Track record of guiding teams through major transformations, particularly SAP migrations or platform modernizations.
- Excellence in stakeholder communication - explaining data architecture to non-technical audiences while providing technical guidance to engineering teams.
- Pragmatic approach balancing theoretical best practices with practical implementation realities.
- Collaborative leadership style - guiding teams to good decisions rather than mandating solutions.
- Experience establishing reusable patterns and reference architectures that accelerate delivery.
- Understanding of how logical models translate to physical implementations.
- Proven ability to create architectures that survive from MVP to enterprise scale.
· 46 views · 17 applications · 12d
Data Engineer with Databricks
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
We are seeking an experienced Data Engineer with deep expertise in Databricks to design, build, and maintain scalable data pipelines and analytics solutions. This role requires 5 years of hands-on experience in data engineering with a strong focus on the Databricks platform.
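For illustration, a minimal sketch of one pattern this role touches on: an incremental, CDC-style upsert into a Delta table on Databricks. The table paths and column names are hypothetical, and the snippet assumes a runtime with delta-spark available.

```python
# Illustrative only: upsert a batch of change records into a Delta table
# with MERGE, a common incremental/CDC loading pattern.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical paths: a landing folder of change records and a curated table.
changes = spark.read.format("json").load("/mnt/raw/customers_changes/")
target = DeltaTable.forPath(spark, "/mnt/silver/customers")

(
    target.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # apply updates for existing keys
    .whenNotMatchedInsertAll()   # insert brand-new keys
    .execute()
)
```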
Key Responsibilities:
- Data Pipeline Development & Management
- Design and implement robust, scalable ETL/ELT pipelines using Databricks and Apache Spark
- Process large volumes of structured and unstructured data
- Develop and maintain data workflows using Databricks workflows, Apache Airflow, or similar orchestration tools
- Optimize data processing jobs for performance, cost efficiency, and reliability
- Implement incremental data processing patterns and change data capture (CDC) mechanisms
- Databricks Platform Engineering
- Build and maintain Delta Lake tables and implement medallion architecture (bronze, silver, gold layers)
- Develop streaming data pipelines using Structured Streaming and Delta Live Tables
- Manage and optimize Databricks clusters for various workloads
- Implement Unity Catalog for data governance, security, and metadata management
- Configure and maintain Databricks workspace environments across development, staging, and production
- Data Architecture & Modeling
- Design and implement data models optimized for analytical workloads
- Create and maintain data warehouses and data lakes on cloud platforms (Azure, AWS, or GCP)
- Implement data partitioning, indexing, and caching strategies for optimal query performance
- Collaborate with data architects to establish best practices for data storage and retrieval patterns
- Performance Optimization & Monitoring
- Monitor and troubleshoot data pipeline performance issues
- Optimize Spark jobs through proper partitioning, caching, and broadcast strategies
- Implement data quality checks and automated testing frameworks
- Manage cost optimization through efficient resource utilization and cluster management
- Establish monitoring and alerting systems for data pipeline health and performance
- Collaboration & Best Practices
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements
- Implement version control using Git and follow CI/CD best practices for code deployment
- Document data pipelines, data flows, and technical specifications
- Mentor junior engineers on Databricks and data engineering best practices
- Participate in code reviews and contribute to establishing team standards
Required Qualifications:
- Experience & Skills
- 5+ years of experience in data engineering with hands-on Databricks experience
- Strong proficiency in Python and/or Scala for Spark application development
- Expert-level knowledge of Apache Spark, including Spark SQL, DataFrames, and RDDs
- Deep understanding of Delta Lake and Lakehouse architecture concepts
- Experience with SQL and database optimization techniques
- Solid understanding of distributed computing concepts and data processing frameworks
- Proficiency with cloud platforms (Azure, AWS, or GCP) and their data services
- Experience with data orchestration tools (Databricks Workflows, Apache Airflow, Azure Data Factory)
- Knowledge of data modeling concepts for both OLTP and OLAP systems
- Familiarity with data governance principles and tools like Unity Catalog
- Understanding of streaming data processing and real-time analytics
- Experience with version control systems (Git) and CI/CD pipelines
Preferred Qualifications:
- Databricks Certified Data Engineer certification (Associate or Professional)
- Experience with machine learning pipelines and MLOps on Databricks
- Knowledge of data visualization tools (Power BI, Tableau, Looker)
- Experience with infrastructure as code (Terraform, CloudFormation)
- Familiarity with containerization technologies (Docker, Kubernetes)
· 80 views · 27 applications · 12d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate
At CIGen, we partner with both startups and established enterprises to help them achieve their business goals through innovative software solutions. We are a Microsoft Gold Partner, driven by professionalism, trust, and mutual respect.
We believe that long-term business success comes from long-term trusted relationships with our clients, employees, and partners. We offer projects with a modern tech stack, flexible schedules, and a professional and supportive team. Open management and a friendly environment are part of our culture.
Hence, the quality of our services is crucial!
Currently, we are looking to add a Data Engineer to our team.
This position is full-time and can be fully remote.
Key Responsibilities
- Design, build, and evolve modern data platforms for our clients across industries.
- Develop data pipelines and ETL processes using cloud technologies such as Azure, AWS, and Databricks.
- Collaborate with Architects, Cloud Engineers, BI Developers, and other team members to deliver scalable, secure, and future-ready data solutions.
- Work with SQL, Python, and Spark to transform, model, and process complex datasets.
- Apply best practices for data architecture, version control (Git), and CI/CD pipelines.
- Ensure high-quality, maintainable, and efficient solutions that meet business requirements.
- Stay curious and proactive: explore new technologies, validate new tools, and contribute to knowledge sharing within the team.
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or Engineering, or equivalent practical experience.
- 3+ years of experience in a Data Engineering or similar role (e.g., Data Developer, Analytics Engineer, ETL/DW Developer).
- Proven hands-on experience with Azure (experience with AWS or GCP is also a plus).
- Strong knowledge of SQL, Python, and Apache Spark.
- Understanding of data structures, data modeling, and large-scale data processing.
- Familiarity with Databricks and Lakehouse/Medallion architectures is an advantage.
- Experience working with both real-time and batch data.
- Proficiency in Git and CI/CD workflows.
- Fluency in English (both written and spoken).
What We Offer
- Fully remote position, with the option to work from our office in Lviv, Ukraine, if preferred.
- Be part of an English-speaking, multinational environment, where you can share knowledge and learn from professionals across countries.
- PTO and sick leaves to support your well-being.
- Support for learning and professional development: courses, certifications, and events.
- Flexible working hours to maintain your work-life balance.
- Work on modern technologies and cloud platforms guided by experienced mentors.
- Opportunities to grow your expertise: deepen your technical craft, contribute to internal communities, take part in pre-sales, or explore mentorship and speaking roles.
- Transparent communication and a culture of curiosity, collaboration, and trust.
- A friendly, supportive atmosphere where people genuinely care about what they build, and about each other.
...and so much more!
We look forward to hearing from you!
Apply today and join CIGen, where technology meets growth!
· 15 views · 0 applications · 11d
Senior Data Engineer (IRC278988)
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
Job Description
- Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
- Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
- Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
- Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
- Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
- Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
- Strong understanding of data modeling, schema design, and database performance optimization
- Practical experience working with various file formats, including JSON, Parquet, and ORC
- Familiarity with machine learning and AI integration within the data platform context
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
- Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
- Strong analytical and problem-solving skills with attention to detail
- Excellent teamwork and communication skills
- Upper-Intermediate English (spoken and written)
Job Responsibilities
- Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
- Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
- Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
- Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
- Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
- Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
- Design and maintain data models and schemas optimized for analytical and operational workloads
- Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
- Participate in architecture discussions, backlog refinement, estimation, and sprint planning
- Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
- Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
- Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
Department/Project Description
GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.
· 57 views · 8 applications · 11d
Senior Data Engineer
Part-time · Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate
About the Platform
We're building a unified data ecosystem that connects raw data, analytical models, and intelligent decision layers.
The platform combines the principles of data lakes, lakehouses, and modern data warehouses, structured around the Medallion architecture (Bronze / Silver / Gold).
Every dataset is versioned, governed, and traceable through a unified catalog and lineage framework.
This environment supports analytics, KPI computation, and AI-driven reasoning, designed for performance, transparency, and future scalability (in partnership with GCP, OpenAI, and Cohere).
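For illustration only, a minimal sketch of a bronze-to-silver step in the Medallion layering described above, assuming Delta storage with PySpark; the paths, columns, and quality rules are hypothetical.

```python
# Illustrative only: promote raw bronze records to a cleaned silver table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/orders")

silver = (
    bronze
    .dropDuplicates(["order_id"])                      # basic de-duplication
    .filter(col("order_total") >= 0)                   # simple quality rule
    .withColumn("order_date", to_date(col("ordered_at")))
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("/lake/silver/orders")
)
```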
What You'll Work On
1. Data Architecture & Foundations
- Design, implement, and evolve medallion-style data pipelines, from raw ingestion to curated, business-ready models.
- Build hybrid data lakes and lakehouses using Iceberg, Delta, or Parquet formats with ACID control and schema evolution.
- Architect data warehouses that unify batch and streaming sources into a consistent, governed analytics layer.
- Ensure optimal partitioning, clustering, and storage strategies for large-scale analytical workloads.
2. Data Ingestion & Transformation
- Create ingestion frameworks for APIs, IoT, ERP, and streaming systems (Kafka, Pub/Sub).
- Develop reproducible ETL/ELT pipelines using Airflow, dbt, Spark, or Dataflow.
- Manage CDC and incremental data loads, ensuring freshness and resilience.
- Apply quality validation, schema checks, and contract-based transformations at every stage.
3. Governance, Cataloging & Lineage
- Implement a unified data catalog with lineage visibility, metadata capture, and schema versioning.
- Integrate dbt metadata, OpenLineage, and Great Expectations to enforce data quality.
- Define clear governance rules: data contracts, access policies, and change auditability.
- Ensure every dataset is explainable and fully traceable back to its source.
4. Data Modeling & Lakehouse Operations
- Design dimensional models and business data marts to power dashboards and KPI analytics.
- Develop curated Gold-layer tables that serve as trusted sources of truth for analytics and AI workloads.
- Optimize materialized views and performance tuning for analytical efficiency.
- Manage cross-domain joins and unified semantics across products, customers, or operational processes.
5. Observability, Reliability & Performance
- Monitor data pipeline health, freshness, and cost using modern observability tools (Prometheus, Grafana, Cloud Monitoring).
- Build proactive alerting, anomaly detection, and drift monitoring for datasets.
- Implement CI/CD workflows for data infrastructure using Terraform, Helm, and ArgoCD.
- Continuously improve query performance and storage efficiency across warehouses and lakehouses.
6. Unified Data & Semantic Layers
- Help define a unified semantic model that connects operational, analytical, and AI-ready data.
- Work with AI and analytics teams to structure datasets for semantic search, simulation, and reasoning systems.
- Collaborate on vectorized data representation and process-relationship modeling (graph or vector DBs).
What We're Looking For
- 5+ years of hands-on experience building large-scale data platforms, warehouses, or lakehouses.
- Strong proficiency in SQL, Python, and distributed processing frameworks (PySpark, Spark, Dataflow).
- Deep understanding of Medallion architecture, data modeling, and modern ETL orchestration (Airflow, dbt).
- Experience implementing data catalogs, lineage tracking, and validation frameworks.
- Knowledge of data governance, schema evolution, and contract-based transformations.
- Familiarity with streaming architectures, CDC patterns, and real-time analytics.
- Practical understanding of FinOps, data performance tuning, and cost management in analytical environments.
- Strong foundation in metadata-driven orchestration, observability, and automated testing.
- Bonus: experience with ClickHouse, Trino, Iceberg, or hybrid on-prem/cloud data deployments.
You'll Excel If You
- Think of data systems as living, evolving architectures, not just pipelines.
- Care deeply about traceability, scalability, and explainability.
- Love designing platforms that unify data across analytics, AI, and process intelligence.
- Are pragmatic, hands-on, and focused on building systems that last.
· 28 views · 5 applications · 11d
Middle/Senior AWS Data Engineer (IRC278651)
Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate
Job Description
- Amazon Web Services (AWS): Kinesis, DMS, EMR, Glue, Lambda, Athena, Redshift, S3
- Strong expertise in building data ingestion, ETL/ELT pipelines and orchestration on AWS
- Experience with Change Data Capture (CDC) and real-time streaming data architectures
- Proficiency in SQL and data modeling
- Solid understanding of distributed systems and big data processing
- Python for data transformations and scripting
- Hands-on experience with orchestration frameworks like AWS Step Functions or Data Pipeline
Key Skills (Nice to Have)
- Apache Spark, Zeppelin, Hadoop ecosystem
- Infrastructure-as-Code (IaC) using Terraform or CloudFormation
- Experience with data quality frameworks and monitoring
- Knowledge of HIPAA or similar healthcare data compliance standards
- Experience working in highly regulated environments
Job Responsibilities
- Lead the design and development of scalable and reliable data pipelines in AWS using tools such as Kinesis, DMS, Glue, EMR, Athena, and Lambda.
- Define the architecture of the data ingestion and processing layer, considering performance, cost-efficiency, and maintainability.
- Collaborate closely with business stakeholders and technical teams to gather requirements, validate solutions, and align data flows with business needs.
- Propose and validate technical approaches and best practices for CDC, ETL/ELT, orchestration, data storage, and query optimization.
- Own the implementation of key data workflows, ensuring quality, security, and adherence to architectural standards.
- Conduct technical reviews, mentor junior engineers, and foster a culture of continuous improvement.
- Participate in strategic planning of the data platform and actively contribute to its evolution.
- Monitor and optimize performance of AWS-based data solutions, addressing bottlenecks and proposing enhancements.
Department/Project Description
The client is a pioneer in medical devices for less invasive surgical procedures, ranking as a leader in the market for coronary stents. The company's medical devices are used in a variety of interventional medical specialties, including interventional cardiology, peripheral interventions, vascular surgery, electrophysiology, neurovascular intervention, oncology, endoscopy, urology, gynecology, and neuromodulation.
The client's mission is to improve the quality of patient care and the productivity of health care delivery through the development and advocacy of less-invasive medical devices and procedures. This is accomplished through the continuing refinement of existing products and procedures and the investigation and development of new technologies that can reduce risk, trauma, cost, procedure time and the need for aftercare.
· 12 views · 1 application · 11d
Data Architect (Azure Platform)
Full Remote · Ukraine, Poland, Romania, Slovakia · 8 years of experience · C1 - Advanced
Description
As the Data Architect, you will be the senior technical visionary for the Data Platform. You will be responsible for the high-level design of the entire solution, ensuring it is scalable, secure, and aligned with the company's long-term strategic goals. Your decisions will form the technical foundation upon which the entire platform is built, from initial batch processing to future real-time streaming capabilities.
Full remote.
Required Skills (Must-Haves)
- Cloud Architecture: Extensive experience designing and implementing large-scale data platforms on Microsoft Azure.
- Expert Technical Knowledge: Deep, expert-level understanding of the Azure data stack, including ADF, Databricks, ADLS, Synapse, and Purview.
- Data Concepts: Mastery of data warehousing, data modeling (star schemas), data lakes, and both batch and streaming architectural patterns.
- Strategic Thinking: Ability to align technical solutions with long-term business strategy.
Nice-to-Have Skills:
- Hands-on Coding Ability: Proficiency in Python/PySpark, allowing for the creation of architectural proofs-of-concept.
- DevOps & IaC Acumen: Deep understanding of CI/CD for data platforms, experience with Infrastructure as Code (Bicep/Terraform), and experience with Azure DevOps for big data services.
- Azure Cost Management: Experience with FinOps and optimizing the cost of Azure data services.
Job Responsibilities
- End-to-End Architecture Design: Design and document the complete, end-to-end data architecture, encompassing data ingestion, processing, storage, and analytics serving layers.
- Technology Selection & Strategy: Make strategic decisions on the use of Azure services (ADF, Databricks, Synapse, Event Hubs) to meet both immediate MVP needs and future scalability requirements.
- Define Standards & Best Practices: Establish data modeling standards, development best practices, and governance policies for the engineering team to follow.
- Technical Leadership: Provide expert technical guidance and mentorship to the data engineers and BI developers, helping them solve the most complex technical challenges.
- Stakeholder Communication: Clearly articulate the architectural vision, benefits, and trade-offs to technical teams, project managers, and senior business leaders.
· 59 views · 11 applications · 11d
Middle Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate
Join a globally recognized company where innovation meets impact. Our team of seasoned professionals and our commitment to excellence enable us to deliver top-tier solutions that shape industries and drive meaningful results. Be part of a Competence Center where your expertise will be valued, your voice will be heard, and your career will be empowered.
Customer
It is an international technology company that specializes in developing high-load platforms for data processing and analytics. Its core product helps businesses manage large volumes of data, build models, and gain actionable insights. Operating globally, the company primarily serves clients in the marketing and advertising sectors, and focuses on modern technologies, microservices architecture, and cloud-based solutions.
Responsibilities
- Design, develop, and maintain end-to-end big data pipelines that are optimized, scalable, and capable of processing large volumes of data in real-time and in batch mode
- Follow and promote best practices and design principles for Data Lake House architecture
- Support technological decision-making for the business's future data management and analysis needs by conducting proofs of concept (POCs)
- Write and automate data pipelines
- Assist in improving data organization and accuracy
- Collaborate and work with data analysts, scientists, and engineers to ensure the use of best practices in terms of data processing and storage technologies
- Explore and stay up to date on emerging technologies and proactively share learnings with the team
- Ensure that all deliverables adhere to our world-class standards
Requirements
- 3+ years of experience in big data development and database design
- Strong hands-on experience with SQL and experience working on advanced SQL
- Proficient in Python and other scripting languages
- Working knowledge of at least one big data technology
- Experience developing software solutions using Hadoop ecosystem technologies such as MapReduce, Hive, Spark, YARN/Mesos, etc.
- At least an Upper-Intermediate level of English
Would be a plus
- Experience with AWS cloud, S3, and Redshift
- Knowledge of and exposure to BI applications, e.g., Tableau, QlikView
Personal profile
- Excellent analytical and problem-solving skills
· 82 views · 14 applications · 10d
Data Engineer
Full Remote · Ukraine · Product · 2 years of experience · B1 - Intermediate
FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
Main areas of work:
- Game Development: driving the end-to-end engineering process for innovative, engaging, and mathematically precise games tailored to global markets.
- Mechanics & Player Experience: overseeing the creation of core gameplay logic and features that maximize engagement and retention while also leading the development of back office admin panels for game configuration, monitoring, and operational efficiency.
- Data-Driven Game Design: implementing analytics and big data solutions to measure player behavior, guide feature development, and improve monetization strategies.
- Cloud Services: we use cloud technologies for scaling and business efficiency.
Responsibilities:
- Design, build, install, test, and maintain highly scalable data management systems.
- Develop ETL/ELT processes and frameworks for efficient data transformation and loading.
- Implement, optimize, and support reporting solutions for the Sportsbook domain.
- Ensure effective storage, retrieval, and management of large-scale data.
- Improve data query performance and overall system efficiency.
- Collaborate closely with data scientists and analysts to deliver data solutions and actionable insights.
Requirements:
- At least 2 years of experience in designing and implementing modern data integration solutions.
- Master's degree in Computer Science or a related field.
- Proficiency in Python and SQL, particularly for data engineering tasks.
- Hands-on experience with data processing, ETL (Extract, Transform, Load), ELT (Extract, Load, Transform) processes, and data pipeline development.
- Experience with DBT framework and Airflow orchestration.
- Practical experience with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Experience with Snowflake.
- Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
- Experience in managing data warehouses and data lakes.
- Familiarity with star and snowflake schema design.
- Understanding of the difference between OLAP and OLTP.
Would be a plus:
- Experience with other cloud data services (e.g., AWS Redshift, Google BigQuery).
- Experience with version control tools (e.g., GitHub, GitLab, Bitbucket).
- Experience with real-time data processing (e.g., Kafka, Flink).
- Familiarity with orchestration tools (e.g., Airflow, Luigi).
- Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
- Knowledge of data security and privacy practices.
We can offer:
- 30 days of paid vacation and sick days: we value rest and recreation. We also observe national holidays.
- Medical insurance for employees, company-paid training opportunities, and gym membership.
- Remote work; after Ukraine wins the war, our own modern loft-style office with a spacious workplace and brand-new work equipment (near Pochaina metro station).
- Flexible work schedule: we expect a full-time commitment but do not track your working hours.
- Flat hierarchy without micromanagement: our doors are open, and all teammates are approachable.
· 32 views · 8 applications · 10d
Senior Data Engineer (ETL, ML Experience)
Full Remote · Worldwide · 7 years of experience · C1 - Advanced
Location: Remote (Europe preferred)
Contract Type: B2B
Experience: 7+ years as a Data Engineer
English Level: C1 (Advanced)
Compensation: Gross (to be specified)
Holidays: 10 public holidays per year (vacation and sick days unpaid)
About the Role
We are seeking a Senior Data Engineer with strong experience in ETL pipeline design, data analytics, and exposure to machine learning workflows. You will play a key role in designing, developing, and maintaining scalable data solutions to support analytics, reporting, and ML-driven decision-making.
You will work closely with data scientists, analysts, and software engineers to ensure data integrity, performance, and accessibility across the organization.
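For illustration, a minimal sketch of the kind of ETL step with a data-quality gate that this role covers; the source files, columns, and thresholds are hypothetical and not part of the actual pipeline.

```python
# Illustrative only: extract, validate, and load a small batch with a
# simple data-quality gate before it reaches downstream consumers.
import pandas as pd


def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)


def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Fail the batch if required columns are missing or too many nulls appear.
    required = {"customer_id", "event_ts", "amount"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    if df["amount"].isna().mean() > 0.05:
        raise ValueError("more than 5% null amounts; failing the load")
    return df


def load(df: pd.DataFrame, path: str) -> None:
    df.to_parquet(path, index=False)


if __name__ == "__main__":
    frame = validate(extract("raw/events.csv"))   # hypothetical input file
    load(frame, "curated/events.parquet")         # hypothetical output path
```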
Key Responsibilities
- Design, build, and maintain ETL/ELT pipelines for large-scale data processing.
- Develop, optimize, and manage data models, data warehouses, and data lakes.
- Collaborate with cross-functional teams to define data architecture, governance, and best practices.
- Implement and maintain CI/CD workflows using AWS CodePipeline.
- Work with Python and .NET for automation, data integration, and application-level data handling.
- Support data-driven decision-making through analytics and reporting.
- Troubleshoot and optimize database performance and data processing pipelines.
- Implement data quality and validation frameworks to ensure reliable data flow.
Required Skills & Experience
- 7+ years of professional experience as a Data Engineer or similar role.
- Strong expertise in ETL development and orchestration.
- Python: Expert level (data processing, automation, APIs, ML pipeline integration).
- ETL Tools / Frameworks: Expert level (custom and/or AWS-native).
- Data Analytics & Reporting: Expert level (data modeling, KPI dashboards, insights generation).
- DBA experience: Experienced (database design, tuning, and maintenance).
- AWS CodePipeline: Experienced (CI/CD for data workflows).
- .NET: Experienced (integration, backend data logic).
- Experience with data warehousing solutions (e.g., Redshift, Snowflake, BigQuery) is a plus.
- Familiarity with machine learning data pipelines (feature engineering, data prep, model serving) is a plus.
Nice to Have
- Experience with Airflow, DBT, or other orchestration tools.
- Familiarity with Terraform or AWS CloudFormation.
- Exposure to ML Ops and productionizing ML models.
- Knowledge of data governance, security, and compliance standards.