Data Engineer Jobs
· 31 views · 1 application · 15d
Azure Data Engineer (ETL Developer)
Hybrid Remote · Ukraine · Product · 3 years of experience · English - B1
Kyivstar is looking for an Azure Data Engineer (Developer) to drive different life cycles of large systems. The Data Lifecycle Engineer will have the opportunity to help customers realize their full potential through accelerated adoption and productive use of Microsoft Data and AI technologies.
Requirements:
• 3+ years of technical expertise in database development (preferably with SQL, including Azure SQL) – designing and building database solutions (tables / stored procedures / forms / queries / etc.);
• Business intelligence knowledge with a deep understanding of data structures / data models to design and tune BI solutions;
• Advanced data analytics – designing and building solutions using technologies such as Databricks, Azure Data Factory, Azure Data Lake, HDInsight, SQL DW, Stream Analytics, Machine Learning, R Server;
• Knowledge of data formats and the differences between them;
• Experience with the Hadoop stack;
• Experience with RDBMS and/or NoSQL;
• Experience with Kafka;
• Experience with Java and/or Scala and/or Python;
• Knowledge of version control systems: Git or Bitbucket;
• BI tools experience (Power BI);
• Background in test-driven development, automated testing, and other software engineering best practices (e.g., performance, security, BDD, etc.);
• Understanding of the Docker/Kubernetes paradigm;
• English – strong intermediate;
• Microsoft certification is a plus.
Responsibilities:
• Developing ETL flows based on the Azure cloud stack: Databricks, Azure Data Factory, Azure Data Lake, HDInsight, SQL DW, Stream Analytics, Machine Learning, R Server (a minimal sketch follows below);
• Troubleshooting and performance optimization for data processing flows and data models;
• Building and maintaining reporting, models, and dashboards.
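For illustration, a minimal sketch of the kind of Databricks/PySpark ETL step this role involves might look like the following; the storage paths, column names, and table names are invented placeholders, not Kyivstar systems.

```python
# Minimal PySpark sketch of a Databricks-style ETL step, assuming hypothetical
# Azure Data Lake paths and column names (placeholders only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("subscriber_daily_etl").getOrCreate()

# Extract: read raw events from ADLS (path is a placeholder).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/2024/")

# Transform: keep valid records and aggregate per subscriber per day.
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("subscriber_id", "event_date")
       .agg(F.count("*").alias("events"), F.sum("bytes_used").alias("bytes_used"))
)

# Load: write the curated result, partitioned by date, for BI consumption.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .format("parquet")
      .save("abfss://curated@examplelake.dfs.core.windows.net/subscriber_daily/"))
```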
We offer:
• A unique experience of working at the largest and most customer-beloved mobile operator in Ukraine;
• A real opportunity to ship digital products to millions of customers;
• The chance to contribute to building the biggest analytical cloud environment in Ukraine;
• The chance to create Big Data/AI products that change the whole industry and influence Ukraine;
• Involvement in real Big Data projects with petabytes of data and billions of events processed daily in real time;
• A competitive salary;
• Great possibilities for professional development and career growth;
• Medical insurance;
• Life insurance;
• A friendly and collaborative environment.
· 49 views · 5 applications · 4d
Lead Analytics Manager
Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1
We are looking for an experienced Head of Analytics / Analytics Manager to lead the development and maintenance of company-wide reporting, manage core data sources, and drive the implementation of modern analytics and AI-enabled solutions.
This role combines hands-on expertise in Power BI, SQL, and Python with ownership of analytics processes, data quality, and a small analytics team. You will play a key role in ensuring reliable management reporting, supporting business growth, and evolving our analytics ecosystem.
Key Responsibilities
• Design, build, and maintain Power BI dashboards and reports for company management and business stakeholders;
• Own and manage global data sources, including:
- HRM system;
- Time tracking system;
- Agent schedules and workforce data;
• Fully build and support reporting and data logic for the company's main project, ensuring data accuracy and consistency;
• Implement reporting for new projects, including:
- Connecting to new data sources;
- Integrating data via APIs (see the sketch after this list);
- Creating new dashboards and data models;
• Develop and improve data models and DAX calculations in Power BI;
• Write and optimize SQL queries and data transformations;
• Participate in the development of Microsoft Fabric capabilities within existing processes;
• Coordinate implementation of ML / forecasting solutions together with external vendors;
• Lead and manage a small team: Reporting & Data Analyst and Operational Analyst;
• Define priorities, distribute tasks, and review results;
• Ensure documentation, stability, and reliability of reporting solutions;
• Collect, process, and analyze Customer Experience (CX) data (CSAT, NPS, CES, QA scores, customer feedback, complaints, etc.);
• Build CX dashboards and analytical views to monitor service quality and customer satisfaction.
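As a rough illustration of the API-integration work above, the sketch below pulls paginated survey data from a hypothetical REST endpoint and lands it as a file a Power BI dataset could refresh from; the URL, token, and field names are assumptions, not a real vendor API.

```python
# Hypothetical sketch: pull CSAT survey responses from a REST API and land them
# as a CSV that a Power BI dataset (or dataflow) can pick up.
import requests
import pandas as pd

API_URL = "https://api.example.com/v1/csat-responses"   # placeholder endpoint
headers = {"Authorization": "Bearer <token>"}            # placeholder credential

rows, page = [], 1
while True:
    resp = requests.get(API_URL, headers=headers, params={"page": page}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    rows.extend(payload["results"])
    if not payload.get("next_page"):
        break
    page += 1

df = pd.DataFrame(rows)
df["survey_date"] = pd.to_datetime(df["survey_date"])
# One tidy file per run; the Power BI refresh reads this location.
df.to_csv("csat_responses.csv", index=False)
```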
Required Qualifications
- Higher education in IT, Computer Science, Mathematics, Finance, or related field;
- 3+ years of hands-on experience with Power BI;
- Strong and practical knowledge of DAX;
- 3+ years of experience with SQL and building complex queries;
- 1+ year of experience with Python (for data processing / automation / ETL tasks);
- Experience connecting to external systems via APIs;
- Solid understanding of data modeling and BI best practices;
- Experience working with large datasets;
- English level: B1 or higher;
Nice to Have
- Experience with Microsoft Fabric (Dataflows Gen2, Lakehouse/Warehouse, Notebooks, Pipelines);
- Exposure to forecasting or machine learning concepts;
- Experience in BPO / Contact Center / Operations analytics;
What We Offer
- Opportunity to build and shape the analytics function
- Direct impact on management decision-making
- Participation in AI-driven analytics transformation
- Professional growth in a fast-scaling company
· 57 views · 4 applications · 12d
Senior Data Engineer / Nextflow Engineer
Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2
Meet YozmaTech
YozmaTech isn't just another tech company – we're a global team of go-getters, innovators, and A-players helping startups and product companies scale smarter and faster.
We build dedicated development teams across 10+ countries, creating strong, long-term partnerships based on trust, transparency, and real impact.
Here, every idea counts. We value people who are proactive, open-minded, and ready to grow. If you're passionate about building meaningful products and want to join a team that feels like family, you'll feel right at home with us.
For our client, we're seeking a skilled Data Engineer / Software Developer with deep experience in building, maintaining, and optimizing reproducible data processing pipelines for large-scale scientific data (bioinformatics, genomics, proteomics, or related domains). The successful candidate will bridge computational engineering best practices with biological data challenges, enabling scientists to move from raw data to reliable insights at scale.
You will work with interdisciplinary teams of researchers, computational scientists, and domain
experts to design end-to-end workflows, ensure data quality and governance, and implement
infrastructure that powers scientific discovery.
Prior experience with Nextflow or similar workflow systems is strongly preferred.
Key Requirements:
🔹 Experience with Nextflow and bioinformatics pipelines;
🔹 Strong programming skills in languages such as Python;
🔹 Experience with data processing and pipeline development;
🔹 Familiarity with Linux environments and cloud computing workflows;
Domain Experience:
🔹 Prior work in scientific data environments or life sciences research (genomics/proteomics/high-throughput data) is highly desirable;
Soft Skills:
🔹 Strong problem-solving, communication, and organization skills; ability to manage multiple projects and deliverables;
🔹 Comfortable collaborating with researchers from biological, computational, and engineering disciplines;
🔹 English – Upper-Intermediate or higher.
Will be a plus:
🔹 Experience with cloud-based infrastructure and containerization (e.g., Docker);
🔹 Familiarity with AI and machine learning concepts;
🔹 Experience with agile development methodologies and version control systems (e.g., Git);
What you will do:
🔹 Design, develop, and maintain high-performance, portable data pipelines using Nextflow (a minimal step sketch follows this list);
🔹 Collaborate with data scientists and researchers to integrate new algorithms and features into the pipeline;
🔹 Ensure the pipeline is scalable, efficient, and well-organized;
🔹 Develop and maintain tests to ensure the pipeline is working correctly;
🔹 Work with the DevOps team to deploy and manage the pipeline on our infrastructure;
🔹 Participate in design meetings and contribute to the development of new features and algorithms.
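Nextflow processes usually shell out to small scripts, so as a hedged illustration (in Python rather than Nextflow DSL) a per-sample step of the kind a process might invoke could look like this; the file layout and column names are assumptions for the example.

```python
# Hypothetical per-sample step a Nextflow process might call from its script
# block: read a tab-separated QC report, filter failing records, write a summary.
import argparse
import pandas as pd

def main() -> None:
    parser = argparse.ArgumentParser(description="Filter a per-sample QC report")
    parser.add_argument("--input", required=True, help="TSV with 'read_id' and 'quality' columns")
    parser.add_argument("--output", required=True, help="Filtered TSV to write")
    parser.add_argument("--min-quality", type=float, default=30.0)
    args = parser.parse_args()

    qc = pd.read_csv(args.input, sep="\t")
    kept = qc[qc["quality"] >= args.min_quality]
    kept.to_csv(args.output, sep="\t", index=False)
    print(f"kept {len(kept)} of {len(qc)} reads")

if __name__ == "__main__":
    main()
```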
Interview stages:
🔹 HR Interview;
🔹 Technical Interview;
🔹 Reference Check;
🔹 Offer.
Why Join Us?
At YozmaTech, we're self-starters who grow together. Every day, we tackle real challenges for real products – and have fun doing it. We work globally, think entrepreneurially, and support each other like family. We invest in your growth and care about your voice. With us, you'll always know what you're working on and why it matters.
From day one, you'll get:
🔹 Direct access to clients and meaningful products;
🔹 Flexibility to work remotely or from our offices;
🔹 A-team colleagues and a zero-bureaucracy culture;
🔹 Opportunities to grow, lead, and make your mark.
After you apply
We'll keep it respectful, clear, and personal from start to offer.
You'll always know what project you're joining – and how you can grow with us.
Everyone's welcome
Diversity makes us better. We create a space where you can thrive as you are.
Ready to build something meaningful?
Letβs talk. Your next big adventure might just start here.
· 18 views · 1 application · 11d
Senior Data Engineer
Full Remote · Ukraine · 6 years of experience · English - B2
Project Description:
We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.
As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.
Responsibilities:
Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
Implement object-oriented Python solutions (classes, inheritance, reusable modules)
Develop and maintain unit tests to ensure code quality and reliability
Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
Create and manage Databricks workflows, clusters, databases, and tables
Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
Support continuous improvement of data engineering standards, performance, and scalability (a minimal PySpark sketch follows below)
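As a hedged illustration of the Databricks work described above, a minimal PySpark/Spark SQL job might look like the sketch below; the table and column names are placeholders, not the actual Shelf Analytics schema.

```python
# Illustrative Databricks-style job joining shelf layout data with sales using
# PySpark and Spark SQL. Table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("shelf_share_report").getOrCreate()

shelf = spark.table("shelf_layout")   # placeholder table: store_id, sku, facings
sales = spark.table("daily_sales")    # placeholder table: store_id, sku, units

shelf.createOrReplaceTempView("shelf")
sales.createOrReplaceTempView("sales")

# Spark SQL for the join/aggregation, PySpark for the final shaping.
report = spark.sql("""
    SELECT s.store_id,
           s.sku,
           SUM(s.facings) AS facings,
           SUM(d.units)   AS units_sold
    FROM shelf s
    JOIN sales d
      ON s.store_id = d.store_id AND s.sku = d.sku
    GROUP BY s.store_id, s.sku
""").withColumn("units_per_facing", F.col("units_sold") / F.col("facings"))

report.write.mode("overwrite").saveAsTable("shelf_share_report")
```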
Mandatory Skills Description:
Strong programming skills in Python and PySpark
Hands-on experience with Databricks (workflows, clusters, tables, databases)
Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
Experience with pandas, dbx, and unit testing frameworks
Practical experience working with Azure Storage (ADLS) and access control (ACLs)
Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
Ability to work independently, analyze existing solutions, and propose improvements
Nice-to-Have Skills Description:
Experience with retail, CPG, or shelf analytics–related solutions
Familiarity with large-scale data processing and analytics platforms
Strong communication skills and a proactive, problem-solving mindset
Languages:
English: B2 Upper Intermediate
· 21 views · 1 application · 11d
Data Engineer for Shelf Analytics MΕ
Full Remote · Ukraine · 5 years of experience · English - B2
We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.
As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.
Responsibilities:
Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
Implement object-oriented Python solutions (classes, inheritance, reusable modules)
Develop and maintain unit tests to ensure code quality and reliability
Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
Create and manage Databricks workflows, clusters, databases, and tables
Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
Support continuous improvement of data engineering standards, performance, and scalability (see the test sketch below)
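To illustrate the unit-testing responsibility above, here is a minimal pytest sketch for a small PySpark transform; the function under test and its columns are invented for the example, not project code.

```python
# Hypothetical unit test for a small PySpark transformation.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_units_per_facing(df):
    """Toy transform: derive a units-per-facing metric."""
    return df.withColumn("units_per_facing", F.col("units_sold") / F.col("facings"))

@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield session
    session.stop()

def test_units_per_facing(spark):
    df = spark.createDataFrame(
        [("store-1", "sku-1", 4, 20)],
        ["store_id", "sku", "facings", "units_sold"],
    )
    result = add_units_per_facing(df).collect()[0]
    assert result["units_per_facing"] == pytest.approx(5.0)
```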
Mandatory Skills Description:
Strong programming skills in Python and PySpark
Hands-on experience with Databricks (workflows, clusters, tables, databases)
Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
Experience with pandas, dbx, and unit testing frameworks
Practical experience working with Azure Storage (ADLS) and access control (ACLs)
Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
Ability to work independently, analyze existing solutions, and propose improvements
Nice-to-Have Skills Description:
Experience with retail, CPG, or shelf analytics–related solutions
Familiarity with large-scale data processing and analytics platforms
Strong communication skills and a proactive, problem-solving mindset
· 47 views · 3 applications · 11d
Senior Data Engineer
Full Remote · Bulgaria, Spain, Poland, Portugal, Ukraine · 5 years of experience · English - B1
We are seeking a Senior Data Engineer to deliver data-driven solutions that optimize fleet utilization and operational efficiency across 46,000+ assets in 545+ locations. You will enable decision-making through demand forecasting, asset cascading, contract analysis, and risk detection, partnering with engineering and business stakeholders to take models from concept to production on AWS.
Requirements
- 5+ years of experience in data engineering
- 3+ years of hands-on experience building and supporting production ETL/ELT pipelines
- Advanced SQL skills (CTEs, window functions, performance optimization)
- Strong Python skills (pandas, API integrations)
- Proven experience with Snowflake (schema design, Snowpipe, Streams, Tasks, performance tuning, data quality)
- Solid knowledge of AWS services: S3, Lambda, EventBridge, IAM, CloudWatch, Step Functions
- Strong understanding of dimensional data modeling (Kimball methodology, SCDs)
- Experience working with enterprise systems (ERP, CRM, or similar)
Nice-to-haves
- Experience with data quality frameworks (Great Expectations, Deequ)
- Knowledge of CDC tools and concepts (AWS DMS, Kafka, Debezium)
- Hands-on experience with data lake technologies (Apache Iceberg, Parquet)
- Exposure to ML data pipelines and feature stores (SageMaker Feature Store)
- Experience with document processing tools such as Amazon Textract
Core Responsibilities
- Design and develop ETL/ELT pipelines using Snowflake, Snowpipe, internal systems, Salesforce, SharePoint, and DocuSign (a minimal loading sketch follows this list)
- Build and maintain dimensional data models in Snowflake using dbt, including data quality checks (Great Expectations, Deequ)
- Implement CDC patterns for near real-time data synchronization
- Manage and evolve the data platform across S3 Data Lake (Apache Iceberg) and Snowflake data warehouse
- Build and maintain Medallion architecture data lake in Snowflake
- Prepare ML features using SageMaker Feature Store
- Develop analytical dashboards and reports in Power BI
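As a rough sketch of the Snowflake loading pattern above (stage, copy, merge), assuming a hypothetical external stage and staging/dimension tables:

```python
# Hedged sketch of an incremental load into Snowflake: copy new files from an
# external S3 stage and MERGE them into a dimension table. Stage, table, and
# connection details are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="EXAMPLE-ACCOUNT",
    user="ETL_USER",
    password="<secret>",
    warehouse="ETL_WH",
    database="FLEET",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Land raw files from the external stage (e.g., dropped by upstream systems).
    cur.execute(
        "COPY INTO STAGING.ASSET_EVENTS FROM @FLEET_S3_STAGE/asset_events/ "
        "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
    )
    # Upsert into the modeled table (simplified SCD-style merge).
    cur.execute("""
        MERGE INTO ANALYTICS.DIM_ASSET AS tgt
        USING STAGING.ASSET_EVENTS AS src
          ON tgt.ASSET_ID = src.ASSET_ID
        WHEN MATCHED THEN UPDATE SET tgt.LOCATION_ID = src.LOCATION_ID, tgt.UPDATED_AT = src.EVENT_TS
        WHEN NOT MATCHED THEN INSERT (ASSET_ID, LOCATION_ID, UPDATED_AT)
          VALUES (src.ASSET_ID, src.LOCATION_ID, src.EVENT_TS)
    """)
finally:
    conn.close()
```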
What we offer
- Continuous learning and career growth opportunities
- Professional training and English/Spanish language classes
- Comprehensive medical insurance
- Mental health support
- Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more
- Flexible working hours
- Inclusive and supportive culture
· 30 views · 0 applications · 11d
Data Engineer for Shelf Analytics MΕ
Full Remote · Ukraine · 5 years of experience · English - B2
Project Description:
We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.
As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.
Responsibilities:
Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
Implement object-oriented Python solutions (classes, inheritance, reusable modules)
Develop and maintain unit tests to ensure code quality and reliability
Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
Create and manage Databricks workflows, clusters, databases, and tables
Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
Support continuous improvement of data engineering standards, performance, and scalability (see the table-management sketch below)
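A minimal sketch of managing Databricks databases and tables through Spark SQL, with placeholder names, might look like this:

```python
# Illustrative sketch of database/table management on Databricks through Spark
# SQL. Database, table, and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shelf_schema_setup").getOrCreate()

spark.sql("CREATE DATABASE IF NOT EXISTS shelf_analytics")
spark.sql("""
    CREATE TABLE IF NOT EXISTS shelf_analytics.shelf_layout (
        store_id    STRING,
        sku         STRING,
        facings     INT,
        captured_at TIMESTAMP
    )
""")

# Verify what exists before a workflow writes into it.
spark.sql("SHOW TABLES IN shelf_analytics").show(truncate=False)
```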
Mandatory Skills Description:
Strong programming skills in Python and PySpark
Hands-on experience with Databricks (workflows, clusters, tables, databases)
Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
Experience with pandas, dbx, and unit testing frameworks
Practical experience working with Azure Storage (ADLS) and access control (ACLs)
Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
Ability to work independently, analyze existing solutions, and propose improvements
· 44 views · 1 application · 10d
Data Engineer
Full Remote · Ukraine · 5 years of experience · English - B2
Summary
- 5+ years in data science or data engineering roles;
- Proficient in Python, SQL, and common data tools (Pandas, Plotly, Streamlit, Dash);
- Familiarity with large language models (LLMs) and deploying ML in production;
- This role is NOT focused on BI, data platforms, or research work;
- Good fit: Hands-on, Python-first Applied AI / GenAI engineers with real delivery ownership and client-facing experience;
- No fit: Data platform or BI profiles, architecture-heavy or lead-only roles, research-focused profiles, or candidates with only PoC-level GenAI exposure and no ownership.
Role:
This role is ideal for someone who is comfortable working throughout the entire pre-sales-to-delivery lifecycle, rolls up their sleeves to solve complex, multi-faceted problems, thrives as a technical communicator, and works well as a key member of a team.
Requirements:
- 5+ years in data science or data engineering roles;
- Proficient in Python, SQL, and common data tools (pandas, Plotly, Streamlit, Dash);
- Familiarity with large language models (LLMs) and deploying ML in production;
- Experience working with APIs and interpreting technical documentation;
- Client-facing mindset with clear ownership of decisions and outcomes;
· 81 views · 12 applications · 10d
Senior Data Engineer (PySpark)
Full Remote · Worldwide · 6 years of experience · English - B2
QIT Software is looking for a Data Engineer for a hospitality technology company running an analytics platform that serves 2,500+ hotels and 500+ restaurants. You will own and operate our AWS data infrastructure - building pipelines, fixing what breaks, and making the platform more reliable and scalable.
Project:
Hospitality Analytics Platform
Requirements:
- 6+ years hands-on data engineering (not architecture diagrams - actual pipelines in production)
- Strong Spark/PySpark and Python
- Advanced SQL
- AWS data stack: EMR, Glue, S3, Redshift (or similar), IAM, CloudWatch
- Terraform
Would be a plus:
- Kafka/Kinesis streaming experience
- Airflow or similar orchestration
- Experience supporting BI tools and analytics teams
Responsibilities:
- Build and operate Spark/PySpark workloads on EMR and Glue
- Design end-to-end pipelines: ingestion from APIs, databases, and files → transformation → delivery to analytics consumers
- Implement data quality checks, validation, and monitoring (see the sketch after this list)
- Optimize for performance, cost, and reliability – then keep it running
- Work directly with product and analytics teams to define data contracts and deliver what they need
- Manage infrastructure via Terraform
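To make the data-quality responsibility concrete, a lightweight PySpark check of a curated output might look like the sketch below; the S3 path and column names are placeholders.

```python
# Hedged sketch of lightweight data-quality checks on a pipeline output before
# it is delivered to analytics consumers. Path and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bookings_dq_checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/bookings/")  # placeholder path

failures = []

# 1. The table must not be empty.
if df.count() == 0:
    failures.append("bookings: no rows produced")

# 2. Business keys must be present and unique.
null_keys = df.filter(F.col("booking_id").isNull()).count()
if null_keys:
    failures.append(f"bookings: {null_keys} rows with NULL booking_id")
dupes = df.groupBy("booking_id").count().filter(F.col("count") > 1).count()
if dupes:
    failures.append(f"bookings: {dupes} duplicated booking_id values")

# Fail the job loudly so the orchestrator (and CloudWatch alarms) can react.
if failures:
    raise RuntimeError("; ".join(failures))
```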
Work conditions:
- The ability to work remotely from anywhere in the world;
- Flexible work schedule, no micromanagement, no strict deadlines, and no unpaid overtime;
- Work on European and American products with a modern technology stack in different industries (Finance, Technology, Health, Construction, Media, etc.);
- Salary reviews every year or on an individual basis;
- Accounting support and full payment of taxes by the company;
- 100% compensation for remote English lessons;
- 15 paid leaves (PTO) and public holidays.
· 54 views · 9 applications · 9d
ETL Developer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2
Description
We are looking for an ETL Developer to join our team and work on data integration for a pharmaceutical marketing company.
You will develop and support ETL processes that run in Docker containers. Your daily work will primarily involve writing complex SQL queries and views, performing data transformations, and ensuring accurate and timely delivery of data by monitoring notifications and logs in AWS CloudWatch. Work also involves scripting in Bash and Python for automation, SFTP data transfers, and connecting to APIs when required.
We work as a team, care about code and data quality, and like people who want to learn and improve.
Our teams have daily standups and direct communication with a client on a daily basis.
The platform processes sensitive data, so development is manual, controlled, and accuracy-driven rather than highly automated (a minimal ingestion sketch follows below).
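As a hedged sketch of the daily pattern described above (SFTP pull, transformation, load into a relational database), with placeholder hosts, paths, credentials, and table names:

```python
# Hedged sketch: pull a file over SFTP with paramiko, apply a pandas
# transformation, and load it into PostgreSQL. All names are placeholders.
import paramiko
import pandas as pd
from sqlalchemy import create_engine

# 1. Fetch the latest extract from the partner SFTP server.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="etl_user", password="<secret>")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("/outbound/prescriber_activity.csv", "/tmp/prescriber_activity.csv")
sftp.close()
transport.close()

# 2. Transform: basic cleanup and typing before loading.
df = pd.read_csv("/tmp/prescriber_activity.csv")
df["activity_date"] = pd.to_datetime(df["activity_date"])
df = df.dropna(subset=["prescriber_id"])

# 3. Load into a staging table; downstream SQL views build on this.
engine = create_engine("postgresql+psycopg2://etl_user:<secret>@db.example.com:5432/marketing")
df.to_sql("stg_prescriber_activity", engine, schema="staging", if_exists="replace", index=False)
```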
Requirements
- 3+ years of experience working with ETL processes or data pipelines
- Strong SQL skills: creating and debugging complex queries, aggregations, and validation logic
- Experience with a relational database (preferably PostgreSQL)
- Basic understanding of data warehouse concepts (facts, dimensions, SCD, star schema)
- Experience building ETL pipelines
- Python knowledge (Pandas, boto3, paramiko), connecting to SFTPs, APIs, and pulling/pushing data
- Understanding of clean code and good coding practices
- Experience using Git and pipelines
- Solid Bash scripting skills for automation and troubleshooting
- Experience with Docker (images, containers, passing data between containers)
- Basic knowledge of AWS, including:
- Running containers in ECS
- Mounting EFS volumes
- Viewing logs in CloudWatch
- English level B2 (can communicate and understand documentation)
- Willingness to learn and improve skills
- Interest in software development and data work
Nice to have
- Experience with Amazon Redshift, Snowflake, Postgres
- Experience using AWS CLI
- Knowledge of AWS services such as:
- ECR
- ECS
- EventBridge
- CloudWatch
- Lambda
- Step Functions
- Experience working with REST APIs
- Knowledge of NoSQL databases
- Experience with CI/CD tools
We offer:
- Possibility to propose solutions on a project
- Dynamic and challenging tasks
- Team of professionals
- Competitive salary
- Low bureaucracy
- Continuous self-improvement
- Long-term employment with paid vacation and other social benefits
- A bunch of perks
This vacancy is exclusively for Ukrainian developers!
· 17 views · 3 applications · 8d
Senior Snowflake Data Engineer
Full Remote · Ukraine · 5 years of experience · English - B2
The project is for one of the world-famous science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.
We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like dbt, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.
Responsibilities:
• Design and develop data pipelines using Snowflake and Snowpipe for real-time and batch ingestion.
• Implement CI/CD pipelines in Azure DevOps for seamless deployment of data solutions.
• Automate dbt jobs to streamline transformations and ensure reliable data workflows.
• Apply data modeling techniques including OLTP, OLAP, and Data Vault 2.0 methodologies to design scalable architectures.
• Document data models, processes, and workflows clearly for future reference and knowledge sharing.
• Build data tests, unit tests, and mock data frameworks to validate and maintain reliability of data solutions.
• Develop Streamlit applications integrated with Snowflake to deliver interactive dashboards and self-service analytics (a minimal sketch follows this list).
• Integrate SAP data sources into Snowflake pipelines for enterprise reporting and analytics.
• Leverage SQL expertise for complex queries, transformations, and performance optimization.
• Integrate cloud services across AWS, Azure, and GCP to support multi-cloud data strategies.
• Develop Python scripts for ETL/ELT processes, automation, and data quality checks.
• Implement infrastructure-as-code solutions using Terraform for scalable and automated cloud deployments.
• Manage RBAC and enforce data governance policies to ensure compliance and secure data access.
• Collaborate with cross-functional teams, including business analysts and business stakeholders, to deliver reliable data solutions.
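A minimal sketch of a Streamlit page backed by Snowflake, as referenced above; the connection details, table, and column names are placeholders, not the client's environment.

```python
# Hedged sketch of a small Streamlit page reading metrics from Snowflake.
import pandas as pd
import snowflake.connector
import streamlit as st

@st.cache_data(ttl=600)
def load_batch_metrics() -> pd.DataFrame:
    conn = snowflake.connector.connect(
        account="EXAMPLE-ACCOUNT", user="REPORT_USER", password="<secret>",
        warehouse="BI_WH", database="ANALYTICS", schema="QUALITY",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT BATCH_DATE, FAILED_CHECKS, ROWS_LOADED "
            "FROM DATA_QUALITY_DAILY ORDER BY BATCH_DATE"
        )
        # Requires the connector's pandas extra (pyarrow) to be installed.
        return cur.fetch_pandas_all()
    finally:
        conn.close()

st.title("Daily data quality")
df = load_batch_metrics()
st.line_chart(df.set_index("BATCH_DATE")["ROWS_LOADED"])
st.dataframe(df.tail(14))
```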
Mandatory Skills Description:
• Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
• Hands-on experience with Python, SQL, Jinja, and JavaScript for data engineering tasks.
• CI/CD expertise using Azure DevOps (build, release, version control).
• Experience automating dbt jobs for data transformations.
• Experience building Streamlit applications with Snowflake integration.
• Cloud services knowledge across AWS (S3, Lambda, Glue), Azure (Data Factory, Synapse), and GCP (BigQuery, Pub/Sub).
Nice-to-Have Skills Description:
- Cloud certifications are a plus
Languages:
- English: B2 Upper Intermediate
· 21 views · 0 applications · 8d
Senior Snowflake Data Engineer
Full Remote · Ukraine · 5 years of experience · English - B2
The project is for one of the world-famous science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.
We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like dbt, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.
Responsibilities:
• Design and develop data pipelines using Snowflake and Snowpipe for real-time and batch ingestion.
• Implement CI/CD pipelines in Azure DevOps for seamless deployment of data solutions.
• Automate dbt jobs to streamline transformations and ensure reliable data workflows.
• Apply data modeling techniques including OLTP, OLAP, and Data Vault 2.0 methodologies to design scalable architectures.
• Document data models, processes, and workflows clearly for future reference and knowledge sharing.
• Build data tests, unit tests, and mock data frameworks to validate and maintain reliability of data solutions.
• Develop Streamlit applications integrated with Snowflake to deliver interactive dashboards and self-service analytics.
• Integrate SAP data sources into Snowflake pipelines for enterprise reporting and analytics.
• Leverage SQL expertise for complex queries, transformations, and performance optimization.
• Integrate cloud services across AWS, Azure, and GCP to support multi-cloud data strategies.
• Develop Python scripts for ETL/ELT processes, automation, and data quality checks.
• Implement infrastructure-as-code solutions using Terraform for scalable and automated cloud deployments.
• Manage RBAC and enforce data governance policies to ensure compliance and secure data access.
• Collaborate with cross-functional teams, including business analysts and business stakeholders, to deliver reliable data solutions.
Mandatory Skills Description:
• Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
• Hands-on experience with Python, SQL, Jinja, and JavaScript for data engineering tasks.
• CI/CD expertise using Azure DevOps (build, release, version control).
• Experience automating dbt jobs for data transformations.
• Experience building Streamlit applications with Snowflake integration.
• Cloud services knowledge across AWS (S3, Lambda, Glue), Azure (Data Factory, Synapse), and GCP (BigQuery, Pub/Sub).
· 63 views · 10 applications · 6d
Data Platform Engineer
Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B1
WHO WE ARE
At Bennett Data Science, we've been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We're at the top of our field because we focus on actionable technology that helps people around the world. Our deep experience and product-first attitude set us apart from other groups and it's why people who work with us tend to stay with us long term.
WHY YOU SHOULD WORK WITH US
You'll work on an important problem that improves the lives of a lot of people. You'll be at the cutting edge of innovation and get to work on fascinating problems, supporting real products, with real data. Your perks include: expert mentorship from senior staff, competitive compensation, paid leave, flexible work schedule and ability to travel internationally.
Essential Requirements for Data Platform Engineer:
- Architecture & Improvement: Continuously review the current architecture and implement incremental improvements, facilitating a gradual transition of production operations from Data Science to Engineering.
- AWS Service Ownership: Own the full lifecycle (development, deployment, support, and monitoring) of client-facing AWS services (including SageMaker endpoints, Lambdas, and OpenSearch). Maintain high uptime and adherence to Service Level Agreements (SLAs). (A minimal endpoint smoke-check sketch follows this list.)
- ETL Operations Management: Manage all ETL processes, including the operation and maintenance of Step Functions and Batch jobs (scheduling, scaling, retry/timeout logic, failure handling, logging, and metrics).
- Redshift Operations & Maintenance: Oversee all Redshift operations, focusing on performance optimization, access control, backup/restore readiness, cost management, and general housekeeping.
- Performance Optimization: Post-stabilization of core monitoring and pipelines, collaborate with the Data Science team on targeted code optimizations to enhance reliability, reduce latency, and lower operational costs.
- Security & Compliance: Implement and manage the vulnerability monitoring and remediation workflow (Snyk).
- CI/CD Implementation: Establish and maintain robust Continuous Integration/Continuous Deployment (CI/CD) systems.
- Infrastructure as Code (Optional): Utilize IaC principles where necessary to ensure repeatable and streamlined release processes.
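To illustrate the endpoint-ownership and monitoring items above, a hedged sketch of a SageMaker smoke check that reports a custom CloudWatch metric might look like this; the endpoint name, payload, and metric namespace are placeholders, not the actual platform.

```python
# Hedged sketch of a smoke check for a client-facing SageMaker endpoint, with a
# custom CloudWatch metric for alerting.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
cloudwatch = boto3.client("cloudwatch")

def check_endpoint(endpoint_name: str = "recs-endpoint-prod") -> bool:
    payload = {"user_id": "smoke-test", "items": ["sku-1", "sku-2"]}
    try:
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="application/json",
            Body=json.dumps(payload),
        )
        json.loads(response["Body"].read())  # response must be parseable JSON
        healthy = True
    except Exception:
        healthy = False

    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Endpoints",
        MetricData=[{
            "MetricName": "SmokeCheckSuccess",
            "Dimensions": [{"Name": "EndpointName", "Value": endpoint_name}],
            "Value": 1.0 if healthy else 0.0,
        }],
    )
    return healthy

if __name__ == "__main__":
    print("healthy" if check_endpoint() else "unhealthy")
```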
Mandatory Hard Skills:
- AWS Core Services: Proven experience with production fundamentals (IAM, CloudWatch, and VPC networking concepts).
- AWS Deployment: Proficiency in deploying and operating AWS SageMaker and Lambda services.
- ETL Orchestration: Expertise in using AWS Step Functions and Batch for ETL and job orchestration.
- Programming & Debugging: Strong command of Python for automation and troubleshooting.
- Containerization: Competence with Docker/containers (build, run, debug).
- Version Control & CI/CD: Experience with CI/CD practices and Git (GitHub Actions preferred).
- Data Platform Tools: Experience with Databricks, or a demonstrated aptitude and willingness to quickly learn.
Essential Soft Skills:
- Accountability: Demonstrate complete autonomy and ownership over all assigned systems ("you run it, you fix it, you improve it").
- Communication: Fluent in English; capable of clear, direct communication, especially during incidents.
- Prioritization: A focus on delivering a minimally-supportable, deployable solution to meet deadlines, followed by optimization and cleanup.
- Incident Management: Maintain composure under pressure and possess strong debugging and incident handling abilities.
- Collaboration: Work effectively with the Data Science team while communicating technical trade-offs clearly and maintaining momentum.
· 20 views · 1 application · 5d
Senior/Lead Data Engineer
Full Remote · Ukraine · 5 years of experience · English - B2
Description
The GlobalLogic technology team is focused on next-generation health capabilities that align with the client's mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.
Requirements
An AWS Data Engineer designs, develops, and maintains scalable data solutions using AWS cloud services.
Key Responsibilities:
• Design, build, and manage ETL (Extract, Transform, Load) pipelines using AWS services (e.g., Glue, Lambda, EMR, Redshift, S3).
• Develop and maintain data architecture (data lakes, warehouses, databases) on AWS.
• Implement data quality and governance solutions.
• Automate data workflows and monitor pipeline health (a minimal trigger sketch follows this list).
• Ensure data security and compliance with company policies.
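As a hedged illustration of workflow automation on this stack, a Lambda handler that starts a Glue job when a file lands in S3 might look like the sketch below; the Glue job name and S3 layout are assumptions, not part of the client's system.

```python
# Hedged sketch: a Lambda handler that starts a Glue ETL job for each new S3
# object reported in an ObjectCreated notification.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Triggered by an S3 ObjectCreated notification."""
    started = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        run = glue.start_job_run(
            JobName="curate-claims-data",          # placeholder Glue job
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
        started.append(run["JobRunId"])
    return {"job_run_ids": started}
```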
Required Skills:
• Proficiency with AWS cloud services, especially data-related offerings (S3, Glue, Redshift, Athena, EMR, Kinesis, Lambda).
• Strong SQL and Python skills.
• Experience with ETL tools and frameworks.
• Familiarity with data modelling and warehousing concepts.
• Knowledge of data security, access management, and best practices in AWS.
Preferred Qualifications:
• AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect).
• Background in software engineering or data science.
• Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion.
Job responsibilities
- Develops, documents, and configures system specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.
- Involved in planning of system and development deployment, as well as responsible for meeting compliance and security standards.
- API development using AWS services in a scalable, microservices-based architecture
- Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
- May document testing and maintenance of system updates, modifications, and configurations.
- May act as a liaison with key technology vendor technologists or other business functions.
- Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
- Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or if a customisation solution would be required.
- Test the quality of a product and its ability to perform a task or solve a problem.
- Perform basic maintenance and performance optimisation procedures in each of the primary operating systems.
- Ability to document detailed technical system specifications based on business system requirements
- Ensures system implementation compliance with global & local regulatory and security standards (i.e. HIPAA, SOCII, ISO27001, etc.)
· 47 views · 9 applications · 5d
Data Engineer
Ukraine · 4 years of experience · English - B2
Role Summary
A key role in our data engineering team, working closely with the rest of the technology team to provide a first-class service to both internal and external users. In this role you will be responsible for building solutions that allow us to use our data in a robust, flexible, and efficient way while also maintaining the integrity of our data, much of which is of a sensitive nature.
Role and Responsibilities
Manages resources (internal and external) in the delivery of the product roadmap for our data asset. Key responsibilities include, but are not limited to:
- Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes
- Work closely with the development and product teams (both internal and external) to ensure that products meet the required specification prior to release.
- Working closely with our technology colleagues throughout the delivery lifecycle to ensure that all data related processes are efficient and accurate
- Providing expert assistance with design and implementation of all new products. All of our new technology stack has data at its heart.
- Ensuring data is available for business and management reporting purposes.
- Assist with the development and refinement of the agile process.
- Be an advocate for best practices and continued learning
- Strong technical understanding of a data experience
- Ensure the ongoing maintenance of your own CPD (continuing professional development)
- Carry out all duties in a manner that always reflects Financial Wellness Group's values and principles
Essential Criteria
- Extensive knowledge of using Python to build ETL and ELT products in AWS using Lambda and Batch processing (a minimal Lambda-plus-Batch sketch follows this list).
- A keen understanding of developing and tuning Microsoft SQL Server.
- Exposure to development in Postgres.
- A good understanding of CI/CD for data and the challenges inherent.
- Ability to use Source Control Systems such as Git/Azure DevOps
- An understanding of dimensional modelling and data warehousing methodologies and an interest in Data Lakehousing technologies.
- An understanding of Infrastructure as Code provisioning (for example, Terraform)
- The ability to rapidly adapt to new technologies and technical challenges.
- The flexibility to quickly react to changing business priorities.
- A team player, with a natural curiosity and a desire to learn new skills
- An interest in finding the "right way"
- Passionate about data delivery and delivering change
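To illustrate the Lambda-plus-Batch pattern referenced above, a hedged sketch that processes small files inline and defers large ones to AWS Batch might look like this; the bucket, queue, and job-definition names are placeholders.

```python
# Hedged sketch: small files are handled inside the Lambda, large ones are
# handed off to a containerised AWS Batch job. All resource names are placeholders.
import boto3

s3 = boto3.client("s3")
batch = boto3.client("batch")

SIZE_LIMIT_BYTES = 50 * 1024 * 1024  # beyond this, defer to Batch

def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

    if size <= SIZE_LIMIT_BYTES:
        # Small file: process directly in the Lambda (transformation omitted here).
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        return {"mode": "lambda", "bytes": len(body)}

    # Large file: submit a containerised ETL job to AWS Batch.
    job = batch.submit_job(
        jobName="etl-large-file",
        jobQueue="data-etl-queue",                 # placeholder queue
        jobDefinition="etl-python:3",              # placeholder job definition
        containerOverrides={"command": ["python", "etl.py", f"s3://{bucket}/{key}"]},
    )
    return {"mode": "batch", "jobId": job["jobId"]}
```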
What To Expect From Digicode?
Work from Anywhere: work from an office, from home, or travel continuously if that's your thing. Everything we do is online. As long as you have the Internet and your travel-nomad lifestyle doesn't affect the work process (you meet all deadlines and are present at all the meetings), you're all set.
Professional Development: We offer great career development opportunities in a growing company, an international work environment, paid language classes, a conference and education budget, and internal 42 Community training.
Work-life Balance: We provide employees with 18+ paid vacation days and paid sick leave, a flexible schedule, medical insurance for employees and their children, and a monthly budget for things like a gym or pool membership.
Culture of Openness: We're committed to fostering a community where everyone feels welcome, seen, and heard, with minimal bureaucracy and a flat organization structure.
Also: corporate gifts, corporate celebrations, free food & snacks, and play & relax rooms for those who visit the office.
Did we catch your attention? We'd love to hear from you.