Jobs
Data Engineering Team Lead
Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate

Automat-it is where high-growth startups turn when they need to move faster, scale smarter, and make the most of the cloud. As an AWS Premier Partner and Strategic Partner, we deliver hands-on DevOps, FinOps, and GenAI support that drives real results.
We work across EMEA and the US, fueling innovation and solving complex challenges daily. Join us to grow your skills, shape bold ideas, and help build the future of tech.
We're looking for a Data Engineering Team Lead to build and scale our Data & Analytics capability while delivering modern, production-grade data platforms for customers on AWS. You'll lead a team of Data Engineers, own delivery quality and timelines, and remain hands-on across architecture, pipelines, and analytics so the team ships fast, safely, and cost-effectively.
Work location: remote from Ukraine
If you are interested in this opportunity, please submit your CV in English.
Responsibilities
- Manage, coach, and grow a team of Data Engineers through 1:1s, goal setting, feedback, and career development.
- Own end-to-end delivery outcomes (scope, timelines, quality) across multiple projects; unblock the team and ensure on-time, high-quality releases.
- Lead customer-facing workshops, discovery sessions, and proof-of-concepts, serving as the primary technical point of contact to translate requirements into clear roadmaps, estimates, and trade-offs in plain language.
- Support solution proposals, estimates, and statements of work; contribute to thought leadership and reusable accelerators.
- Collaborate closely with adjacent teams (MLOps, DevOps, Data Science, Application Engineering) to ship integrated solutions.
- Design, develop, and deploy AWS-based data and analytics solutions to meet customer requirements. Ensure architectures are highly available, scalable, and cost-efficient.
- Develop dashboards and analytics reports using Amazon QuickSight or equivalent BI tools.
- Migrate and modernize existing data workflows to AWS. Re-architect legacy ETL pipelines to AWS Glue and move on-premises data systems to Amazon OpenSearch/Redshift for improved scalability and insights (a minimal Glue job sketch follows this list).
- Build and manage multi-modal data lakes and data warehouses for analytics and AI. Integrate structured and unstructured data on AWS (e.g., S3, Redshift) to enable advanced analytics and generative AI model training using tools like SageMaker.
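For context on what re-architecting an ETL pipeline to AWS Glue typically involves, here is a minimal Glue job skeleton; the database, table, column, and bucket names are placeholders, not a customer's actual setup.

```python
# Hypothetical AWS Glue job skeleton: reads a catalogued raw table from S3,
# applies a simple column mapping, and writes Parquet back to a curated prefix.
import sys

from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw events table registered in the Glue Data Catalog (placeholder names)
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="events"
)

# Keep only the columns the analytics layer needs and normalise their types
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("event_id", "string", "event_id", "string"),
        ("event_ts", "string", "event_ts", "timestamp"),
        ("payload", "string", "payload", "string"),
    ],
)

# Write the curated dataset as Parquet to the curated zone bucket
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-zone/events/"},
    format="parquet",
)
job.commit()
```

A real engagement would layer job bookmarks, partitioning, and error handling on top of a skeleton like this.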
Requirements
- Proven leadership experience with a track record of managing and developing technical teams.
- Production experience with AWS cloud and data services, including building solutions at scale with tools like AWS Glue, Amazon Redshift, Amazon S3, Amazon Kinesis, Amazon OpenSearch Service, etc.
- Skilled in AWS analytics and dashboard tools: hands-on expertise with services such as Amazon QuickSight or other BI tools (Tableau, Power BI) and Amazon Athena.
- Experience with ETL pipelines: ability to build ETL/ELT workflows (using AWS Glue, Spark, Python, SQL).
- Experience with data warehousing and data lakes - ability to design and optimize data lakes (on S3), Amazon Redshift for data warehousing, and Amazon OpenSearch for log/search analytics.
- Proficiency in programming (Python/PySpark) and SQL skills for data processing and analysis.
- Understanding of cloud security and data governance best practices (encryption, IAM, data privacy).
- Excellent communication and customer-facing skills with an ability to explain complex data concepts in clear terms. Comfortable working directly with clients and guiding technical discussions.
- Fluent written and verbal communication skills in English.
- Proven ability to lead end-to-end technical engagements and work effectively in fast-paced, Agile environments.
- AWS certifications, especially in Data Analytics or Machine Learning, are a plus.
- DevOps/MLOps knowledge: experience with Infrastructure as Code (Terraform), CI/CD pipelines, containerization, and AWS AI/ML services (SageMaker, Bedrock) is a plus.
Benefits
- Professional training and certifications covered by the company (AWS, FinOps, Kubernetes, etc.)
- International work environment
- Referral program: enjoy cooperation with your colleagues and get a bonus
- Company events and social gatherings (happy hours, team events, knowledge sharing, etc.)
- English classes
- Soft skills training
Country-specific benefits will be discussed during the hiring process.
Automat-it is committed to fostering a workplace that promotes equal opportunities for all and believes that a diverse workforce is crucial to our success. Our recruitment decisions are based on your experience and skills, recognising the value you bring to our team.
Data Engineer for Game analytical platform
Full Remote · EU · 3 years of experience · B2 - Upper Intermediate

Our client is at the forefront of innovation in the gaming industry, leveraging data and AI to enhance player experiences and drive community engagement. We are seeking a passionate data engineer to join our dynamic team, dedicated to transforming data into actionable insights and enabling advanced AI applications.
As a Data Engineer, you will play an important role in designing, building, and optimizing data pipelines and architectures for AI and machine learning initiatives. You will work closely with AI/ML engineers and software developers on various tasks to effectively collect, store, and process data from multiple sources, including social media and in-game interactions.
Key Responsibilities
Data Infrastructure & Warehousing:
- Design and implement data pipelines using AWS Redshift, S3, and related AWS services
- Build ETL/ELT processes to ingest data from game servers, blockchain networks, and third-party APIs
- Optimize Redshift performance through query optimization, table design, and distribution strategies
- Implement data modeling best practices for dimensional and fact table structures
Web3 & Blockchain Integration:
- Extract and process on-chain data from various blockchain networks (Ethereum, Polygon, BSC, etc.)
- Integrate NFT marketplace data, token transactions, and smart contract events
- Build real-time streaming pipelines for blockchain data using AWS Kinesis or similar services (see the sketch after this list)
- Ensure data accuracy and consistency across centralized game databases and decentralized blockchain data
Game Analytics & Metrics:
- Develop data models for player behavior, retention, monetization, and engagement metrics
- Create datasets supporting player lifecycle analysis, cohort studies, and revenue attribution
- Build data marts for game economy analytics, including token economics and NFT trading patterns
- Support A/B testing infrastructure and statistical analysis requirements
Data Quality & Governance:
- Implement data validation, monitoring, and alerting systems
- Establish data lineage tracking and documentation standards
- Ensure compliance with data privacy regulations and Web3 security best practices
- Collaborate with data analysts and scientists to understand requirements and optimize data delivery
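As an illustration of the streaming item above, here is a hedged sketch of publishing on-chain transfer events to a Kinesis stream with boto3; the stream name, region, and event fields are assumptions made up for the example, not the project's real schema.

```python
import json

import boto3

# Hypothetical stream and region
kinesis = boto3.client("kinesis", region_name="eu-west-1")


def publish_transfer_event(event: dict) -> None:
    """Push a single on-chain transfer event into the raw Kinesis stream."""
    kinesis.put_record(
        StreamName="onchain-transfers-raw",
        Data=json.dumps(event).encode("utf-8"),
        # Partitioning by contract keeps events for one collection ordered
        PartitionKey=event["contract_address"],
    )


publish_transfer_event({
    "chain": "polygon",
    "contract_address": "0xabc...",   # placeholder address
    "token_id": "42",
    "from_address": "0x111...",
    "to_address": "0x222...",
    "block_number": 56123456,
})
```

Downstream, a consumer (Kinesis Data Analytics, Lambda, or a Glue streaming job) would land these records in S3 or Redshift for the analytics models described above.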
Required Qualifications
Technical Skills:
- 3-5 years of experience in data engineering or a related field
- Strong proficiency with AWS Redshift, including query optimization and performance tuning
- Experience with AWS ecosystem (S3, Lambda, Glue, Kinesis, CloudFormation/CDK)
- Proficiency in SQL and at least one programming language (Python, Scala, or Java)
- Experience with ETL tools and frameworks (Apache Airflow, dbt, AWS Glue)
- Understanding of data warehousing concepts and dimensional modeling
Additional Requirements:
- Experience with version control systems (Git) and CI/CD practices
- Strong problem-solving skills and attention to detail
- Excellent communication skills and ability to work in cross-functional teams
- Bachelor's degree in Computer Science, Data Engineering, or related field
Preferred Qualifications
- Experience with other cloud platforms (GCP BigQuery, Azure Synapse)
- Knowledge of machine learning pipelines and MLOps practices
- Familiarity with container technologies (Docker, Kubernetes)
- Experience with NoSQL databases (DynamoDB, MongoDB)
- Previous experience in the gaming industry or Web3/crypto projects
- Certifications in AWS or other relevant technologies
Nice to Have
Web3 & Gaming Knowledge:
- Basic understanding of blockchain technology, smart contracts, and DeFi protocols
- Familiarity with Web3 data sources (The Graph, Moralis, Alchemy APIs)
- Experience with gaming analytics metrics and player behavior analysis
- Knowledge of real-time data processing and streaming architectures
We offer:
- Medical Insurance in Ukraine and Multisport program in Poland;
- Flexible working hours;
- Offices in Ukraine;
- All official holidays;
- Paid vacation and sick leaves;
- Tax & accounting services for Ukrainian contractors;
- The company is ready to provide all the necessary equipment;
- English classes up to three times a week;
- Mentoring and Educational Programs;
- Regular Activities on a Corporate level (Incredible parties, Team Buildings, Sports Events, Table Games, Tech Events);
- Advanced Bonus System.
Middle/Senior Data Engineer
Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

N-iX is looking for a Middle/Senior Data Engineer who would be involved in designing, implementing, and managing the new Data Lakehouse for our customer in the e-commerce domain. The ideal candidate has worked with data-related services in AWS and Snowflake and has experience with modern data approaches.
Our Client is a global full-service e-commerce and subscription billing platform on a mission to simplify software sales everywhere. For nearly two decades, we've helped SaaS, digital goods, and subscription-based businesses grow by managing payments, global tax compliance, fraud prevention, and recurring revenue at scale. Our flexible, cloud-based platform, combined with consultative services, helps clients accelerate growth, reach new markets, and build long-term customer relationships.
Data is at the heart of everything we do, powering insights, driving innovation, and shaping business decisions. We are building a next-generation data platform, and we're looking for a Senior Data Engineer to help us make it happen.
As a Data Engineer, you will play a key role in designing and building our new Data Lakehouse on AWS, enabling scalable, reliable, and high-quality data solutions. You will work closely with senior engineers, data architects, and product managers to create robust data pipelines, develop data products, and optimize storage solutions that support business-critical analytics and decision-making.
Responsibilities:
- Build and operate a modern Data Lakehouse on AWS (S3 + Iceberg) supporting ingestion, storage, transformation, and serving layers.
- Design and optimize ETL pipelines using PySpark, Airflow (MWAA), and Snowflake for scalability and cost efficiency (see the DAG sketch after this list).
- Automate workflows with Python scripts, integration validation, and monitoring across sources and layers.
- Implement and enforce data quality controls (Glue Data Quality, Great Expectations) and contribute to governance best practices.
- Collaborate with cross-functional teams (Data and Software Architects, Engineering Managers, Product Owners, and Data/Power BI Engineers) to refine data requirements and deliver trusted and actionable insights.
- Support CI/CD practices via GitLab, ensuring version-controlled, testable, and auditable data processes.
- Document data flows and business logic to maintain transparency, lineage, and knowledge transfer across teams.
- Continuously improve operational efficiency by troubleshooting issues, monitoring performance, and suggesting technical enhancements.
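To illustrate the orchestration item referenced above, here is a minimal Airflow DAG sketch of the kind MWAA would schedule; the DAG id, task names, and the two placeholder callables are hypothetical, not the project's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_with_pyspark():
    # Placeholder: trigger the PySpark job that writes curated Iceberg tables on S3
    ...


def load_into_snowflake():
    # Placeholder: copy the curated layer into Snowflake for analytics serving
    ...


with DAG(
    dag_id="lakehouse_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(
        task_id="transform_with_pyspark", python_callable=transform_with_pyspark
    )
    load = PythonOperator(
        task_id="load_into_snowflake", python_callable=load_into_snowflake
    )

    # Snowflake load only runs once the transformation has succeeded
    transform >> load
```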
Requirements:
- 3+ years of hands-on experience in Data Engineering, preferably in lakehouse or hybrid architectures.
- Proficiency in PySpark for large-scale transformations across layered datasets.
- Experience with Airflow (MWAA) for orchestrating end-to-end pipelines, dependencies, and SLA-driven workloads.
- Knowledge of AWS services used in modern data platforms: S3 + Iceberg, Glue (Catalog + Data Quality), Athena, EMR.
- Experience in Snowflake for analytics serving and cross-platform ingestion.
- Proficiency in Python for automation, validation, and auxiliary data workflows.
- Understanding of data modeling principles and harmonization principles, including SCD handling and cross-source entity resolution.
- Familiarity with CI/CD pipelines in Git/GitLab, ensuring tested, version-controlled, and production-ready deployments.
- Experience working with BI ecosystems (e.g., Power BI, dbt-like transformations, semantic layers).
- Upper-Intermediate English or higher, with the ability to document and explain complex concepts.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
Senior Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate

We are seeking a Senior Data Engineer to join our growing team. In this role, you will play a critical part in maintaining and improving large-scale data integrations, ensuring reliability, scalability, and performance. You will take ownership of debugging complex issues, resolving incidents, and working closely with both internal and external stakeholders. This is a hands-on position with a strong impact: from stabilizing pipelines to implementing process improvements that make our data engineering practice more effective and proactive.
The project focuses on large-scale data integration for the travel and hospitality sector.
Required skills
- Strong proficiency in Python and proven experience working with large-scale datasets.
- Solid background in designing, building, and maintaining data processing pipelines.
- Experience with cloud platforms (GCP, AWS, or Azure).
- Hands-on skills with SQL and data storage/querying systems (e.g., BigQuery, Bigtable, or similar); see the sketch after this list.
- Knowledge of containerization and orchestration tools (Docker, Kubernetes).
- Ability to troubleshoot and debug complex technical issues in distributed systems.
- Strong communication skills in English, with the ability to explain technical details to both technical and non-technical stakeholders.
- Experience using AI coding assistants (e.g., Cursor, GitHub Copilot, or similar) in day-to-day development tasks.
- Experience with Google Cloud services such as Pub/Sub, Dataflow, and ML-driven data workflows.
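As a small illustration of the BigQuery item above, the following sketch runs an aggregate query with the official Python client; the dataset, table, and column names are invented for the example and assume application-default credentials are configured.

```python
from google.cloud import bigquery

client = bigquery.Client()  # relies on application-default credentials

# Hypothetical table and columns: count today's failed ingestion records per partner
query = """
    SELECT partner_id, COUNT(*) AS failed_records
    FROM `analytics.raw_reservations`
    WHERE ingestion_status = 'FAILED'
      AND ingestion_date = CURRENT_DATE()
    GROUP BY partner_id
    ORDER BY failed_records DESC
"""

for row in client.query(query).result():
    print(f"{row.partner_id}: {row.failed_records} failed records today")
```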
Would be a plus
- Experience with airline, travel, or hospitality-related datasets.
- Exposure to observability and monitoring tools for large-scale data systems.
- Experience building AI-powered solutions or integrating AI pipelines/APIs into software projects.
- Experience with second-tier PMS vendors such as Tesipro or Maestro, or with any property management system APIs.
Responsibilities
- Maintain and enhance existing data integrations, ensuring the reliability, accuracy, and quality of incoming data.
- Lead the investigation and resolution of complex incidents by performing deep technical analysis and debugging.
- Communicate effectively with stakeholders (including customer-facing teams and external partners) by providing transparent and timely updates.
- Collaborate with partners to troubleshoot integration issues and ensure smooth data flow.
- Identify opportunities to improve processes, tooling, and documentation to scale and streamline data operations.
- Contribute to the design and delivery of new data engineering solutions supporting business-critical systems.
Middle Data Engineer
Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

Description
Our Client is a Fortune 500 company and is one of the biggest global manufacturing companies operating in the fields of industrial systems, worker safety, health care, and consumer goods. The company is dedicated to creating the technology and products that advance every business, improve every home and enhance every life.
As a Data Engineer for our Data Mesh platform, you will design, develop, and maintain data pipelines & models, ensuring high-quality, domain-oriented data products. You will collaborate with cross-functional teams and optimize data processes for performance and cost efficiency. Your expertise in big data technologies, cloud platforms, and programming languages will be crucial in driving the success of our Data Mesh initiatives.
Requirements
Minimum Requirements:
- Proficiency in Python for data processing and automation.
- Strong SQL skills for querying and manipulating data.
- Minimum of 3 years of experience in SQL and Python programming languages, specifically for data engineering tasks.
- Good English (min. B2 level).
- Experience with cloud platforms, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, etc.).
- Experience with Spark and Databricks or similar big data processing and analytics platforms
- Experience working with large data environments, including data processing, data integration, and data warehousing.
- Experience with data quality assessment and improvement techniques, including data profiling, data cleansing, and data validation.
- Familiarity with data lakes and their associated technologies, such as Azure Data Lake Storage, AWS S3, or Delta Lake, for scalable and cost-effective data storage and management.
- Experience with NoSQL databases, such as MongoDB or Cosmos DB, for handling unstructured and semi-structured data.
Additional Skillsets (Nice to Have):
- Familiarity with Agile and Scrum methodologies, including working with Azure DevOps and Jira for project management.
- Knowledge of DevOps methodologies and practices, including continuous integration and continuous deployment (CI/CD).
- Experience with Azure Data Factory or similar data integration tools for orchestrating and automating data pipelines.
- Ability to build and maintain APIs for data integration and consumption.
- Experience with data backends for software platforms, including database design, optimization, and performance tuning.
Job responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.
- Implement data quality checks and ensure data integrity across various data sources (see the sketch after this list).
- Optimize and tune data pipelines for performance and scalability.
- Develop and maintain data models and schemas to support data mesh architecture.
- Work with cloud platforms, particularly Azure, to deploy and manage data infrastructure.
- Participate in Agile development processes, including sprint planning, stand-ups, and retrospectives.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution.
- Document data engineering processes, best practices, and standards.
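As a sketch of the data quality checks mentioned above, the snippet below profiles null rates in a Spark table and fails fast on a missing key; the table and column names are placeholders and it assumes an existing Spark session (e.g., Databricks).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-profile").getOrCreate()
df = spark.table("bronze.customer_orders")  # placeholder source table

# Null-rate profile per column, as a quick data-quality snapshot before loading downstream
total = df.count()
null_rates = df.select(
    [(F.sum(F.col(c).isNull().cast("int")) / F.lit(total)).alias(c) for c in df.columns]
)
null_rates.show(truncate=False)

# Fail fast if a critical business key is ever missing
missing_keys = df.filter(F.col("order_id").isNull()).count()
if missing_keys > 0:
    raise ValueError(f"{missing_keys} rows are missing order_id; aborting load")
```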
Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B1 - Intermediate

Key requirements:
- 3+ years of experience as a data engineer
- Key technologies: Kafka, ClickHouse, MLflow, RabbitMQ, and Celery, as well as cloud solutions such as SageMaker
Basic requirement: ability to build data pipelines
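As a rough sketch of such a pipeline, the snippet below consumes player events from Kafka and batch-inserts them into ClickHouse using kafka-python and clickhouse-driver; the topic, table, hosts, and event fields are assumptions for illustration only.

```python
import json

from kafka import KafkaConsumer
from clickhouse_driver import Client

# Placeholder topic, group, and hosts
consumer = KafkaConsumer(
    "player-events",
    bootstrap_servers="localhost:9092",
    group_id="retention-loader",
    enable_auto_commit=False,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
clickhouse = Client(host="localhost")

batch = []
for message in consumer:
    event = message.value
    batch.append((event["player_id"], event["event_type"], event["ts"]))

    # Flush to ClickHouse in batches, then commit the Kafka offsets
    if len(batch) >= 1000:
        clickhouse.execute(
            "INSERT INTO player_events (player_id, event_type, ts) VALUES",
            batch,
        )
        consumer.commit()
        batch = []
```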
About the Product
The product is an AI-powered platform built for the iGaming sector, focused on improving user retention and engagement. It provides casino platforms with tools such as personalized interactions, workflow automations, and AI assistants. The platform acts as a retention layer across the player lifecycle, helping predict, prevent, and personalize key moments - from onboarding to churn, through smart automation and AI.
Senior Data Engineer (Java)
Full Remote · Poland · Product · 5 years of experience · B2 - Upper Intermediate

Who We Are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

About the Product:
Our platform provides organizations with real-time visibility and control over their digital environments, enabling IT teams to detect, diagnose, and resolve issues before they impact employees. It integrates multiple products into a single, unified experience, leveraging AI-driven automation, intelligent data processing, and scalable architecture to enhance productivity across global workplaces. The DEX Platform team builds the core infrastructure that powers these capabilities, delivering high-performance backend services and advanced data pipelines at scale.

About the Role:
We are looking for an experienced Senior Data Engineer to join our advanced employee experience monitoring and optimization platform and take a leading role in building the next generation of our data infrastructure. This role involves designing and implementing large-scale, real-time data pipelines and backend services that support both operational workflows and AI-driven insights.

You will work end-to-end, covering architecture, development, deployment, and ongoing production monitoring, while collaborating closely with backend, AI, and data specialists to ensure high performance, scalability, and reliability.
Key Responsibilities:
- Design, develop, and maintain robust backend services and data processing pipelines for large-scale, real-time environments.
- Build and optimize streaming solutions using technologies like Kafka, Flink, and other stream-processing frameworks.
- Own the full lifecycle of services: architecture, implementation, deployment, monitoring, and scaling.
- Collaborate with cross-functional teams, including backend engineers, AI developers, and data analysts, to deliver production-ready solutions.
- Ensure compliance, security, and observability for all data-related systems.
- Work with cloud infrastructure to design and deploy scalable solutions.
- Troubleshoot and resolve production issues with a focus on high availability and system resilience.
Required Competence and Skills:
- 5+ years of backend/data engineering experience.
- Strong experience with Java (Vert.x or Spring)
- Solid understanding of microservices architecture and cloud platforms (Azure, AWS, or GCP).
- Hands-on experience with Kafka and streaming frameworks such as Kafka Streams, Flink, Spark, or Beam.
- Strong foundation in object-oriented design, design patterns, and clean code principles.
- Experience in production-aware environments, including monitoring, troubleshooting, and optimization.
- Comfortable designing, deploying, and maintaining backend services and data flows.
- Passion for continuous learning, experimenting with new technologies, and building reliable systems at scale.
- Strong product mindset, open-mindedness, and flexibility to work with different technologies as per company needs
- Excellent communication skills in English (Hebrew a plus).
- Team player with a positive attitude and a passion for delivering high-quality products.
Nice to have:
- Experience with Node.js (NestJS/Express).
- Familiarity with AI-first development tools (e.g., GitHub Copilot, Cursor).
- Knowledge of Postgres, Redis, or ClickHouse.
Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · B1 - Intermediate

J-VERS is a programmatic job advertising product that helps employers find the right candidates in any industry by optimizing the hiring process. Founded in 2023, the company has grown 5x, building a fully remote team that serves over 150 enterprise clients across the US and EU through AI-powered advertising technology.
We're transforming how companies hire, and we're just getting started. Want in? Join us as the first Data Engineer and help shape the future of recruitment. You'll be helping our customers achieve their hiring goals.
What You'll Be Doing:
- Design, implement, and maintain ETL/ELT pipelines for ingesting data from publishers, clients, and internal systems;
- Develop and optimize data models for analytics and machine learning (event streams, campaign performance, conversions);
- Ensure high data quality, reliability, and freshness across datasets;
- Collaborate with ML Engineers to prepare training datasets and production feature pipelines;
- Collaborate with Backend and Frontend developers to deliver data APIs and reporting functionality;
- Maintain and optimize data warehouse/lake infrastructure (e.g., AWS S3, Glue, Athena, Redshift, Postgres);
- Implement monitoring, logging, and alerting for pipelines and data jobs (see the sketch after this list);
- Document data flows, schemas, and business definitions.
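To illustrate the monitoring and freshness items above, here is a hedged sketch that publishes a custom data-freshness metric to CloudWatch, which an alarm could then act on; the namespace, metric, and dataset names are invented for the example.

```python
from datetime import datetime, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")


def report_freshness(dataset: str, last_loaded_at: datetime) -> None:
    """Publish how many minutes have passed since the dataset was last loaded."""
    lag_minutes = (datetime.now(timezone.utc) - last_loaded_at).total_seconds() / 60
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Freshness",          # hypothetical namespace
        MetricData=[{
            "MetricName": "MinutesSinceLastLoad",
            "Dimensions": [{"Name": "Dataset", "Value": dataset}],
            "Value": lag_minutes,
            "Unit": "None",
        }],
    )


# Example: report freshness for the campaign performance dataset
report_freshness(
    "campaign_performance",
    datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc),
)
```

A CloudWatch alarm on this metric can then page the team whenever a pipeline silently stops delivering fresh data.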
You are our ideal candidate if you have:
- 3+ years of experience as a Data Engineer;
- Strong knowledge of SQL and experience with relational databases;
- Hands-on experience with Python for data pipelines;
- Experience with cloud-based data platforms (AWS, Glue, S3, Athena);
- Experience with version control (Git) and CI/CD workflows;
- Strong problem-solving skills and ability to work in a fast-paced startup environment;
- Fluency in the Ukrainian language.
It would be a plus if you have:
- Backend/API skills: Experience building REST APIs to expose processed data to applications;
- MLOps support: Familiarity with ML model lifecycle (feature stores, model deployment, monitoring);
- Streaming data: Knowledge of Kafka, Kinesis, or Pub/Sub for real-time pipelines;
- DevOps mindset: Experience with Terraform, Docker, Kubernetes for infra-as-code and deployment;
- Familiarity with orchestration tools (Airflow, or similar).
You'll thrive at J-Vers if you are:
- Self-motivated and comfortable with autonomy;
- Passionate about using technology to solve real problems;
- Open to collaboration and knowledge sharing;
- Results-oriented and focused on impact;
- Curious and committed to continuous learning.
Why Join Us:
Work Without Limits:
- Remote-first team with no location limits;
- Flexible 8-hour workday;
- Flat structure with direct access to leadership;
- Full set of equipment provided for your comfortable work.
Get Rewarded & Supported:
- Competitive compensation with transparent salary bands;
- Health insurance after 3 months of work;
- Mental health support;
- 24 vacation days + 20 paid sick days + 4 no-doc sick days + company-wide one-week break at year-end.
Endless Opportunities to Grow:
- Personal learning budget for professional development;
- Clear growth paths from Junior to Senior;
- Work with global clients (US & EU);
- Culture of feedback, mentorship, and constant learning.
Hiring Process:
- Intro call with a recruiter
- Values-based interview
- Technical interview with CTO
- Reference check
- Job offer
At J-Vers, you're not just filling a job; you're joining a mission. We're building something extraordinary where technology and humanity combine to transform hiring. Flex your skills. Expand your impact. Shape the future of global hiring.
Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · B1 - Intermediate

TJHelpers is committed to building a new generation of data specialists by combining mentorship, practical experience, and structured development through our "Helpers as a Service" model.
We're looking for a Senior Data Engineer to join our growing data team and help design, build, and optimize scalable data pipelines and infrastructure. You will work with cross-functional teams to ensure high-quality, reliable, and efficient data solutions that empower analytics, AI models, and business decision-making.
Responsibilities
- Design, implement, and maintain robust ETL/ELT pipelines for structured and unstructured data.
- Build scalable data architectures using modern tools and cloud platforms (e.g., AWS, GCP, Azure).
- Collaborate with data analysts, scientists, and engineers to deliver reliable data solutions.
- Ensure data quality, lineage, and observability across all pipelines.
- Optimize performance, scalability, and cost efficiency of data systems.
- Mentor junior engineers and contribute to establishing best practices.
Requirements
- Strong proficiency in one or more programming languages for data engineering: Python, Java, Scala, or SQL.
- Solid understanding of data modeling, warehousing, and distributed systems.
- Experience with modern data frameworks (e.g., Apache Spark, Flink, Kafka, Airflow, dbt).
- Familiarity with relational and NoSQL databases.
- Good understanding of CI/CD, DevOps practices, and agile workflows.
- Strong problem-solving skills and ability to work in cross-functional teams.
Nice to Have
- Experience with cloud data services (e.g., BigQuery, Snowflake, Redshift, Databricks).
- Knowledge of containerization and orchestration (Docker, Kubernetes).
- Exposure to data governance, security, and compliance frameworks.
- Familiarity with ML/AI pipelines and MLOps practices.
We Offer
- Mentorship and collaboration with senior data architects and engineers.
- Hands-on experience in designing and scaling data platforms.
- Personal learning plan, internal workshops, and peer reviews.
- Projects with real clients across fintech, healthcare, and AI-driven industries.
- Clear growth path toward Lead Data Engineer and Data Architect roles.
Senior Data Engineer
Full Remote · Romania · 4 years of experience · C1 - Advanced

Project Description:
We are looking for a Senior Data Engineer.
This role focuses on enabling RM practice for mapping business applications and services delivered within this domain. The position offers an opportunity to take ownership of data product pipelines, ensuring they are robust, maintainable, and aligned with business needs.
Responsibilities:
- Apply data engineering practices and standards to develop robust and maintainable data pipelines
- Analyze and organize raw data ingestion pipelines
- Evaluate business needs and objectives
- Support senior business stakeholders in defining new data product use cases and their value
- Take ownership of data product pipelines and their maintenance
- Explore ways to enhance data quality and reliability and be the "Quality Gatekeeper" for developed Data Products (see the sketch after this list)
- Adapt and apply best practices from the Data One community
- Be constantly on the lookout for ways to improve best practices and efficiencies and make concrete proposals.
- Take leadership and collaborate with other teams proactively to keep things moving
- Be flexible and take on other responsibilities within the scope of the Agile Team
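As an example of the kind of quality gate mentioned above, the sketch below runs a simple check against Snowflake via the Python connector; the connection parameters, table, and column are placeholders, not the project's real data product objects.

```python
import snowflake.connector

# Placeholder connection details for a service account used by quality checks
conn = snowflake.connector.connect(
    account="my_account",
    user="dq_service",
    password="***",
    warehouse="ANALYTICS_WH",
    database="DATA_PRODUCTS",
    schema="CORE",
)

cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM customer_hub WHERE customer_key IS NULL")
null_keys = cur.fetchone()[0]
cur.close()
conn.close()

# Block the release of the data product if the business key is ever missing
if null_keys:
    raise RuntimeError(f"Quality gate failed: {null_keys} rows without customer_key")
```

In practice, checks like this would typically live in dbt tests or a scheduled DataOps job rather than an ad hoc script.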
Requirements:
Must have:
- Hands-on experience with Snowflake
- Proven experience as a Data Engineer
- Solid knowledge of data modeling techniques (e.g., Data Vault)
- Advanced expertise with ETL tools (Talend, Alteryx, etc.)
- Strong SQL programming skills; working knowledge of Python is an advantage.
- Experience with data transformation tools (DBT)
- 2-3 years of experience in DB/ETL development (Talend and DBT preferred)
- Hold a B.Sc., B.Eng., or higher, or equivalent in Computer Science, Data Engineering, or related fields
- Be able to communicate in English at the level of C1+
Nice to have:
- Snowflake certification is a plus
- Experience with Agile methodologies in software development
- Familiarity with DevOps/DataOps practices (CI/CD, GitLab, DataOps.live)
- Experience with the full lifecycle management of data products
- Knowledge of Data Mesh and FAIR principles
We offer:
- Long-term B2B contract
- Friendly atmosphere and Trust-based managerial culture
- 100% remote work
- Innovative Environment: Work on cutting-edge AI technologies in a highly impactful program
- Growth Opportunities: Opportunities for professional development and learning in the rapidly evolving field of AI
- Collaborative Culture: Be a part of a diverse and inclusive team that values collaboration and innovation
- Participate only in international projects
- Referral bonuses for recommending your friends to the Unitask Group
- Paid Time Off (Vacation, Sick & Public Holidays in your country)
Data Engineer
Full Remote · Romania · 4 years of experience · C1 - Advanced

Description:
We are looking for a Mid-Senior Data Engineer to join our team and contribute to the development of robust, scalable, and high-quality data solutions.
This role blends hands-on data engineering with analytical expertise, focusing on building efficient pipelines, ensuring data reliability, and enabling advanced analytics to support business insights.
As part of our team, you will work with modern technologies such as Snowflake, DBT, and Python, and play a key role in enhancing data quality, implementing business logic, and applying statistical methods to real-world challenges.
This position offers the opportunity to work in an innovative environment, contribute to impactful AI-driven projects, and grow professionally within a collaborative and supportive culture.
Responsibilities:
- Build and maintain data pipelines on Snowflake (pipes and streams)
- Implement business logic to ensure scalable and reliable data workflows
- Perform data quality assurance and checks using DBT
- Conduct exploratory data analysis (EDA) to support business insights
- Apply statistical methods and decision tree techniques to data challenges
- Ensure model reliability through cross-validation
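As a small, self-contained illustration of the decision-tree and cross-validation items above, the sketch below uses scikit-learn on synthetic data; none of the features or targets come from the actual project.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for engineered behavioural features and a binary target
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

# Shallow tree to keep the model interpretable, validated with 5-fold cross-validation
model = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")

print(f"Mean ROC AUC across 5 folds: {scores.mean():.3f} (std {scores.std():.3f})")
```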
Requirements:
- Proven experience with Snowflake β hands-on expertise in building and optimizing data pipelines.
- Strong knowledge of DBT β capable of implementing robust data quality checks and transformations.
- Proficiency in Python, with experience using at least one of the following libraries: Pandas, Matplotlib, or Scikit-learn.
- Familiarity with Jupyter for data exploration, analysis, and prototyping.
- Hold a B.Sc., B.Eng., or higher, or equivalent in Computer Science, Data Engineering, or related fields
- Be able to communicate in English at the level of C1+
We offer:
- Long-term B2B contract
- Friendly atmosphere and Trust-based managerial culture
- 100% remote work
- Innovative Environment: Work on cutting-edge AI technologies in a highly impactful program
- Growth Opportunities: Opportunities for professional development and learning in the rapidly evolving field of AI
- Collaborative Culture: Be a part of a diverse and inclusive team that values collaboration and innovation
- Participate only in international projects
- Referral bonuses for recommending your friends to the Unitask Group
- Paid Time Off (Vacation, Sick & Public Holidays in your country)
ETL Architect (Informatica / Talend / SSIS)
Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced

We are seeking an experienced ETL Architect with strong expertise in data integration, ETL pipelines, and data warehousing. The ideal candidate will have hands-on experience with tools such as Informatica PowerCenter/Cloud, Talend, and Microsoft SSIS, and will be responsible for architecting scalable, secure, and high-performing ETL solutions. This role involves collaborating with business stakeholders, data engineers, and BI teams to deliver clean, consistent, and reliable data for analytics, reporting, and enterprise systems.
Key Responsibilities
- Design, architect, and implement ETL pipelines to extract, transform, and load data across multiple sources and targets.
- Define ETL architecture standards, frameworks, and best practices for performance and scalability.
- Lead the development of data integration solutions using Informatica, Talend, SSIS, or equivalent ETL tools.
- Collaborate with business analysts, data engineers, and BI developers to translate business requirements into data models and ETL workflows.
- Ensure data quality, security, and compliance across all ETL processes.
- Troubleshoot and optimize ETL jobs for performance, scalability, and reliability.
- Support data warehouse / data lake design and integration.
- Manage ETL environments, upgrades, and migration to cloud platforms (AWS, Azure, GCP).
- Provide mentoring, code reviews, and technical leadership to junior ETL developers.
Requirements
- 7+ years of experience in ETL development, with at least 3+ years in a lead/architect role.
- Strong expertise in one or more major ETL tools: Informatica (PowerCenter/Cloud), Talend, SSIS.
- Experience with relational databases (Oracle, SQL Server, PostgreSQL) and data warehousing concepts (Kimball, Inmon).
- Strong knowledge of SQL, PL/SQL, stored procedures, performance tuning.
- Familiarity with cloud data integration (AWS Glue, Azure Data Factory, GCP Dataflow/Dataproc).
- Experience in handling large-scale data migrations, batch and real-time ETL processing.
- Strong problem-solving, analytical, and architectural design skills.
- Excellent communication skills, with the ability to engage technical and non-technical stakeholders.
Nice to Have
- Hands-on experience with big data platforms (Hadoop, Spark, Kafka, Databricks).
- Knowledge of data governance, MDM, and metadata management.
- Familiarity with API-based integrations and microservices architectures.
- Prior experience in industries such as banking, insurance, healthcare, or telecom.
- Certification in Informatica, Talend, or cloud ETL platforms.
Oracle Cloud Architect
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

Description
You will be joining GlobalLogic's Media and Entertainment (M&E) practice, a specialized team within a leading digital engineering company. Our practice is at the forefront of the media industry's technological evolution, partnering with the world's largest broadcasters, content creators, and distributors. We have a proven track record of engineering complex solutions, including cloud-based OTT platforms (like VOS360), Media/Production Asset Management (MAM/PAM) systems, software-defined broadcast infrastructure, and innovative contribution/distribution workflows.
This engagement is for a landmark cloud transformation project for a major client in the media sector. The objective is to architect the strategic migration of a large-scale linear broadcasting platform from its current foundation on AWS to Oracle Cloud Infrastructure (OCI). You will be a key advisor on a project aimed at modernizing critical broadcast operations, enhancing efficiency, and building a future-proof cloud architecture.
Requirements
We are seeking a seasoned cloud professional with a deep understanding of both cloud infrastructure and the unique demands of the media industry.
- Expert-Level OCI Experience: Proven hands-on experience designing, building, and managing complex enterprise workloads on Oracle Cloud Infrastructure (OCI).
- Cloud Migration Expertise: Demonstrable experience architecting and leading at least one significant cloud-to-cloud migration project, preferably from AWS to OCI.
- Strong Architectural Acumen: Deep understanding of cloud architecture principles across compute, storage, networking, security, and identity/access management.
- Client-Facing & Consulting Skills: Exceptional communication and presentation skills, with the ability to act as a credible and trusted advisor to senior-level clients.
- Media & Entertainment Domain Knowledge (Highly Preferred): Experience with broadcast and media workflows is a significant advantage. Familiarity with concepts like linear channel playout, live video streaming, media asset management (MAM), and IP video standards (e.g., SMPTE 2110) is highly desirable.
- Infrastructure as Code (IaC): Proficiency with IaC tools, particularly Terraform, for automating OCI environment provisioning.
- Professional Certifications: An OCI Architect Professional certification is strongly preferred. Equivalent certifications in AWS are also valued.
Job responsibilities
As the OCI Architect, you will be the primary technical authority and trusted advisor for this cloud migration initiative. Your responsibilities will include:
- Migration Strategy & Planning: Assess the client's existing AWS-based media workflows and architect a comprehensive, phased migration strategy to OCI.
- Architecture Design: Design a secure, scalable, resilient, and cost-efficient OCI architecture tailored for demanding, 24/7 linear broadcast operations. This includes defining compute, storage, networking (including IP video transport), and security models.
- Technical Leadership: Serve as the subject matter expert on OCI for both the client and GlobalLogic engineering teams, providing hands-on guidance, best practices, and technical oversight.
- Stakeholder Engagement: Effectively communicate complex architectural concepts and migration plans to senior client stakeholders, technical teams, and project managers.
- Proof of Concept (PoC) Execution: Lead and participate in PoCs to validate architectural designs and de-risk critical components of the migration.
- Cost Optimization: Develop cost models and identify opportunities for optimizing operational expenses on OCI, ensuring the solution is commercially viable.
- Documentation: Create and maintain high-quality documentation, including architectural diagrams, design specifications, and operational runbooks.
Data Engineer (with Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

Main Responsibilities:
The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.
You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, in implementation projects for corporate clients across the EU, CIS, the United Kingdom, and the Middle East.
Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.
Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization (see the sketch below).
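As a rough sketch of such an ETL step, the snippet below reads raw CSV files from ADLS Gen2, cleanses them, and writes a Delta table; the storage account, container, paths, and columns are placeholders and assume a Spark environment (Synapse or Databricks) with Delta Lake and storage access already configured.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("staging-load").getOrCreate()

# Placeholder landing path on ADLS Gen2
raw = spark.read.option("header", "true").csv(
    "abfss://landing@mydatalake.dfs.core.windows.net/sales/2024/"
)

# Basic cleansing: de-duplicate by key, parse dates, drop rows without an amount
cleansed = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount").isNotNull())
)

# Persist the staged dataset as a Delta table for downstream models and reports
cleansed.write.mode("overwrite").format("delta").save(
    "abfss://staging@mydatalake.dfs.core.windows.net/sales/"
)
```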
Mandatory Requirements:
- 2+ years of experience, ideally within a Data Engineer role.
- Understanding of data modeling, data warehousing concepts, and ETL processes
- Experience with Azure Cloud technologies
- Experience in distributed computing principles and familiarity with key architectures, broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)
- Understanding of landing and staging areas, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)
- SQL skills
- Communication and interpersonal skills
- English: B2
- Ukrainian language
It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, or advanced analytics projects, and/or a professional certification in data & analytics.
We offer:
- Professional growth and international certification
- Free-of-charge technical and business trainings and the best bootcamps (worldwide, including courses at Microsoft HQ in Redmond)
- Innovative data & analytics projects and practical experience with cutting-edge Azure data & analytics technologies on various customers' projects
- Great compensation and individual bonus remuneration
- Medical insurance
- Long-term employment
- Individual development plan
Data Engineer/Analyst (Relocation to Spain)
Office Work · Spain · Product · 3 years of experience

Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange?
We are looking for a Data Engineer/Analyst with ETL/ELT experience for the Spanish office of the most famous Ukrainian company. Working with big data, a strong team, assistance with family relocation, and top conditions.
Main Responsibilities
- Build and maintain analytics for PnL, risk, and positions (see the sketch after this list).
- Monitor key performance and risk metrics.
- Develop and optimize ETL/ELT pipelines (both batch and real-time).
- Configure and enhance BI dashboards (Tableau, Grafana).
- Support alerts and anomaly detection mechanisms.
- Work with internal databases, APIs, and streaming data pipelines.
- Collaborate closely with risk, engineering, and operations teams.
- Contribute to the development of the analytics platform: from storage to visualization.
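As an illustrative sketch of the PnL and position analytics listed above, the snippet below computes running positions and daily PnL with pandas; the trades, accounts, and column names are synthetic and purely for demonstration.

```python
import pandas as pd

# Synthetic trade log (placeholder columns, not real exchange data)
trades = pd.DataFrame({
    "account": ["A", "A", "B", "A", "B"],
    "ts": pd.to_datetime([
        "2024-01-01 10:00", "2024-01-01 11:00", "2024-01-01 11:30",
        "2024-01-02 09:15", "2024-01-02 10:45",
    ]),
    "qty": [1.0, -0.5, 2.0, 1.5, -1.0],            # signed position changes
    "realized_pnl": [0.0, 12.5, 0.0, -3.2, 7.8],
}).sort_values("ts")

# Running position and cumulative PnL per account (window-function style)
trades["position"] = trades.groupby("account")["qty"].cumsum()
trades["cum_pnl"] = trades.groupby("account")["realized_pnl"].cumsum()

# Daily PnL per account, the kind of series a Tableau or Grafana dashboard would plot
daily = (
    trades.set_index("ts")
          .groupby("account")["realized_pnl"]
          .resample("1D").sum()
          .rename("daily_pnl")
)
print(daily)
```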
Mandatory Requirements
- 3+ years of experience as a Data Engineer
- Strong proficiency in Python (pandas, numpy, pyarrow, SQLAlchemy).
- Deep knowledge of SQL (analysis, aggregation, window functions).
- Experience with BI tools (Tableau, Grafana).
- Scripting experience in Python for automation and report integration.
- Solid understanding of trading principles, margining, VaR, and risk models.
- Proven ability to work with large-scale datasets (millions of rows, low-latency environments).
- Experience working with technical teams to deliver business-oriented analytics.

We offer
Immerse yourself in Crypto & Web3:
- Master cutting-edge technologies and become an expert in the most innovative industry.
Work with the Fintech of the Future:
- Develop your skills in digital finance and shape the global market.
Take Your Professionalism to the Next Level:
- Gain unique experience and be part of global transformations.
Drive Innovations:
- Influence the industry and contribute to groundbreaking solutions.
Join a Strong Team:
- Collaborate with top experts worldwide and grow alongside the best.
Work-Life Balance & Well-being:
- Modern equipment.
- Comfortable working conditions and an inspiring environment to help you thrive.
- 30 calendar days of paid leave.
- Additional days off for national holidays.

With us, you'll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you're ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
Let's Build the Future Together!