Jobs
· 25 views · 9 applications · 14d
Azure Data and AI Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
Requirements:
- Must have strong experience with Azure;
- Must have strong experience in data engineering;
- Must have strong experience with AI.
We offer:
• Attractive financial package
• Paid vacation, holidays and sick leaves
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
• Flexible working hours
· 31 views · 7 applications · 28d
GCP Data Engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
Requirements:
- Experience with GCP (BigQuery and Dataform)
- Excellent communication skills
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
• Flexible working hours
· 354 views · 40 applications · 8d
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Experience working with high-volume tables (10M+ rows).
Optional skills (as a plus):
• Experience with Spark (pyspark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and machine learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (API, Integration logic);
• Implement different data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 1128 views · 114 applications · 8d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · B1 - Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks;
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions;
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems;
We offer:
• Onboarding phase with hands-on experience with major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark
• Opportunity to work with the high-skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 67 views · 6 applications · 25d
Data Engineer
Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate
We are looking for an experienced Data Engineer to design and maintain robust data infrastructure across our systems. In this role, you will be responsible for building scalable data pipelines, ensuring data integrity, and integrating third-party data sources. Your primary focus will be to enable efficient data flow and support analytical capabilities across the organization. You will also contribute to the development of our data architecture, implement best engineering practices, and collaborate closely with cross-functional teams to turn raw data into actionable insights.
Responsibilities
- Communicate with both technical and non-technical audiences to gather requirements
- Review and analyze data and logic to ensure consistency and accuracy
- Design, implement, and maintain data pipelines for efficient data flow
- Integrate and support developed solutions
- Research and evaluate third-party components for potential use
- Follow best engineering practices: refactoring, code review, testing, continuous delivery, and Scrum
- Design, optimize, and support data storage
Requirements
- 5+ years of experience in data engineering
- Experience in requirement gathering and communication with stakeholders
- Strong knowledge of DWH (data warehouse) architecture and principles
- Practical experience building ETL pipelines and designing data warehouses
- Deep experience with Python with a strong focus on PySpark
- Proficiency in SQL and databases such as PostgreSQL, ClickHouse, MySQL
- Hands-on experience with data scraping and integrating third-party sources and APIs
- Solid understanding of software design patterns, algorithms, and data structures
- Intermediate English proficiency
Will be a plus
- Experience with RabbitMQ or Kafka
- Understanding of web application architecture
- Familiarity with DataOps practices
- Background in FinTech or Trading domains
We offer
- Tax expenses coverage for private entrepreneurs in Ukraine
- Expert support and guidance for Ukrainian private entrepreneurs
- 20 paid vacation days per year
- 10 paid sick leave days per year
- Public holidays as per the company's approved Public holiday list
- Medical insurance
- Opportunity to work remotely
- Professional education budget
- Language learning budget
- Wellness budget (gym membership, sports gear and related expenses)
· 25 views · 3 applications · 25d
Cloud System Engineer
Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary
Requirements:
- Knowledge of the core functionality of virtualization platforms;
- Experience implementing and migrating workloads in virtualized environment;
- Experience in complex IT solutions and Hybrid Cloud solution projects.
- Good understanding of IT infrastructure services is a plus;
- Strong troubleshooting skills in complex environments in case of failure;
- At least basic knowledge of networking and information security is an advantage;
- Hyper-V, Proxmox, or VMware experience would be an advantage;
- Experience in services outsourcing (as customer and/or provider) is an advantage;
- 2+ years of work experience in a similar position;
- Scripting and programming experience in PowerShell/Bash is an advantage;
- Strong team communication skills, both verbal and written;
- Experience in writing and preparing technical documentation;
- English: intermediate level is the minimum and is mandatory for communication with global teams;
- Industry certification focused on the relevant solution area.
Areas of responsibility include:
- Participating in deployment and IT-infrastructure migration projects, including Hybrid Cloud solution projects;
- Client support;
- Consulting on the migration of IT workloads in complex infrastructures;
- Presales support (articulating service value in the sales process; upsell and cross-sell capability);
- Project documentation: technical concepts;
- Education and development in the professional area, including necessary certifications.
· 452 views · 40 applications · 8d
Middle Data Engineer
Full Remote · Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate
Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project — a cutting-edge data intelligence platform for e-commerce analytics. You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, analysis, and integrations. We are waiting for your CV!
Requirements:
- 2+ years of commercial experience with Python.
- Experience working with PostgreSQL databases.
- Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
- Solid understanding of ETL principles and best practices.
- Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
- Experience working with Linux environments, cloud services (AWS), and Docker.
- Strong decision-making capabilities with the ability to work independently and proactively.
Will be a plus:
- Experience in web scraping, data extraction, cleaning, and visualization.
- Understanding of multiprocessing and multithreading, including process and thread management.
- Familiarity with Redis.
- Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
- Experience with Flask / Flask-RESTful for API development.
- Knowledge and experience with Kafka.
Key Responsibilities:
- Develop and maintain a robust data processing architecture using Python.
- Design and manage data pipelines using Kafka and SQS (see the sketch after this list).
- Optimize code for better performance and maintainability.
- Design and implement efficient ETL processes.
- Work with AWS technologies to ensure flexible and reliable data processing systems.
- Collaborate with colleagues, actively participate in code reviews, and improve technical knowledge.
- Take responsibility for your tasks and suggest improvements to processes and systems.
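As a rough illustration of the Kafka-to-SQS pipeline work mentioned above, here is a minimal Python sketch. The topic name, queue name, consumer group, and region are hypothetical placeholders, and a production pipeline would add batching, retries, and dead-letter handling.

```python
# A minimal sketch (not production code): consume JSON events from a Kafka
# topic and forward them to an SQS queue. Topic, queue, group id, and region
# are hypothetical placeholders.
import json

import boto3                      # pip install boto3
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "product-events",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="sqs-forwarder",                      # needed to commit offsets
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,
)

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="product-events-queue")["QueueUrl"]

for message in consumer:
    # Forward the record, then commit the offset only after the send
    # succeeds, so a crash cannot silently drop events.
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(message.value))
    consumer.commit()
```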
We offer:
- Working in a fast-growing company;
- Great networking opportunities with international clients, challenging tasks;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities, corporate events.
· 46 views · 3 applications · 28d
Senior Market Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate
We are looking for a skilled and experienced Software Engineer to join our team, building high-performance real-time data pipelines to process financial market data, including security prices for various asset classes such as equities, options, futures, and more. You will play a key role in designing, developing, and optimizing data pipelines that handle large volumes of data with low latency and high throughput, ensuring that our systems can process market data in real-time and batch modes.
Key Responsibilities:
- Architect, develop, and enhance market data systems
- Contribute to the software development lifecycle in a collaborative team environment, including design, implementation, testing, and support
- Design highly efficient, scalable, mission-critical systems
- Maintain good software quality and test coverage
- Participate in code reviews
- Troubleshoot incidents and reported bugs
Requirements:
- Bachelor’s or advanced degree in Computer Science or Electrical Engineering
- Proficiency in the following programming languages: Java, Python or Go
- Prior experience working with equities or futures market data, such as CME data or US equities options, is a must
- Experience in engineering and supporting Market Data feed handlers
- Technically fluent (Python, SQL, JSON, ITCH, FIX, CSV); comfortable discussing pipelines and validation specs (see the FIX parsing sketch after this list)
- Prior experience working with tick data stores, such as kdb+ or ClickHouse
- Familiarity with time series analysis
- Good understanding of the Unix/Linux programming environment
- Expertise with SQL and relational databases
- Excellent problem-solving and communication skills
- Self-starter who works well in a fast-paced environment
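Since the listing asks for fluency with FIX, here is a toy sketch of its flat tag=value wire format, assuming plain (non-binary) FIX. The sample new-order message is fabricated for illustration; real feed handlers also manage sessions, sequence numbers, checksums, and binary feeds such as ITCH.

```python
# A toy illustration of FIX's flat tag=value wire format. The sample
# new-order message below is fabricated; the checksum (tag 10) is a dummy.
SOH = "\x01"  # FIX field delimiter

def parse_fix(message: str) -> dict[str, str]:
    """Split a raw FIX message into a {tag: value} dictionary."""
    fields = [f for f in message.split(SOH) if f]
    return dict(field.split("=", 1) for field in fields)

raw = SOH.join(
    ["8=FIX.4.2", "35=D", "55=AAPL", "54=1", "38=100", "44=189.50", "10=000"]
) + SOH

order = parse_fix(raw)
# Tag 55 = symbol, 54 = side, 38 = quantity, 44 = price
print(order["55"], order["54"], order["38"], order["44"])
```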
· 29 views · 3 applications · 14d
Lead Data Engineer
Full Remote · Countries of Europe or Ukraine · 7 years of experience · B2 - Upper Intermediate
We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.
Does this relate to you?
- 7+ years of experience in the data engineering field
- At least 1 year of experience as a Lead/Architect
- Strong expertise in SQL and data modeling concepts.
- Hands-on experience with Airflow.
- Experience working with Redshift.
- Proficiency in Python for data processing.
- Strong understanding of data governance, security, and compliance.
- Experience in implementing CI/CD pipelines for data workflows.
- Ability to work independently and collaboratively in an agile environment.
- Excellent problem-solving and analytical skills.
A new team member will be in charge of:
- Design, develop, and maintain scalable data warehouse solutions.
- Build and optimize ETL/ELT pipelines for efficient data integration.
- Design and implement data models to support analytical and reporting needs.
- Ensure data integrity, quality, and security across all pipelines.
- Optimize data performance and scalability using best practices.
- Work with big data technologies such as Redshift.
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
- Implement CI/CD pipelines for data workflows.
- Monitor, troubleshoot, and improve data processes and system performance.
- Stay updated with industry trends and emerging technologies in data engineering.
Already looks interesting? Awesome! Check out the benefits prepared for you:
- Regular performance reviews, including remuneration
- Up to 25 paid days off per year for well-being
- Flexible cooperation hours with work-from-home
- Fully paid English classes with an in-house teacher
- Perks on special occasions such as birthdays, marriage, childbirth
- Referral program implying attractive bonuses
- External & internal training and IT certifications
Ready to try your hand? Don't hesitate to send your CV!
· 55 views · 3 applications · 6d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · B2 - Upper Intermediate · MilTech 🪖
Who We Are
OpenMinds is a cognitive defence tech company countering authoritarian influence in the battle for free and open societies. We work with over 30 governments and organisations worldwide, including Ukraine, the UK, and NATO member governments, leading StratCom agencies, and research institutions.
Our expertise lies in accessing restricted and high-risk environments, including conflict zones and closed platforms.
We combine ML technologies with deep local expertise. Our team, based in Kyiv, Lviv, London, Ottawa, and Washington, DC, includes behavioural scientists, ML/AI engineers, data journalists, communications experts, and regional specialists.
Our core values are: speed, experimentation, elegance and focus. We are expanding the team and welcome passionate, proactive, and resourceful professionals who are eager to contribute to the global fight in cognitive warfare.
Who we’re looking for
OpenMinds is seeking a skilled and curious Data Engineer who’s excited to design and build data systems that power meaningful insight. You’ll work closely with a passionate team of behavioral scientists and ML engineers on creating a robust data infrastructure that supports everything from large-scale narrative tracking to sentiment analysis.
In the position you will:
- Take ownership of our multi-terabyte data infrastructure, from data ingestion and orchestration to transformation, storage, and lifecycle management
- Collaborate with data scientists, analysts, ML engineers, and domain experts to develop impactful data solutions
- Optimize and troubleshoot data infrastructure to ensure high performance, cost-efficiency, scalability, and resilience
- Stay up-to-date with trends in data engineering and apply modern tools and practices
- Define and implement best practices for data processing, storage, and governance
- Translate complex requirements into efficient data workflows that support threat detection and response
We are a perfect match if you have:
- 5+ years of hands-on experience as a Data Engineer, with a proven track record of leading complex data projects from design to production
- Highly skilled in SQL and Python for advanced data processing, pipeline development, and optimization
- Deep understanding of software engineering best practices, including SOLID, error handling, observability, performance tuning, and modular architecture
- Ability to write, test and deploy production-ready code
- Extensive experience in database design, data modeling, and modern data warehousing, including ETL orchestration using Airflow or equivalent (see the DAG sketch after this list)
- Familiarity with Google Cloud Platform (GCP) and its data ecosystem (BigQuery, GCS, Pub/Sub, Cloud Run, Cloud Functions, Looker)
- Open-minded, capable of coming up with creative solutions and adapting to frequently changing circumstances and technological advances
- Experience in DevOps (Docker/k8s, IaC, CI/CD) and MLOps
- Fluent in English with excellent communication and cross-functional collaboration skills
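As a loose illustration of the Airflow-style ETL orchestration named in the list above, here is a minimal DAG sketch, assuming Airflow 2.4+. The dag_id, schedule, and task bodies are placeholders, not an actual OpenMinds pipeline.

```python
# A minimal Airflow DAG sketch (Airflow 2.4+ assumed). The dag_id, schedule,
# and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from a source API")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run strictly in sequence: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```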
We offer:
- Work in a fast-growing company with proprietary AI technologies, solving the most difficult problems in the domains of social behaviour analytics and national security
- Competitive market salary
- Opportunity to present your work on tier 1 conferences, panels, and briefings behind closed doors
- Work face-to-face with world-leading experts in their fields, who are our partners and friends
- Flexible work arrangements, including adjustable hours, location, and remote/hybrid options
- Unlimited vacation and leave policies
- Opportunities for professional development within a multidisciplinary team, boasting experience from academia, tech, and intelligence sectors
- A work culture that values resourcefulness, proactivity, and independence, with a firm stance against micromanagement
· 16 views · 2 applications · 14d
Senior ML/GenAI Engineer
Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate
Senior ML Engineer
Full-time / Remote
About Us
ExpoPlatform is a UK-based company founded in 2013, delivering advanced technology for online, hybrid, and in-person events across 30+ countries. Our platform provides end-to-end solutions for event organizers, including registration, attendee management, event websites, and networking tools.
Role Responsibilities:
- Develop AI Agents, tools for AI Agents, API as a service
- Prepare development and deployment documentation
- Participate in R&D activities of Data Science team
Required Skills & Experience:
- 5+ years of experience with DL frameworks (PyTorch and/or TensorFlow)
- 5+ years of experience in software development in Python
- Hands-on experience with LLMs, RAG, and AI agent development
- Experience with Amazon SageMaker, Amazon Bedrock, LangChain, LangGraph, LangSmith, LlamaIndex, Hugging Face, OpenAI
- Hands-on experience using AI tools in software development to increase efficiency and code quality, including AI tools for code review
- Knowledge of SQL, non-SQL and vector databases
- Understanding of embedding vectors and semantic search (see the sketch after this list)
- Proficiency in Git (Bitbucket) and Docker
- Upper-Intermediate (B2+) or higher level of English
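To make the embedding-and-semantic-search requirement above concrete, here is a toy sketch that ranks documents by cosine similarity to a query embedding. The 4-dimensional vectors and document names are invented; a real system would obtain embeddings from a model and use a vector database for retrieval at scale.

```python
# A toy sketch of semantic search: rank documents by cosine similarity of
# their embedding vectors to a query embedding. Vectors here are invented.
import numpy as np

docs = {
    "pricing page":  np.array([0.9, 0.1, 0.0, 0.2]),
    "refund policy": np.array([0.1, 0.8, 0.3, 0.0]),
    "api reference": np.array([0.0, 0.2, 0.9, 0.4]),
}
query = np.array([0.1, 0.9, 0.2, 0.1])  # e.g. "how do I get my money back?"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the vectors over their norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The highest-scoring document is the semantically closest one.
for name, vec in sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True):
    print(f"{cosine(query, vec):.3f}  {name}")
```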
Would be a plus:
- Hands-on experience with SLM and LLM fine-tuning
- Education in Data Science, Computer Science, Applied Math or similar
- AWS certifications (AWS Certified ML or equivalent)
- Experience with TypeSense
- Experience with speech recognition, speech-to-text ML models
What We Offer:
- Career growth with an international team.
- Competitive salary and financial stability.
- Flexible working hours (Mon-Fri, 8 hours).
- Free English courses and a budget for education
· 33 views · 1 application · 25d
Senior BI Engineer
Full Remote · Ukraine · Product · 5 years of experience · B1 - Intermediate
FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
We develop innovations in the field of gambling and betting through a complex multi-component platform capable of withstanding enormous loads and providing a unique experience for players.
FAVBET Tech does not organize and conduct gambling on its platform. Its main focus is software development.
We are looking for a Senior BI Engineer to join our BI SB Team.
Requirements:
— At least 5 years of experience in designing and creating modern data integration solutions.
— Leading the BI SB Team.
— People management and task definition skills are a must.
— Master’s degree in Computer Science or a related field.
— Proficient in Python and SQL, particularly for data engineering tasks.
— Experience with data processing, ETL (Extract, Transform, Load), ELT (Extract, Load, Transform) processes, and data pipeline development.
— Experience with the dbt framework and Airflow orchestration.
— Practical experience with both SQL and NoSQL databases (such as PostgreSQL, MongoDB).
— Experience with Snowflake.
— Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
— Experience in managing data warehouses and data lakes. Familiarity with star and snowflake DWH design schemas. Knowing the difference between OLAP and OLTP.
— Experience in designing data analytic reports with QuickSight, Pentaho Services, or Power BI.
Would be a plus:
— Experience with cloud data services (e.g., AWS Redshift, Google BigQuery).
— Experience with tools like GitHub, GitLab, Bitbucket.
— Experience with real-time data processing (e.g., Kafka, Flink).
— Familiarity with orchestration tools like Airflow, Luigi.
— Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
— Knowledge of data security and privacy practices.
Responsibilities:
— Design, construct, install, test, and maintain highly scalable data management systems.
— Develop ETL/ELT processes and frameworks for data transformation and load.
— Implement, optimize, and support reports for the Sportsbook domain.
— Ensure efficient storage and retrieval of big data.
— Optimize data retrieval and query performance.
— Work closely with data scientists and analysts to provide data solutions and insights.
We can offer:
— 30 days of paid vacation and sick days — we value rest and recreation. We also comply with the national holidays.
— Medical insurance for employees and the possibility of training employees at the expense of the company and gym membership.
— Remote work; after Ukraine wins the war — our own modern loft-style office with spacious workplaces and brand-new work equipment (near Pochaina metro station).
— Flexible work schedule — we expect a full-time commitment but do not track your working hours.
— Flat hierarchy without micromanagement — our doors are open, and all teammates are approachable.
· 15 views · 0 applications · 1d
Middle BI Engineer
Full Remote · Ukraine · Product · 2 years of experience
FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
We develop innovations in the field of gambling and betting through a complex multi-component platform capable of withstanding enormous loads and providing a unique experience for players.
FAVBET Tech does not organize and conduct gambling on its platform. Its main focus is software development.
We are looking for a Middle BI Engineer to join our BI SB Team.
Requirements:
— At least 2 years of experience in designing and creating modern data integration solutions.
— Master’s degree in Computer Science or a related field.
— Proficient in Python and SQL, particularly for data engineering tasks.
— Experience with data processing, ETL (Extract, Transform, Load), ELT (Extract, Load, Transform) processes, and data pipeline development.
— Experience with the dbt framework and Airflow orchestration.
— Practical experience with both SQL and NoSQL databases (such as PostgreSQL, MongoDB).
— Experience with Snowflake.
— Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
— Experience in managing data warehouses and data lakes. Familiarity with star and snowflake DWH design schemas. Knowing the difference between OLAP and OLTP.
— Experience in designing data analytic reports with QuickSight, Pentaho Services, or Power BI.
Would be a plus:
— Experience with cloud data services (e.g., AWS Redshift, Google BigQuery).
— Experience with tools like GitHub, GitLab, Bitbucket.
— Experience with real-time data processing (e.g., Kafka, Flink).
— Familiarity with orchestration tools like Airflow, Luigi.
— Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
— Knowledge of data security and privacy practices.
Responsibilities:
— Design, construct, install, test, and maintain highly scalable data management systems.
— Develop ETL/ELT processes and frameworks for data transformation and load.
— Implement, optimize, and support reports for the Sportsbook domain.
— Ensure efficient storage and retrieval of big data.
— Optimize data retrieval and query performance.
— Work closely with data scientists and analysts to provide data solutions and insights.
We can offer:
— 30 days of paid vacation and sick days — we value rest and recreation. We also comply with the national holidays.
— Medical insurance for employees and the possibility of training employees at the expense of the company and gym membership.
— Remote work; after Ukraine wins the war — our own modern loft-style office with spacious workplaces and brand-new work equipment (near Pochaina metro station).
— Flexible work schedule — we expect a full-time commitment but do not track your working hours.
— Flat hierarchy without micromanagement — our doors are open, and all teammates are approachable.
· 96 views · 18 applications · 15d
Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate
We are seeking a talented and experienced Data Engineer to join our professional services team of 50+ engineers on a full-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience in cloud platforms like AWS and Google Cloud. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.
About us:
Opsfleet is a boutique services company that specializes in cloud infrastructure, data, AI, and human-behavior analytics to help organizations make smarter decisions and boost performance.
Our experts provide end‑to‑end solutions—from data engineering and advanced analytics to DevOps—ensuring scalable, secure, and AI‑ready platforms that turn insights into action.
Role Overview
As a Data Engineer at Opsfleet, you will lead the entire data lifecycle—gathering and translating business requirements, ingesting and integrating diverse data sources, and designing, building, and orchestrating robust ETL/ELT pipelines with built‑in quality checks, governance, and observability. You’ll partner with data scientists to prepare, deploy, and monitor ML/AI models in production, and work closely with analysts and stakeholders to transform raw data into actionable insights and scalable intelligence.
What You’ll Do
* E2E Solution Delivery: Lead the full spectrum of data projects—requirements gathering, data ingestion, modeling, validation, and production deployment.
* Data Modeling: Develop and maintain robust logical and physical data models—such as star and snowflake schemas—to support analytics, reporting, and scalable data architectures.
* Data Analysis & BI: Transform complex datasets into clear, actionable insights; develop dashboards and reports that drive operational efficiency and revenue growth.
* ML Engineering: Implement and manage model‑serving pipelines using cloud’s MLOps toolchain, ensuring reliability and monitoring in production.
* Collaboration & Research: Partner with cross‑functional teams to prototype solutions, identify new opportunities, and drive continuous improvement.
What We’re Looking For
Experience: 4+ years in a data‑focused role (Data Engineer, BI Developer, or similar)
Technical Skills: Proficient in SQL and Python for data manipulation, cleaning, transformation, and ETL workflows. Strong understanding of statistical methods and data modeling concepts.
Soft Skills: Excellent problem-solving ability, critical thinking, and attention to detail. Outstanding written and verbal communication.
Education: BSc or higher in Mathematics, Statistics, Engineering, Computer Science, Life Science, or a related quantitative discipline.
Nice to Have
Cloud & Data Warehousing: Hands‑on experience with cloud platforms (GCP, AWS or others) and modern data warehouses such as BigQuery and Snowflake.
· 43 views · 2 applications · 10d
Data Engineer / DataOps
Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B2 - Upper Intermediate
DeepX is looking for an experienced Data Engineer to drive our data integration initiatives. In this role, you will connect, transform, and prepare complex datasets to support centralized reporting and actionable business insights. Leveraging modern cloud-based technologies, data orchestration frameworks, and API integrations, you will play a pivotal role in ensuring our data infrastructure meets the evolving needs of our organization.
Key Responsibilities
- Architect, build, and maintain scalable and reliable ETL/ELT pipelines to integrate data from diverse international sources.
- Engineer data transformations that convert raw, complex data into clean, analysis-ready formats suitable for downstream analytics.
- Leverage the Google Cloud Platform (GCP) suite to build and manage scalable data storage and processing solutions, ensuring optimal security, reliability, and performance.
- Orchestrate complex data workflows using Apache Airflow, developing and maintaining robust DAGs for scheduling and monitoring.
- Troubleshoot and resolve issues within data pipelines and optimize workflow scheduling to guarantee timely data availability.
- Independently integrate with third-party services by interpreting API documentation, managing authentication, and developing custom data extraction solutions.
- Master Google Analytics 4's BigQuery export, structuring raw event data by flattening nested fields (e.g., event_params, user_properties) into query-optimized tables (see the sketch after this list).
- Partner with our Business Intelligence teams to align data models and pipelines, seamlessly feeding into visualization tools like Looker Studio, DOMO, and Looker.
- Provide dedicated data support for dashboards, analytical projects, and ad-hoc reporting.
- Integrate and manage modern data connector tools, such as Stitch Data, and stay current with emerging technologies to enhance our data capabilities.
- Collaborate effectively with data analysts, data scientists, and other cross-functional teams to translate business needs into technical specifications.
- Curate and maintain comprehensive documentation for all data workflows, architectural designs, and transformation logic.
- Implement rigorous data validation, monitoring, and testing strategies to ensure data integrity and continuously improve pipeline performance and cost-efficiency.
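As an illustration of the GA4 flattening responsibility referenced above, here is a minimal sketch using the google-cloud-bigquery client. The project, dataset, and date range are hypothetical; the UNNEST subquery is the standard pattern for pulling a single key out of the repeated event_params field.

```python
# A sketch of flattening GA4's BigQuery export. Project, dataset, and date
# range below are hypothetical placeholders.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # assumes default GCP credentials are configured

sql = """
SELECT
  event_date,
  event_name,
  user_pseudo_id,
  -- event_params is an ARRAY<STRUCT<key, value>>; extract one named param.
  (SELECT ep.value.string_value
   FROM UNNEST(event_params) AS ep
   WHERE ep.key = 'page_location') AS page_location
FROM `my-project.analytics_123456.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
"""

for row in client.query(sql).result():  # runs the job and iterates rows
    print(row.event_name, row.page_location)
```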
Qualifications
- A minimum of 3 years of professional experience in a data engineering role, preferably with exposure to international datasets.
- Deep, hands-on experience with the Google Cloud Platform (GCP) ecosystem.
- Demonstrable expertise in orchestrating data pipelines with Apache Airflow, including DAG development and maintenance.
- Solid background in building production-grade ETL/ELT pipelines and utilizing connector tools like Stitch Data.
- Proven ability to work with APIs, from reading documentation to implementing data extraction logic.
- Experience handling Google Analytics 4 BigQuery exports, specifically with flattening nested data structures.
- Proficiency in SQL and at least one programming language (e.g., Python, Java, or Scala) for data manipulation and automation.
- Familiarity with BI platforms (Looker Studio, DOMO, Looker) and supporting BI team requirements.
- Proficiency with version control systems, particularly Git.
- Strong problem-solving skills with the ability to translate business requirements into technical solutions and optimize complex data processes.
- Excellent communication and collaboration skills, with the ability to work effectively in an international team environment.
- A proactive and detail-oriented mindset with a commitment to data quality and performance.
- English proficiency: Upper-Intermediate or higher.
About DeepX
DeepX is an R&D intensive and innovation-driven consortium that provides Artificial Intelligence-powered Computer Vision solutions for businesses. To find out more about us, please visit: https://deepxhub.com/