Junior Data Engineer
Full Remote · Ukraine · Intermediate
We are looking for a Data Engineer to join our team!
Data Engineer is responsible for designing, maintaining, and optimizing data infrastructure for data collection, management, transformation, and access.
They will be in charge of creating pipelines that convert raw data into usable formats for data scientists and other data consumers.
The Data Engineer should be comfortable working with RDBMSs and have good knowledge of the relevant RDBMS programming language(s).
The Data Engineer processes client data according to proper specifications and documentation.
*Open to Ukrainian students in Ukraine (2nd year and higher).
Main responsibilities:
- Design and develop ETL pipelines;
- Data integration and cleansing;
- Implement stored procedures and functions for data transformations;
- ETL processes performance optimization.
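A rough, hypothetical sketch of the ETL work these responsibilities describe (table and column names are invented for illustration; a real pipeline would target the team's RDBMS rather than SQLite):

```python
import sqlite3

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw orders, normalize them, and load them into a clean table."""
    # Extract: pull raw rows as-is from the staging table.
    raw_rows = conn.execute(
        "SELECT id, customer, amount FROM raw_orders"
    ).fetchall()

    # Transform: cleanse whitespace/case and drop rows with invalid amounts.
    cleaned = [
        (row_id, customer.strip().title(), round(amount, 2))
        for row_id, customer, amount in raw_rows
        if amount is not None and amount >= 0
    ]

    # Load: idempotent upsert into the target table.
    conn.executemany(
        "INSERT OR REPLACE INTO orders (id, customer, amount) VALUES (?, ?, ?)",
        cleaned,
    )
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (id INTEGER, customer TEXT, amount REAL)")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [(1, "  alice smith ", 10.5), (2, "BOB JONES", -5.0), (3, "carol", 7.0)],
    )
    print(run_etl(conn))  # 2 valid rows loaded; the negative amount is dropped
```

In practice the transformation step might instead live in a stored procedure, as the responsibilities above suggest; the extract-transform-load shape stays the same.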
Skills and Requirements:
- Experience with ETL tools (to take charge of ETL processes and perform tasks spanning data analytics, data science, business intelligence, and system architecture);
- Database/DBA/architect background (understanding of data storage requirements and warehouse architecture design; basic expertise with SQL/NoSQL databases and data mapping; awareness of the Hadoop environment);
- Data analysis expertise (basic expertise in data modeling, mapping, and formatting is required);
- Knowledge of scripting languages (Python is preferable);
- Troubleshooting skills (data processing systems operate on large amounts of data and include multiple structural elements; the Data Engineer is responsible for the proper functioning of the system, which requires strong analytical thinking and troubleshooting skills);
- Tableau experience is good to have;
- Software engineering background is good to have;
- Good organizational skills and task management abilities;
- Effective self-motivator;
- Good communication skills in written and spoken English.
Salary Range
Compensation packages are based on several factors including but not limited to: skill set, depth of experience, certifications, and specific work location.
Data Engineer
Countries of Europe or Ukraine · 2 years of experience · Intermediate
Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.
Skills requirements:
• 2+ years of experience with Python;
• 2+ years of experience as a Data Engineer;
• Experience with Pandas;
• Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Familiarity with Amazon Web Services;
• Knowledge of data algorithms and data structures is a MUST;
• Experience working with high-volume tables (10M+ rows).
Optional skills (as a plus):
• Experience with Spark (pyspark);
• Experience with Airflow;
• Experience with Kafka;
• Experience in statistics;
• Knowledge of DS and Machine Learning algorithms.
Key responsibilities:
• Create ETL pipelines and data management solutions (API, Integration logic);
• Develop various data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models.
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
Team/ Tech Lead Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
Looking for a Team Lead Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.
As a Team Lead, you will be an expert and a leader, playing a crucial role in guiding the development team, making technical decisions, and ensuring the successful delivery of high-quality software products.
Skills requirements:
• 5+ years of experience with Python;
• 4+ years of experience as a Data Engineer;
• Knowledge of data algorithms and data structures is a MUST;
• Excellent experience with Pandas;
• Excellent experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
• Experience with Apache Kafka and Apache Spark (PySpark);
• Experience with Hadoop;
• Familiarity with Amazon Web Services;
• Understanding of cluster computing fundamentals;
• Experience working with high-volume tables (100M+ rows).
Optional skills (as a plus):
• Experience with scheduling and monitoring (Databricks, Prometheus, Grafana);
• Experience with Airflow;
• Experience with Snowflake, Terraform;
• Experience in statistics;
• Knowledge of DS and Machine learning algorithms.
Key responsibilities:
• Manage the development process and support team members;
• Conduct R&D work with new technology;
• Maintain high-quality coding standards within the team;
• Create ETL pipelines and data management solutions (API, Integration logic);
• Elaborate different data processing algorithms;
• Involvement in the creation of forecasting, recommendation, and classification models;
• Develop and implement workflows for receiving and transforming new data sources to be used in the company;
• Develop the existing Data Engineering infrastructure to make it scalable and prepare it for projected future volumes;
• Identify, design and implement process improvements (i.e. automation of manual processes, infrastructure redesign, etc.).
We offer:
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks;
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions;
• Work with SQL queries for data extraction and analysis;
• Analyze data and apply data processing algorithms to solve business problems.
We offer:
• Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
• Opportunity to work with the high-skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
Data Engineer
Full Remote · EU · Product · 2 years of experience · Upper-Intermediate
Role Overview:
We are looking for a Data Engineer to join the growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Key Responsibilities:
— Develop and maintain data infrastructure and data warehouse solutions;
— Design, develop, and maintain scalable and efficient data pipelines and ETL processes;
— Develop APIs;
— Gather and define business requirements for data tools and analytics;
— Communicate and collaborate with the analytics team;
— Monitor and troubleshoot data pipelines and infrastructure, and implement measures to ensure data integrity, security, and performance;
— Assist in the implementation of data science solutions;
— Develop and maintain documentation for data pipelines, infrastructure, and workflows;
— Stay up-to-date with the latest data engineering technologies and best practices, and make recommendations for new tools and approaches to improve efficiency and quality;
— Automate data processes;
— Collect data from different sources.
Ideal profile for the position:
— 2+ years of work experience as a Data Engineer;
— Experience with AWS - S3, Redshift, DMS, Glue, Lambda, Athena, QuickSight;
— Excellent level of SQL;
— Proficient in Python;
— Knowledge and experience with the development of data warehousing and ETL pipelines;
— API development experience;
— Basic understanding of machine learning and data science;
— Experience in relational and non-relational databases;
— Good-level written and verbal communication skills;
— Upper-intermediate or higher English level.
The company guarantees you the following benefits:
— Global Collaboration: Join an international team where everyone treats each other with respect and moves towards the same goal;
— Autonomy and Responsibility: Enjoy the freedom and responsibility to make decisions without the need for constant supervision;
— Competitive Compensation: Receive competitive salaries reflective of your expertise and knowledge as our partner seeks top performers.
— Remote Work Opportunities: Embrace the flexibility of fully remote work, with the option to visit company offices that align with your current location.
— Flexible Work Schedule: Focus on performance, not hours, with a flexible work schedule that promotes a results-oriented approach;
— Unlimited Paid Time Off: Prioritize work-life balance with unlimited paid vacation and sick leave days to prevent burnout;
— Career Development: Access continuous learning and career development opportunities to enhance your professional growth;
— Corporate Culture: Experience a vibrant corporate atmosphere with exciting parties and team-building events throughout the year;
— Referral Bonuses: Refer talented friends and receive a bonus after they successfully complete their probation period;
— Medical Insurance Support: Choose the right private medical insurance, and receive compensation (full or partial) based on the cost;
— Flexible Benefits: Customize your compensation by selecting activities or expenses you'd like the company to cover, such as a gym subscription, language courses, Netflix subscription, spa days, and more;
— Education Foundation: Participate in a biannual raffle for a chance to learn something new, unrelated to your job, as part of our commitment to ongoing education.
Interview process:
— A 30-minute interview with a member of our HR team to get to know you and your experience;
— A final 2-hour interview with the team to gauge your fit with our culture and working style.
If you find this opportunity right for you, don't hesitate to apply or get in touch with us if you have any questions!
Data Engineer
Full Remote · Worldwide · Product · 3 years of experience · Intermediate
Primary Responsibilities:
- Organizing and maintaining real-time data collection, processing, and analysis;
- Designing and implementing automated reporting systems for business metric monitoring;
- Configuring and optimizing data pipelines (ETL processes);
- Working with data visualization tools (e.g., Grafana) to create clear and informative dashboards;
- Optimizing high-load analytical queries;
- Developing and maintaining predictive models and machine learning algorithms for data analysis (if required).
Core Skills:
- Strong knowledge of SQL with experience in query optimization for large datasets;
- Hands-on experience with data pipeline orchestration tools;
- Proficiency in data visualization tools (e.g., Grafana, Power BI, Tableau);
- Experience working with real-time analytics and data warehouses;
- Expertise in big data processing and ETL optimization;
- Proficiency in data processing programming languages (e.g., Python, Scala, or SQL);
- Experience with Databricks (preferred).
Additional Skills:
- Understanding of machine learning fundamentals and experience with libraries such as scikit-learn, TensorFlow, or PyTorch (a plus);
- Experience working with cloud platforms (AWS, GCP) to deploy analytical solutions;
- Understanding of CI/CD processes for automating data analytics infrastructure.
Language Requirements:
- Intermediate English proficiency for working with technical documentation and communicating with external service support.
We offer:
- An interesting project and non-trivial tasks that will let you demonstrate your professional skills and creativity;
- Friendly team;
- Comfortable working schedule and working conditions;
- Opportunity to work remotely as well as in an office located in the city centre;
- Stable, competitive salary;
- Paid vacation and sick leaves;
- Opportunity for professional growth and career development;
- English, paid professional courses, coffee/fruits and other pluses :)
Middle Software Developer (Data Researcher/Data Integration)
Full Remote · Ukraine · 3 years of experience · Upper-Intermediate
Our partner is a leading technology company transforming how investigations are conducted with smart tools that help teams collect, analyze, and act on data effectively. Their AI-powered platform streamlines case management, data visualization, and reporting — making it a go-to solution for law enforcement, financial investigations, and cyber threat intelligence. With deep expertise in business intelligence and data operations, they help organizations make faster, more informed decisions. Innovation, technical excellence, and strong collaboration define their workplace culture.
You’ll work alongside a skilled team of engineers and data experts to build, test, and refine integrations and data pipelines that power real-world security applications. Most of your time will be spent analyzing API documentation, testing endpoints, reviewing and configuring data sources (especially in JSON), and occasionally updating Python-based microservices. If you’re curious, pragmatic, and eager to see your work make a real impact, this might be the right place for you.
P.S. Enjoy being the first to spot hidden insights in complex data flows? You’ll feel right at home 😉
Required Skills
- 2+ years of experience in API Integration or Technical Implementation domain
- Solid experience working with APIs (reading docs, testing, integrating)
- Understanding of JSON and NoSQL (MongoDB in particular)
- Python experience, especially in reading or modifying microservices
- Familiarity with tools like Postman, Git, and optionally Docker or Elasticsearch
- Upper-Intermediate English
- Strong problem-solving skills and analytical thinking
- Comfortable working in a remote, collaborative environment
Will be a Bonus
- Familiarity with integrating APIs and handling various data sources
- Ability to anticipate and handle multiple potential edge cases related to data consistency
Your Day-to-Day Responsibilities Will Include
- Researching and analyzing various APIs and data sources
- Integrating new data sources into the existing system for seamless data flow
- Collaborating closely with the team to define and implement data solutions
- Identifying and addressing multiple potential edge cases in data integration
- Planning your work, estimating effort, and delivering on deadlines
We Offer
📈 Constant professional growth and improvement:
- Challenging projects with cutting-edge technologies
- Close cooperation with clients and industry leaders
- Support for personal development and mentorship
😄 Comfortable, focused work environment:
- Remote work encouraged and supported
- Minimal bureaucracy
- Flexible schedule
- High-quality hardware provided
And, of course, all the traditional benefits you'd expect in the IT industry.
Data Engineer (Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · Upper-Intermediate
Dataforest is looking for a Data Engineer to join an interesting software development project in the field of water monitoring. Our EU client’s platform offers full visibility into water quality, compliance management, and system performance. If you are looking for a friendly team, a healthy working environment, and a flexible schedule ‒ you have found the right place to send your CV.
Key Responsibilities:
- Create and manage scalable data pipelines with Azure SQL and other databases;
- Use Azure Data Factory to automate data workflows;
- Write efficient Python code for data analysis and processing;
- Develop data reports and dashboards using Power BI;
- Use Docker for application containerization and deployment streamlining;
- Manage code quality and version control with Git.
Skills requirements:
- 3+ years of experience with Python;
- 2+ years of experience as a Data Engineer;
- Strong SQL knowledge, preferably with Azure SQL experience;
- Python skills for data manipulation;
- Expertise in Docker for app containerization;
- Familiarity with Git for managing code versions and collaboration;
- Upper-Intermediate level of English.
Optional skills (as a plus):
- Experience with Azure Data Factory for orchestrating data processes;
- Experience developing APIs with FastAPI or Flask;
- Proficiency in Databricks for big data tasks;
- Experience in a dynamic, agile work environment;
- Ability to manage multiple projects independently;
- Proactive attitude toward continuous learning and improvement.
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
N-iX is looking for a Senior Data Engineer to join our skilled and continuously growing team! The position is for our fintech customer from Europe. The person would be a part of the customer’s Data Platform team - a key function within the company, responsible for the architecture, development, and management of our core data infrastructure. We leverage Snowflake, Looker, Airflow (MWAA), and dbt while managing DevOps configurations for the platform. Our goal is to build and maintain a self-serve data platform that empowers stakeholders with tools for efficient data management while ensuring security, governance, and compliance standards.
Requirements:
- 6+ years of experience in Data Engineering.
- Strong proficiency in Airflow, Python, and SQL.
- Hands-on experience with cloud data warehouses (Snowflake or equivalent).
- Solid understanding of AWS services and Kubernetes at an advanced user level.
- Familiarity with Data Quality and Observability best practices.
- Ability to thrive in a dynamic environment with a strong sense of ownership and responsibility.
- Analytical mindset and problem-solving skills for tackling complex technical challenges.
- Bachelor's in Mathematics, Computer Science, or other relevant quantitative fields.
Nice-to-Have Skills:
- Experience with DevOps practices, CI/CD, and Infrastructure as Code (IaC).
- Hands-on experience with Looker or other BI tools.
- Performance optimization of large-scale data pipelines.
- Knowledge of metadata management and Data Governance best practices.
Responsibilities:
- Design and develop a scalable data platform to efficiently process and analyze large volumes of data using Snowflake, Looker, Airflow, and dbt.
- Enhance the self-serve data platform by implementing new features to improve stakeholder access and usability.
- Work with cross-functional teams to provide tailored data solutions and optimize data pipelines.
- Foster a culture of knowledge sharing within the team to enhance collaboration and continuous learning.
- Stay updated on emerging technologies and best practices in data engineering and bring innovative ideas to improve the platform.
Data Engineer (RnD team)
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Intermediate
In Competera, we are building a place where optimal pricing decisions can be made easily. We believe that AI technologies will soon drive all challenging decisions and are capable of helping humans be better.
We are now looking for a Data Engineer to improve our data processing pipelines from performance, cost and correctness standpoints.
You could be a perfect match for the position if you want to:
- Migrate BigQuery workloads to Spark 3.5 on Databricks: re-engineer our daily-TB batch jobs into Delta Lake pipelines that run faster and cost less.
- Turn full reloads into true incremental processing: build CDC / MERGE logic so we scan only the data that changed and deliver fresh features within minutes.
- Add quality gates & observability from day one: instrument every stage with custom metrics, data-drift alerts, and cost reports the product team can read.
- Set up monitoring & slot-second cost dashboards: expose processing-time, SLA, and $-per-feature charts so we can make data-driven trade-offs.
- Pair with Data Scientists and Product Managers: work side-by-side from idea to release, instead of simply passing datasets back and forth.
- Continuously tune for scale: dozens of terabytes move through the platform daily; you’ll experiment with partitioning, Z-ORDER, and Photon to keep latency low as volume grows.
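The incremental-processing idea above can be sketched without any Spark/Delta dependencies; in Delta Lake the same logic would typically be a `MERGE INTO` over a change feed filtered by a watermark. All names here are illustrative, with plain dicts standing in for tables:

```python
from datetime import datetime

def incremental_merge(target: dict, changes: list, watermark: datetime) -> datetime:
    """Upsert only rows changed since `watermark`; return the new watermark."""
    new_watermark = watermark
    for row in changes:
        if row["updated_at"] > watermark:  # scan only what changed
            target[row["id"]] = row        # upsert keyed by primary key
            new_watermark = max(new_watermark, row["updated_at"])
    return new_watermark

if __name__ == "__main__":
    target = {}
    wm = datetime(2024, 1, 1)
    feed = [
        {"id": 1, "price": 10, "updated_at": datetime(2024, 1, 2)},
        {"id": 2, "price": 20, "updated_at": datetime(2023, 12, 31)},  # stale, skipped
        {"id": 1, "price": 12, "updated_at": datetime(2024, 1, 3)},    # newer version wins
    ]
    wm = incremental_merge(target, feed, wm)
    print(len(target), target[1]["price"])  # 1 12
```

The payoff is the same as in the bullet above: each run touches only the changed slice of data instead of reloading the full table.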
You have:
- 3+ years of experience in a Data Engineer role.
- Strong knowledge of SQL, Spark, Python, Airflow, binary file formats.
- English level - Upper-intermediate +.
Pleasant extras:
- Databricks, GCP, BigQuery, Kafka, data modeling patterns, data quality approaches and tools.
Soft skills:
- Product mindset.
- Ability to work in a fast-paced environment.
- Willingness to take ownership of a feature and guide it through all stages of the development lifecycle.
- Proactivity, openness, and the desire to dive deep into the domain and learn new approaches and tools.
You’re gonna love it, and here’s why:
- Rich innovative software stack, freedom to choose the best suitable technologies.
- Remote-first ideology: freedom to operate from the home office or any suitable coworking.
- Flexible working hours (we start from 8 to 11 am) and no time tracking systems on.
- Regular performance and compensation reviews.
- Recurrent 1-1s and measurable OKRs.
- In-depth onboarding with a clear success track.
- Competera covers 70% of your training/course fee.
- 20 vacation days, 15 days off, and up to one week of paid Christmas holidays.
- 20 business days of sick leave.
- Partial medical insurance coverage.
Drive innovations with us. Be a Competerian.
Data Engineer
Full Remote · Worldwide · 4 years of experience · Advanced/Fluent
Requirements:
• Develop and maintain data pipelines and ETLs.
• Support the development and maintenance of data visualization solutions for the developed data products.
• Build and maintain cloud infrastructure for multiple solutions using various AWS services through AWS CDK written in Python.
• Build reusable components for multiple solutions.
• Design, build, and implement data quality checks.
• Gather and translate business requirements into technical requirements.
• Implement Data Engineering best practices.
• Document all developed components.
• Assist in solution architecture design and implementation.
• Build queries to solve analytical questions.
• Ensure information security standards are always maintained.
• Design, build, and maintain robust and scalable data models across various database vendors and types, including SQL and NoSQL.
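As a hedged illustration of the "data quality checks" item above, a hand-rolled version might look like the following (column names are invented; in practice teams often use a dedicated framework such as Great Expectations for this):

```python
# Minimal batch data-quality checks: each function inspects a list of
# records and returns the violations it found.

def check_not_null(rows, column):
    """Indices of rows where `column` is missing or None."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_unique(rows, column):
    """Values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

def check_range(rows, column, lo, hi):
    """Indices of rows where `column` falls outside [lo, hi]."""
    return [i for i, r in enumerate(rows)
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

if __name__ == "__main__":
    batch = [
        {"id": 1, "price": 9.99},
        {"id": 1, "price": None},   # duplicate id, null price
        {"id": 2, "price": -3.0},   # out-of-range price
    ]
    print(check_not_null(batch, "price"))       # [1]
    print(check_unique(batch, "id"))            # [1]
    print(check_range(batch, "price", 0, 100))  # [2]
```

Checks like these would typically run as a gate after each pipeline stage, failing the batch (or routing bad rows to quarantine) when violations are found.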
We offer:
• Attractive financial package
• Challenging projects
• Professional & career growth
• Great atmosphere in a friendly small team
Strong middle/Senior Data engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
Job Description
We are looking for an experienced and skilled Senior Data Engineer to work on building Data Processing pipelines with 4+ years of commercial experience in Spark (BigData solutions).
Experience in building Big Data solutions on AWS or other cloud platforms
Experience in building Data Lake platforms
Strong practical experience with Apache Spark.
Hands-on experience in building data pipelines using Databricks
Hands-on experience in Python, Scala
Upper-Intermediate English level
Bachelor’s degree in Computer Science, Information Systems, Mathematics, or related technical discipline
Job Responsibilities
Responsible for the design and implementation of data integration pipelines
Perform performance tuning and improve functionality with respect to NFRs.
Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, and data extraction, transformation, and loading across multiple data stores
Take part in the full-cycle of feature development (requirements analysis, decomposition, design, etc)
Design, develop and implement data platform enterprise solutions with other talented engineers in a collaborative team environment.
Contribute to the overall quality of development services through brainstorming, unit testing, and proactive offering of improvements and innovations.
Department/Project Description
Is it even possible to sleep not only deeply, but smartly? Yes, it is, if the GlobalLogic and Sleep Number teams get down to business! Sleep Number is a pioneer in the development of technologies for monitoring sleep quality. Smart beds have already provided 13 million people with quality sleep, and this is just the beginning.
The GlobalLogic team is a strategic partner of Sleep Number in the development of innovative technologies to improve sleep. By joining the project, you will be dealing with technologies that have already turned the smart bed into a health improvement and wellness center. The world's largest biometric database allows building necessary infrastructure for future inventions.
Join the team and get ready to innovate, lead the way, and improve lives!
Data Engineer
Ukraine · 4 years of experience · Upper-Intermediate
On behalf of our Client, a well-established financial institution from the Caribbean region, Mobilunity is looking for a Data Engineer.
Our Client is the largest bank in the Caribbean region, serving 14 countries/territories. The aim is to transform this organization from a traditional bank into a new-era fintech, working on the edge of what current fintech may offer.
Requirements:
- Experience with ETL/ELT
- Proficiency in Glue and Spark
- Strong programming skills in Python and SQL
- Hands-on experience with MWAA / Airflow
- Good understanding of AWS Basics (IAM, S3)
- Experience working with Aurora and PostgreSQL
- Knowledge of Kafka / MSK, including Kafka Connect and Debezium
- Familiarity with Lake Formation
- Experience using Glue Data Catalog
- Solid understanding of data modeling principles
- Experience with Glue Streaming
- Level of English – Upper-Intermediate and higher
Nice to have:
- Previous experience working in the fintech industry
🐳In return we offer:
- The friendliest community of like-minded IT people
- Open knowledge-sharing environment: exclusive access to a rich pool of colleagues willing to share their insights into a broad variety of modern technologies
- Perfect office location in the city center (900 m from Lukyanivska metro station, in a green and spacious neighborhood) or remote engagement: choose whichever is convenient for you, with the possibility of combining both
- No open-space setup: separate rooms for every team's comfort, plus multiple lounge and gaming zones
- Never-ending fun: sports events, tournaments, a music band, and multiple affinity groups
🐳 Come on board, and let’s grow together! 🐳
· 78 views · 13 applications · 19d
Senior Data Engineer (Python) to $8000
Full Remote · Bulgaria, Poland, Portugal, Romania, Ukraine · 5 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology uses big data, machine learning, and AI to help customers optimize their pricing strategies and maximize their profits.
About the Role:
As a Data Engineer, you'll have end-to-end ownership, from system architecture and software development to operational excellence.
Key Responsibilities:
● Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
● Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
● Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, and operational KPIs, and to surface alerts.
● Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
● Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
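The data-validation responsibility above is about catching bad or drifting input before it reaches a model. As a deliberately simple illustration (the field names and threshold are ours, not the posting's), here is a toy z-score check on the mean of an incoming batch; real pipelines would use richer tests (KS statistic, PSI) and frameworks such as Great Expectations:

```python
from math import sqrt
from statistics import mean, stdev

def mean_drift(baseline, incoming, z_threshold=3.0):
    """Flag an incoming batch whose mean deviates from the baseline mean
    by more than z_threshold standard errors. A toy drift check only:
    production validation uses distribution-level tests, not just means."""
    se = stdev(baseline) / sqrt(len(incoming))   # standard error of the batch mean
    z = abs(mean(incoming) - mean(baseline)) / se
    return z > z_threshold
```

A check like this would typically run as its own task in the Airflow DAG, failing the run (and alerting) before downstream model training consumes the batch.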
Required Competence and Skills:
To excel in this role, candidates should possess the following qualifications and experience:
● A Bachelor's degree or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.
● At least 5 years of experience as a data engineer, software engineer, or in a similar role, using data to drive business results.
● At least 5 years of experience with Python, building modular, testable, and production-ready code.
● Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
● Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
● A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
● Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Haves
● Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
● Familiarity with API development frameworks (e.g., FastAPI).
● Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
● Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
Why Us?
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We follow a fully remote work model, with a powerful workstation and co-working space access in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
· 59 views · 8 applications · 15d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate
We're Applyft, an IT product company that creates value-driven mobile apps. Our journey began with the Geozilla family locator product, but our portfolio now consists of four apps across the Family Safety, Entertainment, and Mental Health spheres. We're proud to have a base of 5M monthly active users and to achieve 20% QoQ revenue growth.
Now we are looking for a Middle/Senior Data Engineer to join our Analytics team
What you’ll do:
- Design, develop, and maintain data pipelines and ETL processes for the internal DWH
- Develop and support integrations with 3rd party systems
- Be responsible for the quality of data presented in BI dashboards
- Collaborate with data analysts to troubleshoot data issues and optimize data workflows
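The ETL responsibilities above follow the classic extract-transform-load shape. As a minimal sketch under our own assumptions (in-memory lists stand in for a 3rd-party source and a DWH table, and the field names are illustrative), the pattern looks like this:

```python
def extract(source_rows):
    """Extract step: in practice this would read from a 3rd-party API
    or an operational database rather than an in-memory iterable."""
    return list(source_rows)

def transform(rows):
    """Cleanse and normalize: drop rows without a user_id and
    upper-case country codes. Field names are illustrative only."""
    return [
        {**row, "country": row.get("country", "").upper()}
        for row in rows
        if row.get("user_id") is not None
    ]

def load(target, rows):
    """Load step: append into an in-memory target standing in for a
    DWH table; real pipelines would write via SQL or a warehouse API."""
    target.extend(rows)
    return target

def run_pipeline(source_rows, target):
    """One end-to-end run: extract, cleanse, then load."""
    return load(target, transform(extract(source_rows)))
```

In a production setup each of these steps would be a separate, retryable task in an orchestrator such as Airflow, so failures can be isolated and re-run per step.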
Your professional qualities:
- 3+ years of BI/DWH development experience
- Excellent knowledge of database concepts and hands-on experience with SQL
- Proven experience designing, implementing, and maintaining ETL data pipelines
- Hands-on experience writing production-level Python code and managing workflows with Airflow
- Experience working with cloud-native technologies (AWS/GCP)
Will be a plus:
- Experience with billing systems, enterprise financial reporting, and subscription monetization products
- Experience supporting product and marketing data analytics
We offer:
- Remote-First culture: We provide a flexible working schedule, and you can work anywhere in the world
- Health care program: We provide health insurance, sports compensation, and 20 paid sick days
- Professional Development: The company provides a budget for each employee for courses, trainings, and conferences
- Personal Equipment Policy: We provide all necessary equipment for your work. For Ukrainian employees, we also provide an Ecoflow
- Vacation Policy: Each employee in our company has 20 paid vacation days and extra days on the occasion of special events
- Knowledge sharing: We are glad to share our knowledge and experience at our internal events
- Corporate Events: We organize corporate events and team-building activities across our hubs