Jobs
· 42 views · 4 applications · 29d
Senior Python Engineer - Data Platform
Full Remote · Worldwide · Product · 8 years of experience · Upper-Intermediate
Duties and responsibilities:
- Integration of blockchains, Automated Market Maker (AMM) protocols, and bridges within product's platform;
- Active participation in development and maintenance of our data pipelines and backend services;
- Integrate new technologies into our processes and tools;
- End-to-end feature designing and implementation;
- Code, debug, test and deliver features and improvements in a continuous manner;
- Provide code review, assistance and feedback for other team members.
Required:
- 8+ years of experience developing Python backend services and APIs;
- Advanced knowledge of SQL - ability to write, understand and debug complex queries;
- Basic principles of data warehousing and database architecture;
- POSIX/Unix/Linux ecosystem knowledge;
- Strong knowledge and experience with Python, and API frameworks such as Flask or FastAPI;
- Knowledge about blockchain technologies or willingness to learn;
- Experience with PostgreSQL database system;
- Knowledge of Unit Testing principles;
- Independent and autonomous way of working;
- Team-oriented work and good communication skills are an asset.
Would be a plus:
- Practical experience in big data and frameworks – Kafka, Spark, Flink, Data Lakes and Analytical Databases such as ClickHouse;
- Knowledge of Docker, Kubernetes and Infrastructure as Code - Terraform, Ansible, etc;
- Passion for Bitcoin and Blockchain technologies;
- Experience with distributed systems;
- Experience with open-source solutions;
- Experience with Java or willingness to learn.
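The advanced-SQL requirement above ("write, understand and debug complex queries") is the kind of skill a screening task usually probes. A minimal, illustrative sketch of a windowed latest-row-per-key query, using stdlib sqlite3 as a stand-in for PostgreSQL (table and data invented):

```python
import sqlite3

# Illustrative only: "latest price per asset" via a window function,
# with sqlite3 standing in for PostgreSQL. Table and values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prices (asset TEXT, ts INTEGER, price REAL);
    INSERT INTO prices VALUES
        ('BTC', 1, 60000.0), ('BTC', 2, 61000.0),
        ('ETH', 1, 3000.0),  ('ETH', 2, 2900.0);
""")
rows = conn.execute("""
    SELECT asset, price
    FROM (
        SELECT asset, price,
               ROW_NUMBER() OVER (PARTITION BY asset ORDER BY ts DESC) AS rn
        FROM prices
    )
    WHERE rn = 1
    ORDER BY asset
""").fetchall()
print(rows)  # [('BTC', 61000.0), ('ETH', 2900.0)]
```

The same shape (a `ROW_NUMBER()` subquery filtered to `rn = 1`) works unchanged in PostgreSQL.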
· 48 views · 4 applications · 11d
Strong Middle/Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
Job Description
We are looking for an experienced and skilled Senior Data Engineer with 4+ years of commercial experience in Spark (Big Data solutions) to work on building data processing pipelines.
- Experience in building Big Data solutions on AWS or other cloud platforms
- Experience in building Data Lake platforms
- Strong practical experience with Apache Spark
- Hands-on experience in building data pipelines using Databricks
- Hands-on experience in Python, Scala
- Upper-Intermediate English level
- Bachelor’s degree in Computer Science, Information Systems, Mathematics, or a related technical discipline
Job Responsibilities
- Design and implement data integration pipelines
- Perform performance tuning and improve functionality with respect to NFRs
- Contribute design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and loading across multiple data stores
- Take part in the full cycle of feature development (requirements analysis, decomposition, design, etc.)
- Design, develop, and implement enterprise data platform solutions with other talented engineers in a collaborative team environment
- Contribute to the overall quality of development services through brainstorming, unit testing, and proactive offering of improvements and innovations
Department/Project Description
Is it even possible to sleep not only deeply, but smartly? Yes, it is, if the GlobalLogic and Sleep Number teams get down to business! Sleep Number is a pioneer in the development of technologies for monitoring sleep quality. Smart beds have already provided 13 million people with quality sleep, and this is just the beginning.
The GlobalLogic team is a strategic partner of Sleep Number in the development of innovative technologies to improve sleep. By joining the project, you will be dealing with technologies that have already turned the smart bed into a health improvement and wellness center. The world's largest biometric database allows building necessary infrastructure for future inventions.
Join the team and get ready to innovate, lead the way, and improve lives!
· 31 views · 1 application · 3d
Data Engineer
Ukraine · 4 years of experience · Upper-Intermediate
On behalf of our Client, a well-established financial institution from the Caribbean region, Mobilunity is looking for a Data Engineer.
Our Client is the largest bank in the Caribbean region, serving 14 countries/territories. The aim is to transform this organization from a traditional bank into a new-era fintech, working on the edge of what current fintech can offer.
Requirements:
- Experience with ETL/ELT
- Proficiency in Glue and Spark
- Strong programming skills in Python and SQL
- Hands-on experience with MWAA / Airflow
- Good understanding of AWS Basics (IAM, S3)
- Experience working with Aurora and PostgreSQL
- Knowledge of Kafka / MSK, including Kafka Connect and Debezium
- Familiarity with Lake Formation
- Experience using Glue Data Catalog
- Solid understanding of data modeling principles
- Experience with Glue Streaming
Level of English – Upper-Intermediate and higher
Nice to have:
- Previous experience working in the fintech industry
🐳In return we offer:
- The friendliest community of like-minded IT-people
- Open knowledge-sharing environment – exclusive access to a rich pool of colleagues willing to share their endless insights into the broadest variety of modern technologies
- Perfect office location in the city-center (900m from Lukyanivska metro station with a green and spacious neighborhood) or remote mode engagement: you can choose a convenient one for you, with a possibility to fit together both
- No open-spaces setup – separate rooms for every team’s comfort and multiple lounge and gaming zones
- Neverending fun: sports events, tournaments, music band, multiple affinity groups
🐳 Come on board, and let’s grow together! 🐳
· 45 views · 1 application · 26d
Senior Data Engineer/Lead Data Engineer (Healthcare domain)
Full Remote · EU · 5 years of experience · Upper-Intermediate
We are looking for a Senior Data Engineer with extensive experience in data engineering who is passionate about making an impact. Join our team, where you will have the opportunity to drive innovation, improve solutions, and help us reach new heights!
If you're ready to take your expertise to the next level and contribute significantly to the success of our projects, submit your resume now.
Our client is a leading medical technology company. The portfolio of products, services, and solutions is central to clinical decision-making and treatment pathways. Patient-centered innovation has always been at the core of the company, which is committed to improving patient outcomes and experiences, no matter where they live or what challenges they face. The company is innovating sustainably to provide healthcare for everyone, everywhere.
The Project’s mission is to enable healthcare providers to increase their value by equipping them with innovative technologies and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, and digital health and enterprise services.
Responsibilities:
- Work closely with the client (PO) as well as other team members to clarify tech requirements and expectations
- Contribute to the design, development, and optimization of squad-specific data architecture and pipelines adhering to defined ETL and Data Lake principles
- Implement architectures using Azure Cloud platforms (Data Factory, Databricks, Event Hub)
- Discover, understand, and organize disparate data sources, structuring them into clean data models with clear, understandable schemas
- Evaluate new tools for analytical data engineering or data science and suggest improvements
- Contribute to training plans to improve analytical data engineering skills, standards, and processes
Requirements:
- Solid experience in data engineering and cloud computing services, specifically in the areas of data and analytics (Azure preferred)
- Strong conceptual knowledge of data analytics fundamentals, including dimensional modeling, ETL, reporting tools, data governance, data warehousing, and handling both structured and unstructured data
- Expertise in SQL and at least one programming language (Python/Scala)
- Excellent communication skills and fluency in business English
- Familiarity with Big Data DB technologies such as Snowflake, BigQuery, etc. (Snowflake preferred)
- Experience with database development and data modeling, ideally with Databricks/Spark
· 39 views · 1 application · 26d
Senior Python Data Engineer (only Ukraine)
Ukraine · Product · 6 years of experience · Upper-Intermediate
The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.
Requirements:
- At least 5 years of experience with Python
- At least 3 years of experience processing structured, terabyte-scale data (structured datasets of several hundred gigabytes and up).
- Solid experience in SQL and NoSQL (ideally GCP storage services: Firestore, BigQuery, Bigtable and/or Redis, Kafka), including advanced DML skills.
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc).
- Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark).
- Experience in automated test creation (TDD).
- Fluent spoken English.
Advantages:
- Being fearless of mathematical algorithms (part of our team’s responsibility is developing ML models for data analysis; although knowledge of ML is not required for the current position, it would be awesome if a person felt some passion for algorithms).
- Experience in any OOP language.
- Experience in DevOps (Familiarity with Docker and Kubernetes).
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
- Experience in Scala.
What we offer:
- 20 working days’ vacation;
- 10 paid sick leaves;
- public holidays;
- equipment;
- accountant helps with documents;
- many cool team activities.
Apply now and start a new page of your fast career growth with us!
· 64 views · 12 applications · 5d
Senior Data Engineer (Python) to $8000
Full Remote · Bulgaria, Poland, Portugal, Romania, Ukraine · 5 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a Data Engineer, you’ll have end-to-end ownership - from system architecture and software development to operational excellence.
Key Responsibilities:
● Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution.
● Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
● Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
● Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
● Collaborate closely with software architects, DevOps engineers, and product teams to deliver resilient, scalable, production-grade machine learning pipelines.
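The Airflow pipelines named in the responsibilities above are, at their core, dependency graphs (DAGs) of tasks that the scheduler runs in topological order. A library-free sketch of that ordering idea using only the stdlib (task names invented; this illustrates the concept, not Airflow's API):

```python
from graphlib import TopologicalSorter

# Illustrative only: the dependency-ordering idea behind an Airflow DAG,
# expressed with stdlib graphlib. Task names are invented.
dag = {
    "extract": set(),                 # no upstream dependencies
    "validate": {"extract"},          # runs after extract
    "train": {"validate"},            # runs after validate
    "report": {"train", "validate"},  # runs after both
}

# A valid execution order: every task appears after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'validate', 'train', 'report']
```

In Airflow the same structure would be declared with operators and `>>` dependencies; the "efficient parallel execution" mentioned above comes from running tasks whose dependencies are all satisfied concurrently.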
Required Competence and Skills:
To excel in this role, candidates should possess the following qualifications and experiences:
● A Bachelor’s or higher in Computer Science, Software Engineering, or a closely related technical field, demonstrating strong analytical and coding skills.
● At least 5 years of experience as a data engineer, software engineer, or similar role and using data to drive business results.
● At least 5 years of experience with Python, building modular, testable, and production-ready code.
● Solid understanding of SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
● Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
● A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
● Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Haves
● Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
● Familiarity with API development frameworks (e.g., FastAPI).
● Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
● Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
Why Us?
We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).
We provide full accounting and legal support in all countries where we operate.
We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
· 13 views · 3 applications · 1d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Pre-Intermediate
We’re Applyft - an IT product company which creates value-driven mobile apps. Our journey began with the Geozilla family locator product, but now our portfolio consists of four apps in the Family Safety, Entertainment, and Mental Health spheres. We’re proud to have a 5M monthly active user base and to achieve 20% QoQ revenue growth.
Now we are looking for a Middle/Senior Data Engineer to join our Analytics team.
What you’ll do:
- Design, develop and maintain Data pipelines and ETL processes for internal DWH
- Develop and support integrations with 3rd party systems
- Be responsible for the quality of data presented in BI dashboards
- Collaborate with data analysts to troubleshoot data issues and optimize data workflows
Your professional qualities:
- 3+ years of BI/DWH development experience
- Excellent knowledge of database concepts and hands-on experience with SQL
- Proven experience of designing, implementing, and maintaining ETL data pipelines
- Hands-on experience writing production-level Python code and managing workflows with Airflow
- Experience working with cloud-native technologies (AWS/GCP)
Will be a plus:
- Experience with billing systems, enterprise financial reporting, subscription monetization products
- Experience supporting product and marketing data analytics
We offer:
- Remote-First culture: We provide a flexible working schedule and you can work anywhere in the world
- Health care program: We provide health insurance, sports compensation and 20 paid sick days
- Professional Development: The company provides budget for each employee for courses, trainings and conferences
- Personal Equipment Policy: We provide all necessary equipment for your work. For Ukrainian employees we also provide Ecoflow
- Vacation Policy: Each employee in our company has 20 paid vacation days and extra days on the occasion of special events
- Knowledge sharing: We are glad to share our knowledge and experience in our internal events
- Corporate Events: We organize corporate events and team-building activities across our hubs
· 100 views · 5 applications · 23d
Data Engineer (Python)
Full Remote · Countries of Europe or Ukraine · 1.5 years of experience
Dataforest is seeking an experienced Data Engineer (Python) to join our dynamic team. You will be responsible for developing and maintaining data-processing architecture, as well as optimizing and monitoring our internal systems.
Requirements:
- 1.5+ years of commercial experience with Python.
- Experience with ElasticSearch and PostgreSQL.
- Knowledge and experience with Redis, Kafka, and SQS.
- Experience setting up monitoring systems with CloudWatch, Prometheus, and Grafana.
- Deep understanding of algorithms and their complexities.
- Excellent programming skills in Python with a focus on optimization and code structuring.
- Knowledge of ETL principles and practices.
- Ability to work collaboratively and communicate effectively.
- Experience with Linux environments, cloud services (AWS), and Docker.
Will be plus:
- Knowledge in web scraping, data extraction, cleaning, and visualization.
- Understanding of multiprocessing and multithreading, including process and thread management.
- Experience with Flask / Flask-RESTful for API development.
Key Responsibilities:
- Develop and maintain data processing architecture using Python.
- Efficiently utilize ElasticSearch and PostgreSQL for data management.
- Implement and manage data pipelines using Redis, Kafka, and SQS.
- Set up and monitor logging systems using CloudWatch, Prometheus, and Grafana.
- Optimize code and improve its structure and performance.
- Understand and implement ETL processes.
- Analyze algorithms and code complexity to enhance efficiency.
- Work with the AWS stack to ensure flexibility and reliability in data processing.
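Several of the responsibilities above (data pipelines, ETL, code optimization) come down to composing lazy transform stages. A minimal, library-free generator sketch of that pattern (all names and data invented):

```python
# Illustrative only: a tiny, memory-efficient ETL stage built from
# generators - the shape many Python pipeline steps take. Names invented.
def extract(records):
    for raw in records:          # lazily yield one record at a time
        yield raw.strip()

def transform(lines):
    for line in lines:
        if line:                 # cleaning step: drop blank records
            yield line.upper()   # toy transformation

def load(items):
    return list(items)           # in real code: write to a DB, queue, etc.

raw = ["  alpha ", "", "beta", "  ", "gamma"]
result = load(transform(extract(raw)))
print(result)  # ['ALPHA', 'BETA', 'GAMMA']
```

Because each stage is a generator, only one record is in flight at a time - the same constant-memory property you want when the source is a Kafka topic or SQS queue rather than a list.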
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
· 43 views · 4 applications · 22d
Senior Python Data Engineer (only Ukraine)
Ukraine · Product · 5 years of experience · Upper-Intermediate
The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.
Requirements:
- At least 5 years of experience with Python
- At least 3 years of experience processing structured, terabyte-scale data (structured datasets of several hundred gigabytes and up).
- Solid experience in SQL and NoSQL (ideally GCP storage services: Firestore, BigQuery, Bigtable and/or Redis, Kafka).
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc).
- Deep understanding of data processing services (Apache Airflow, GCP Dataflow, Hadoop, Apache Spark).
- Experience in automated test creation (TDD).
- Fluent spoken English.
Advantages:
- Being fearless of mathematical algorithms (part of our team’s responsibility is developing ML models for data analysis; although knowledge of ML is not required for the current position, it would be awesome if a person felt some passion for algorithms).
- Experience in any OOP language.
- Experience in DevOps (familiarity with Docker and Kubernetes).
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
- Experience in Scala.
What we offer:
- 20 working days’ vacation;
- 10 paid sick leaves;
- public holidays;
- equipment;
- accountant helps with documents;
- many cool team activities.
Apply now and start a new page of your fast career growth with us!
· 211 views · 22 applications · 23d
Junior Data Engineer
Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · Intermediate
We seek a Junior Data Engineer with basic pandas and SQL experience.
At Dataforest, we are actively seeking Data Engineers of all experience levels.
If you're ready to take on a challenge and join our team, please send us your resume.
We will review it and discuss potential opportunities with you.
Requirements:
• 6+ months of experience as a Data Engineer
• Experience with SQL;
• Experience with Python;
Optional skills (as a plus):
• Experience with ETL / ELT pipelines;
• Experience with PySpark;
• Experience with Airflow;
• Experience with Databricks;
Key Responsibilities:
• Apply data processing algorithms;
• Create ETL/ELT pipelines and data management solutions;
• Work with SQL queries for data extraction and analysis;
• Data analysis and application of data processing algorithms to solve business problems;
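The ETL/ELT responsibilities above reduce to an extract-transform-load round trip. A minimal sketch using stdlib csv and sqlite3 rather than pandas (data and table name invented):

```python
import csv
import io
import sqlite3

# Illustrative only: the extract -> transform -> load loop a junior DE
# task might cover, stdlib-only. CSV content and schema are invented.
raw_csv = "user,amount\nann,10\nbob,-3\ncid,7\n"

rows = list(csv.DictReader(io.StringIO(raw_csv)))            # extract
clean = [(r["user"], int(r["amount"]))                       # transform:
         for r in rows if int(r["amount"]) > 0]              # drop negatives

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user TEXT, amount INTEGER)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", clean)  # load

total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 17
```

The same three phases map directly onto pandas (`read_csv`, boolean filtering, `to_sql`) once that dependency is available.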
We offer:
• Onboarding phase with hands-on experience with major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark
• Opportunity to work with the high-skilled engineering team on challenging projects;
• Interesting projects with new technologies;
• Great networking opportunities with international clients, challenging tasks;
• Building interesting projects from scratch using new technologies;
• Personal and professional development opportunities;
• Competitive salary fixed in USD;
• Paid vacation and sick leaves;
• Flexible work schedule;
• Friendly working environment with minimal hierarchy;
• Team building activities, corporate events.
· 64 views · 7 applications · 19d
Data Engineer
Full Remote · Ukraine · Product · 3 years of experience · Intermediate
We are looking for an experienced Data Engineer to design and maintain robust data infrastructure across our systems. In this role, you will be responsible for building scalable data pipelines, ensuring data integrity, and integrating third-party data sources. Your primary focus will be to enable efficient data flow and support analytical capabilities across the organization. You will also contribute to the development of our data architecture, implement best engineering practices, and collaborate closely with cross-functional teams to turn raw data into actionable insights.
Responsibilities
- Communicate with both technical and non-technical audiences to gather requirements
- Review and analyze data and logic to ensure consistency and accuracy
- Design, implement, and maintain data pipelines for efficient data flow
- Integrate and support developed solutions
- Research and evaluate third-party components for potential use
- Follow best engineering practices: refactoring, code review, testing, continuous delivery, and Scrum
- Design, optimize, and support data storage
Requirements
- 5+ years of experience in data engineering
- Experience in requirement gathering and communication with stakeholders
- Strong knowledge of DWH (data warehouse) architecture and principles
- Practical experience building ETL pipelines and designing data warehouses
- Deep experience with Python with a strong focus on PySpark
- Proficiency in SQL and databases such as PostgreSQL, ClickHouse, MySQL
- Hands-on experience with data scraping and integrating third-party sources and APIs
- Solid understanding of software design patterns, algorithms, and data structures
- Intermediate English proficiency
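The data-scraping requirement above can be illustrated with the stdlib alone. A minimal sketch (the HTML snippet, tag structure, and class names are all invented):

```python
from html.parser import HTMLParser

# Illustrative only: a tiny stdlib scraper of the kind the requirement
# refers to. Real scraping work would add HTTP fetching, retries, etc.
class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Flag spans marked with the (invented) "price" class.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(float(data))
            self.in_price = False

html = '<div><span class="price">19.99</span><span class="price">5.50</span></div>'
parser = PriceParser()
parser.feed(html)
print(parser.prices)  # [19.99, 5.5]
```

In production this role would more likely reach for a dedicated parser library, but the extract-and-normalize loop is the same.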
Will be a plus
- Experience with RabbitMQ or Kafka
- Understanding of web application architecture
- Familiarity with DataOps practices
- Background in FinTech or Trading domains
We offer
- Tax expenses coverage for private entrepreneurs in Ukraine
- Expert support and guidance for Ukrainian private entrepreneurs
- 20 paid vacation days per year
- 10 paid sick leave days per year
- Public holidays as per the company's approved Public holiday list
- Medical insurance
- Opportunity to work remotely
- Professional education budget
- Language learning budget
- Wellness budget (gym membership, sports gear and related expenses)
· 49 views · 1 application · 11d
Data Engineer
Office Work · Ukraine (Kyiv) · Product · 3 years of experience · Ukrainian Product 🇺🇦
Ajax Systems is a full-cycle company working from idea generation and R&D to mass production and sales. We do everything: we produce physical devices (the system includes many different sensors and hubs), write firmware for them, develop the server part and release mobile applications. The whole team is in one office in Kyiv, all technical and product decisions are made locally. We’re looking for a Data Engineer to join us and continue the evolution of a product that we love: someone who takes pride in their work to ensure that user experience and development quality are superb.
Required skills:
Proven experience in a Data Architect or Data Engineer role
At least 3 years of experience as a Python Developer
Strong problem solving, troubleshooting and analysis skills
Substantial experience with and understanding of:
Data ingestion frameworks for real-time and batch processing
Development and optimization of relational databases such as MySQL or PostgreSQL
Working with NoSQL databases and search systems (including Elasticsearch, Kibana, and MongoDB)
Cloud-based object storage systems (e.g. S3-compatible services)
Data access and warehousing tools for analytical querying (e.g. distributed query engines, cloud data warehouses)
Will be a plus:
Working with large volumes of data and databases
Knowledge of version control tools such as Git
English at the level of reading and understanding technical documentation
Create complex SQL queries against data warehouses and application databases
Tasks and responsibilities:
Develop and manage large-scale data systems, ingestion capabilities, and infrastructure
Support the design and development of solutions for deploying dashboards and reports to various stakeholders
Architect data pipelines and ETL processes to connect with various data sources
Design and maintain enterprise data warehouse models
Manage a cloud-based data & analytics platform
Deploy updates and fixes
Evaluate large and complex data sets
Ensure queries are efficient and use the least amount of resources possible
Troubleshoot queries to address critical production issues
Assist other team members in refining complex queries and performance tuning
Understand and analyze requirements to develop, test, and deploy complex SQL queries used to extract business data for regulatory and other purposes
Write and maintain technical documentation.
· 99 views · 17 applications · 4d
Middle/Senior Database Engineer to $5500
Full Remote · Worldwide · Product · 1 year of experience · Intermediate
Responsibilities:
- Support the development and maintenance of data pipelines using PostgreSQL, Python, Bash, and Airflow
- Write and optimize SQL queries for data extraction and transformation
- Assist with SQL performance tuning and monitoring database performance (mainly PostgreSQL)
- Work closely with senior engineers to implement and improve ETL processes
- Participate in automation of data workflows and ensure data quality
- Document solutions and contribute to knowledge sharing
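The SQL performance tuning mentioned above usually starts with reading query plans before and after adding an index. A minimal sketch using stdlib sqlite3 as a stand-in for PostgreSQL (table, data, and index name invented; PostgreSQL would use `EXPLAIN ANALYZE` instead):

```python
import sqlite3

# Illustrative only: index-based tuning shown via query plans,
# with sqlite3 standing in for PostgreSQL. Schema and data invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in column 3.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan, e.g. 'SCAN orders'
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # e.g. 'SEARCH orders USING INDEX idx_orders_customer ...'

total = conn.execute(query).fetchone()[0]
print(before)
print(after)
print(total)  # 4920.0
```

The result is unchanged; only the access path improves - which is exactly what plan-reading is for.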
Requirements:
- 3-5 years of experience in a similar role (Database Engineer, Data Engineer, etc.)
- Solid knowledge of PostgreSQL, Oracle, and SQL (must be confident writing complex queries)
- Basic to intermediate knowledge of Python and Bash scripting
- Familiarity with Apache Airflow or similar workflow tools
- Willingness to learn and grow in a data-focused engineering role
Nice to Have:
- Experience with Oracle, MS SQL Server, or Talend
- Understanding of SQL performance tuning techniques
- Exposure to cloud platforms (AWS, GCP, etc.)
· 23 views · 7 applications · 2d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.
Does this relate to you?
- 5+ years of experience in Data Engineering or a related field
- Strong expertise in SQL and data modeling concepts
- Hands-on experience with Airflow
- Experience working with Redshift
- Proficiency in Python for data processing
- Strong understanding of data governance, security, and compliance
- Experience in implementing CI/CD pipelines for data workflows
- Ability to work independently and collaboratively in an agile environment
- Excellent problem-solving and analytical skills
A new team member will be in charge of:
- Design, develop, and maintain scalable data warehouse solutions
- Build and optimize ETL/ELT pipelines for efficient data integration
- Design and implement data models to support analytical and reporting needs
- Ensure data integrity, quality, and security across all pipelines
- Optimize data performance and scalability using best practices
- Work with big data technologies such as Redshift
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Implement CI/CD pipelines for data workflows
- Monitor, troubleshoot, and improve data processes and system performance
- Stay updated with industry trends and emerging technologies in data engineering
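One of the duties above, ensuring data integrity and quality across pipelines, is often implemented as a row-level validation pass before loading. A minimal sketch of that idea; the field names and rules here are hypothetical, not part of the posting.

```python
def validate_rows(rows, required_fields):
    """Split rows into valid and rejected based on simple quality rules.
    Illustrative only: field names and rules are hypothetical."""
    valid, rejected = [], []
    for row in rows:
        # Rule 1: required fields must be present and non-empty
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            rejected.append({"row": row, "errors": [f"missing:{f}" for f in missing]})
        # Rule 2: amounts must be non-negative
        elif row.get("amount", 0) < 0:
            rejected.append({"row": row, "errors": ["negative_amount"]})
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": -3.0},    # fails the non-negative rule
    {"id": None, "amount": 5.0},  # missing the required id
]
valid, rejected = validate_rows(rows, required_fields=["id"])
print(len(valid), len(rejected))  # 1 2
```

Keeping rejected rows (with their error reasons) rather than silently dropping them makes quality issues observable downstream.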
Already looks interesting? Awesome! Check out the benefits prepared for you:
- Regular performance reviews, including remuneration reviews
- Up to 25 paid days off per year for well-being
- Flexible cooperation hours with work-from-home
- Fully paid English classes with an in-house teacher
- Perks on special occasions such as birthdays, marriage, childbirth
- Referral program with attractive bonuses
- External & internal training and IT certifications
-
· 9 views · 0 applications · 1d
Senior/Tech Lead Data Engineer
Hybrid Remote · Poland, Ukraine (Kyiv, Lviv) · 5 years of experience · Upper-Intermediate
Quantum is a global technology partner delivering high-end software products that address real-world problems.
We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.
Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.
We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn’t yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!
About the position
Quantum is expanding the team and has a brilliant opportunity for a Data Engineer. As a Senior/Tech Lead Data Engineer, you will be pivotal in designing, implementing, and optimizing data platforms. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark with Python, as well as various cloud-based solutions.
The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Their mission is to bridge the knowledge gap in the investment world and empower investors of all types to become “super-investors.”
Through generative AI technology implemented into brokerage platforms and other financial institutions’ infrastructures, they offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.
Must have skills:
- Bachelor's Degree in Computer Science or related field
- At least 5 years of experience in Data Engineering
- Proven experience as a Tech Lead or Architect in data-focused projects, leadership skills, and experience managing or mentoring data engineering teams
- Strong proficiency in Python and PySpark for building ETL pipelines and large-scale data processing
- Deep understanding of Apache Spark, including performance tuning and optimization (job execution plans, broadcast joins, partitioning, skew handling, lazy evaluation)
- Hands-on experience with AWS Cloud (minimum 2 years), including EMR and Glue
- Familiarity with PySpark internals and concepts (Window functions, Broadcast joins, Sort & merge joins, Watermarking, UDFs, Lazy computation, Partition skew)
- Practical experience with performance optimization of Spark jobs (MUST)
- Strong understanding of OOD principles and familiarity with SOLID (MUST)
- Experience with cloud-native data platforms and lakehouse architectures
- Comfortable with SQL & NoSQL databases
- Experience with testing practices such as TDD, unit testing, and integration testing
- Strong problem-solving skills and a collaborative mindset
- Upper-Intermediate or higher level of English (spoken and written)
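The skew handling called out in the must-have list is usually addressed in Spark by key salting: splitting one hot key across several sub-keys so no single partition receives most of the records. The core idea can be shown without a cluster; this is a plain-Python sketch, and the key names and salt count are hypothetical.

```python
import random
from collections import Counter

def salt_key(key, num_salts):
    """Append a random salt so one hot key spreads across several sub-keys
    (and therefore across several partitions in a real Spark job)."""
    return f"{key}#{random.randrange(num_salts)}"

random.seed(0)  # deterministic for the example
# A skewed distribution: 'hot' dominates with 90 of 100 records
keys = ["hot"] * 90 + ["cold"] * 10
salted = [salt_key(k, num_salts=8) for k in keys]

counts = Counter(salted)
hot_buckets = [k for k in counts if k.startswith("hot#")]
# Before salting, one partition would receive all 90 'hot' records;
# after salting, 'hot' is spread over up to 8 distinct sub-keys.
print(len(hot_buckets))
```

In a real join, the smaller side is replicated once per salt value so salted keys still match; Spark 3's adaptive query execution can also split skewed partitions automatically.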
Your tasks will include:
- Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources
- Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions
- Lead efforts in performance tuning and query optimization to enhance data processing efficiency
- Provide expertise in data modeling and database design to ensure the scalability and reliability of data platforms
- Contribute to the development of best practices and standards for data engineering processes
- Stay updated on emerging technologies and trends in the data engineering landscape
We offer:
- Delivering high-end software projects that address real-world problems
- Surrounding experts who are ready to move forward professionally
- Professional growth plan and team leader support
- Taking ownership of R&D and socially significant projects
- Participation in worldwide tech conferences and competitions
- Taking part in regular educational activities
- Being a part of a multicultural company with a fun and lighthearted atmosphere
- Working from anywhere with flexible working hours
- Paid vacation and sick leave days
Join Quantum and take a step toward your data-driven future.