Jobs
-
· 43 views · 6 applications · 9d
Data Engineer
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · Upper-Intermediate · Ukrainian Product 🇺🇦
We are Boosta – an international IT company with a portfolio of successful products, performance marketing projects, and our investment fund, Burner. Boosta was founded in 2014, and since then, the number of Boosters has grown to 600+.
We're looking for a Data Engineer to join our team in the iGaming industry, where real-time insights, affiliate performance, and marketing analytics are at the center of decision-making. In this role, you'll own and scale our data infrastructure, working across affiliate integrations, product analytics, and experimentation workflows.
Your primary responsibilities will include building and maintaining data pipelines, implementing automated data validation, integrating external data sources via APIs, and creating dashboards to monitor data quality, consistency, and reliability. You'll collaborate daily with the Affiliate Management team, Product Analysts, and Data Scientists to ensure the data powering our reports and models is clean, consistent, and trustworthy.
WHAT YOU'LL DO
- Design, develop, and maintain ETL/ELT pipelines to transform raw, multi-source data into clean, analytics-ready tables in Google BigQuery, using tools such as dbt for modular SQL transformations, testing, and documentation.
- Integrate and automate affiliate data workflows, replacing manual processes in collaboration with the relevant stakeholders.
- Proactively monitor and manage data pipelines using tools such as Airflow, Prefect, or Dagster, with proper alerting and retry mechanisms in place.
- Emphasize data quality, consistency, and reliability by implementing robust validation checks, including schema drift detection, null/missing value tracking, and duplicate detection, using tools like Great Expectations or similar frameworks (see the validation sketch after this list).
- Build a Data Consistency Dashboard (in Looker Studio, Power BI, Tableau or Grafana) to track schema mismatches, partner anomalies, and source freshness, with built-in alerts and escalation logic.
- Ensure timely availability and freshness of all critical datasets, resolving latency and reliability issues quickly and sustainably.
- Control access to cloud resources, implement data governance policies, and ensure secure, structured access across internal teams.
- Monitor and optimize data infrastructure costs, particularly related to BigQuery usage, storage, and API-based ingestion.
- Document all pipelines, dataset structures, transformation logic, and data contracts clearly to support internal alignment and knowledge sharing.
- Build and maintain postback-based ingestion pipelines to support event-level tracking and attribution across the affiliate ecosystem.
- Collaborate closely with Data Scientists and Product Analysts to deliver high-quality, structured datasets for modeling, experimentation, and KPI reporting.
- Act as a go-to resource across the organization for troubleshooting data discrepancies, supporting analytics workflows, and enabling self-service data access.
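For illustration only, a minimal sketch of the kind of validation checks described above (column names are hypothetical; a dedicated framework such as Great Expectations could replace this hand-rolled version):

```python
import pandas as pd

# Hypothetical expected schema for an affiliate report feed
EXPECTED_COLUMNS = {"partner_id", "click_id", "event_time", "payout"}

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in one ingested batch."""
    issues = []

    # Schema drift: columns added or removed compared to the expected contract
    drift = set(df.columns) ^ EXPECTED_COLUMNS
    if drift:
        issues.append(f"schema drift detected: {sorted(drift)}")

    # Null / missing value tracking per column
    for column, share in df.isna().mean().items():
        if share > 0.01:  # tolerate up to 1% missing values
            issues.append(f"{column}: {share:.1%} missing values")

    # Duplicate detection on the event identifier
    if "click_id" in df.columns:
        dupes = int(df.duplicated(subset=["click_id"]).sum())
        if dupes:
            issues.append(f"{dupes} duplicate click_id rows")

    return issues
```

Checks like these can feed the Data Consistency Dashboard and alerting logic mentioned above.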
WHAT WE EXPECT FROM YOU
- Strong proficiency in SQL and Python.
- Experience with Google BigQuery and other GCP tools (e.g., Cloud Storage, Cloud Functions, Composer).
- Proven ability to design, deploy, and scale ETL/ELT pipelines.
- Hands-on experience integrating and automating data from various platforms.
- Familiarity with postback tracking, attribution logic, and affiliate data reconciliation.
- Skilled in orchestration tools like Airflow, Prefect, or Dagster.
- Experience with Looker Studio, Power BI, Tableau, or Grafana for building dashboards for data quality monitoring.
- Use of Git for version control and experience managing CI/CD pipelines (e.g., GitHub Actions).
- Experience with Docker to build isolated and reproducible environments for data workflows.
- Exposure to iGaming data structures and KPIs is a strong advantage.
- Strong sense of data ownership, documentation, and operational excellence.
HOW IT WORKS
- Stage 1: pre-screen with a recruiter.
- Stage 2: test task.
- Stage 3: interview.
- Stage 4: bar-raising.
- Stage 5: reference check.
- Stage 6: job offer!
The trial period for this position is 3 months, during which we will get used to working together.
WHAT WE OFFER
- 28 business days of paid time off
- Flexible hours and the possibility to work remotely
- Medical insurance and mental health care
- Compensation for courses, trainings
- English classes and speaking clubs
- Internal library, educational events
- Outstanding corporate parties, teambuildings
-
· 20 views · 0 applications · 3d
Middle BI/DB Developer
Office Work · Ukraine (Lviv) · Product · 2 years of experience · Upper-Intermediate
About us:
EveryMatrix is a leading B2B SaaS provider delivering iGaming software, content and services. We provide casino, sports betting, platform and payments, and affiliate management to 200 customers worldwide.
But that's not all! We're not just about numbers, we're about people. With a team of over 1000 passionate individuals spread across twelve countries in Europe, Asia, and the US, we're all united by our love for innovation and teamwork.
EveryMatrix is a member of the World Lottery Association (WLA) and European Lotteries Association. In September 2023 it became the first iGaming supplier to receive WLA Safer Gambling Certification. EveryMatrix is proud of its commitment to safer gambling and player protection whilst producing market-leading gaming solutions.
Join us on this exciting journey as we continue to redefine the iGaming landscape, one groundbreaking solution at a time.
We are looking for a passionate and dedicated Middle BI/DB Developer to join our team in Lviv!
About the unit:
DataMatrix is a part of the EveryMatrix platform that is responsible for collecting, storing, processing, and utilizing hundreds of millions of transactions from the whole platform every single day. We develop Business Intelligence solutions, reports, 3rd-party integrations, data streaming, and other products for both external and internal use. The team consists of 35 people and is located in Lviv.
What You'll get to do:
- Develop real-time data processing and aggregations (see the streaming sketch after the stack list below)
- Create and modify data marts (enhance our data warehouse)
- Take care of internal and external integrations
- Forge various types of reports
Our main stack:
- DB: BigQuery, PostgreSQL
- ETL: Apache Airflow, Apache NiFi
- Streaming: Apache Kafka
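Purely as an illustration of the real-time processing side of the role, a minimal consumer sketch using the kafka-python client; the topic, broker address, and field names are assumptions, and the production stack may use different tooling:

```python
import json
from collections import Counter

from kafka import KafkaConsumer  # pip install kafka-python

# Assumed topic and broker address for illustration
consumer = KafkaConsumer(
    "wallet-transactions",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

transactions_per_operator = Counter()

for message in consumer:
    event = message.value
    # Aggregate transaction counts per operator as events stream in
    transactions_per_operator[event.get("operator_id", "unknown")] += 1
```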
What You need to know:
Here's what we offer:
- Start with 22 days of annual leave, with 2 additional days added each year, up to 32 days by your fifth year with us.
- Stay Healthy: 10 sick leave days per year, no doctor's note required; 30 medical leave days with medical allowance
- Support for New Parents:
- 21 weeks of paid maternity leave, with the flexibility to work from home full-time until your child turns 1 year old.
- 4 weeks of paternity leave, plus the flexibility to work from home full-time until your child is 13 weeks old.
Our office perks include on-site massages and frequent team-building activities in various locations.
Benefits & Perks:
- Daily catered lunch or monthly lunch allowance.
- Private Medical Subscription.
- Access online learning platforms like Udemy for Business, LinkedIn Learning or O'Reilly, and a budget for external training.
- Gym allowance
At EveryMatrix, we're committed to creating a supportive and inclusive workplace where you can thrive both personally and professionally. Come join us and experience the difference!
-
· 39 views · 7 applications · 3d
Data Engineer
Full Remote · Worldwide · 5 years of experience · Upper-Intermediate
Boosty Labs is one of the most prominent outsourcing companies in the blockchain domain. Among our clients are such well-known companies as Ledger, Consensys, Storj, Animoca brands, Walletconnect, Coinspaid, Paraswap, and others.
About project: Advanced blockchain analytics and on-the-ground intelligence to empower financial institutions, governments & regulators in the fight against cryptocurrency crime
Requirements:
- 6+ years of experience with Python backend development
- Solid knowledge of SQL (including writing/debugging complex queries)
- Understanding of data warehouse principles and backend architecture
- Experience working in Linux/Unix environments
- Experience with APIs and Python frameworks (e.g., Flask, FastAPI)
- Experience with PostgreSQL
- Familiarity with Docker
- Basic understanding of unit testing
- Good communication skills and ability to work in a team
- Interest in blockchain technology or willingness to learn
- Experience with CI/CD processes and containerization (Docker, Kubernetes) is a plus
- Strong problem-solving skills and the ability to work independently
Responsibilities:
- Integrate new blockchains, AMM protocols, and bridges into our platform
- Build and maintain data pipelines and backend services (see the API sketch after this list)
- Help implement new tools and technologies into the system
- Participate in the full cycle of feature development β from design to release
- Write clean and testable code
- Collaborate with the team through code reviews and brainstorming
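As a rough sketch of the backend-service side of the role, a small FastAPI endpoint (FastAPI is named in the requirements; the endpoint, models, and in-memory data here are hypothetical):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Chain ingestion status API (sketch)")

# Hypothetical in-memory registry of integrated blockchains
CHAINS = {"ethereum": {"last_block": 19_000_000}, "polygon": {"last_block": 55_000_000}}

class ChainStatus(BaseModel):
    chain: str
    last_block: int

@app.get("/chains/{chain}", response_model=ChainStatus)
def chain_status(chain: str) -> ChainStatus:
    """Return the last ingested block for one integrated chain."""
    if chain not in CHAINS:
        raise HTTPException(status_code=404, detail="unknown chain")
    return ChainStatus(chain=chain, last_block=CHAINS[chain]["last_block"])
```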
Nice to Have:
- Experience with Kafka, Spark, or ClickHouse
- Knowledge of Kubernetes, Terraform, or Ansible
- Interest in crypto, DeFi, or distributed systems
- Experience with open-source tools
- Some experience with Java or readiness to explore it
What we offer:
- Remote working format
- Flexible working hours
- Informal and friendly atmosphere
- The ability to focus on your work: a lack of bureaucracy and micromanagement
- 20 paid vacation days
- 7 paid sick leaves
- Education reimbursement
- Free English classes
- Psychologist consultations
Recruitment process:
Recruitment Interview → Technical Interview
-
· 90 views · 14 applications · 29d
Data Engineer (6 months, Europe-based)
Full Remote · EU · 4 years of experience · Upper-Intermediate
The client is seeking an experienced Data Engineer to build and migrate data solutions to Google Cloud Platform (GCP) in support of data analytics and ML/AI initiatives.
Key responsibilities:
- Develop data products on GCP using BigQuery and DBT
- Integrate data from multiple sources using Python and Cloud Functions (see the sketch after this list)
- Orchestrate pipelines with Terraform and Cloud Workflows
- Collaborate with Solution Architects, Data Scientists, and Software Engineers
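For illustration, a minimal sketch of a Python Cloud Function that appends incoming JSON records to BigQuery; the table name and payload shape are placeholders, not part of the actual project:

```python
import functions_framework  # pip install functions-framework
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Placeholder fully qualified table name
TABLE_ID = "my-project.raw_layer.events"

bq_client = bigquery.Client()

@functions_framework.http
def ingest(request):
    """HTTP-triggered Cloud Function: append posted JSON rows to BigQuery."""
    rows = request.get_json(silent=True) or []
    errors = bq_client.insert_rows_json(TABLE_ID, rows)
    if errors:
        return {"status": "error", "details": errors}, 500
    return {"status": "ok", "inserted": len(rows)}, 200
```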
Tech stack:
GCP (BigQuery, Cloud Functions, Cloud Workflows), DBT, Python, Terraform, Git
Requirements:
- Ability to work independently and within cross-functional teams
- Strong hands-on experience
- English: Upper Intermediate or higher
Nice to have:
- Experience with OLAP cubes and PowerBI
-
· 54 views · 11 applications · 29d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
Senior Data Engineer | Fintech | Remote | Full-Time
Level: Senior
English: Upper-Intermediate or higher
Workload: Full-time
Location: Fully remote (preference for time zones close to Israel)
Time Zone: CET (Israel)
Start Date: ASAP
Duration: 6+ months
About the Client:
Our client is an innovative fintech company dedicated to optimizing payment transaction success rates. Their advanced technology integrates seamlessly into existing infrastructures, helping payment partners and platforms recover lost revenue by boosting transaction approval rates.
Project Stage: Ongoing development
What You'll Be Doing:
- Design and implement robust, scalable data pipelines and ETL workflows
- Develop comprehensive end-to-end data solutions to support analytics, product, and business needs
- Define data requirements, architect systems, and build reliable data models
- Integrate backend logic into data processes for actionable insights
- Optimize system performance, automate processes, and monitor for improvements
- Collaborate closely with cross-functional teams (Product, Engineering, Data Science)
Must-Have Skills:
- 5+ years of experience in data engineering
- Deep expertise in building data warehouses and BI ecosystems
- Strong experience with modern analytical databases (e.g., Snowflake, Redshift)
- Proficient with data transformation tools (e.g., dbt, Dataform)
- Familiar with orchestration tools (e.g., Airflow, Prefect)
- Skilled in Python or Java and advanced SQL (including performance tuning)
- Experience managing large-scale data systems in cloud environments
- Infrastructure as code and DevOps mindset
Soft Skills:
- High ownership and accountability
- Strong communication and collaboration abilities
- Experience in dynamic, startup-like environments
- Analytical thinker with a proactive mindset
- Comfortable working independently
- Fluent spoken and written English
Tech Stack:
Python or Java, SQL, Snowflake, Redshift, dbt, Dataform, Airflow
Interview Process:
- English Check (15 min)
- Technical Interview (1–1.5 hours)
- Final Interview (1 hour) – Client
-
· 107 views · 19 applications · 29d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
Our long-standing client from the UK is looking for a Senior Data Engineer.
Project: Decommissioning legacy software and systems
Tech stack:
DBT, Snowflake, SQL, Python, Fivetran
Requirements:
- Solid experience with CI/CD processes in SSIS
- Proven track record of decommissioning legacy systems and migrating data to modern platforms (e.g., Snowflake)
- Experience with AWS (preferred) or Azure
- Communicative and proactive team player – able to collaborate and deliver
- Independent and flexible when switching between projects
- English: Upper Intermediate or higher
-
· 73 views · 20 applications · 29d
Data Engineer to $4800
Full Remote · Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
We are currently seeking a skilled Data Engineer to join our team in the development and maintenance of robust data solutions. This role involves building and optimizing data pipelines, managing ETL processes, and supporting data visualization needs for business-critical use cases.
As part of your responsibilities, you will design and implement cloud infrastructure on AWS using AWS CDK in Python, contribute to solution architecture, and develop reusable components to streamline delivery across projects. You will also implement data quality checks and design scalable data models leveraging both SQL and NoSQL technologies.
Project details:
- Start: ASAP
- Duration: Until 31.12.2026
- Location: Remote
- Language: English
Responsibilities:
- Develop, monitor, and maintain efficient ETL pipelines and data workflows
- Build infrastructure on AWS using AWS CDK (Python); see the CDK sketch after this list
- Design and implement reusable data engineering components and frameworks
- Ensure data quality through validation, testing, and monitoring mechanisms
- Contribute to solution architecture and technical design
- Create and optimize scalable data models in both SQL and NoSQL databases
- Collaborate with cross-functional teams including data scientists, analysts, and product owners
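A minimal, hypothetical AWS CDK (Python) sketch of the kind of infrastructure definition mentioned above; the stack name, bucket, and Lambda are placeholders, not the project's actual resources:

```python
import aws_cdk as cdk
from aws_cdk import aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class DataPlatformStack(cdk.Stack):
    """Placeholder stack: a raw-data bucket plus a small processing Lambda."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        raw_bucket = s3.Bucket(self, "RawDataBucket", versioned=True)

        processor = _lambda.Function(
            self,
            "EtlProcessor",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_inline("def handler(event, context):\n    return 'ok'\n"),
        )

        # Allow the Lambda to read the raw data it will transform
        raw_bucket.grant_read(processor)

app = cdk.App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```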
Requirements:
- Solid experience in building and maintaining ETL pipelines
- Hands-on experience with data visualization tools or integrations (e.g., Tableau, Power BI, or custom dashboards via APIs)
- Strong working knowledge of AWS services, especially with AWS CDK (Python)
- Good understanding of SQL and NoSQL database technologies
- Familiarity with version control systems (e.g., Git)
- Experience working in Agile environments
- Strong communication skills and ability to work autonomously in remote teams
-
· 64 views · 3 applications · 27d
Senior Data Engineer (Python) to $8000
Full Remote · Ukraine, Poland, Bulgaria, Portugal · 8 years of experience · Upper-Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a Data Engineer, you will operate at the intersection of data engineering, software engineering, and system architecture. This is a high-impact, cross-functional role where you'll take end-to-end ownership – from designing scalable infrastructure and writing robust, production-ready code to ensuring the reliability and performance of our systems in production.
Key Responsibilities:
- Collaborate closely with software architects and DevOps engineers to evolve our AI training, inference, and delivery architecture and deliver resilient, scalable, production-grade machine learning pipelines.
- Design and implement scalable machine learning pipelines with Airflow, enabling efficient parallel execution (see the DAG sketch after this list).
- Enhance our data infrastructure by refining database schemas, developing and improving APIs for internal systems, overseeing schema migrations, managing data lifecycles, optimizing query performance, and maintaining large-scale data pipelines.
- Implement monitoring and observability, using AWS Athena and QuickSight to track performance, model accuracy, operational KPIs and alerts.
- Build and maintain data validation pipelines to ensure incoming data quality and proactively detect anomalies or drift.
- Represent the data science team's needs in cross-functional technical discussions and solutions design.
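A minimal sketch of a parallel Airflow pipeline of the kind referenced above; the DAG name, segments, and task bodies are made up for illustration, and the real pipelines will be more involved:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model(segment: str) -> None:
    """Placeholder training step for one product segment."""
    print(f"training pricing model for segment={segment}")

def publish_models() -> None:
    print("publishing trained models")

with DAG(
    dag_id="pricing_model_training",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Fan out: one independent (parallel) training task per segment
    train_tasks = [
        PythonOperator(
            task_id=f"train_{segment}",
            python_callable=train_model,
            op_kwargs={"segment": segment},
        )
        for segment in ("electronics", "fashion", "grocery")
    ]

    publish = PythonOperator(task_id="publish_models", python_callable=publish_models)

    # All training tasks run in parallel, then the publish step follows
    train_tasks >> publish
```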
Required Competence and Skills:
- A Bachelor's or higher in Computer Science, Software Engineering or a closely related technical field, demonstrating strong analytical and coding skills.
- 8+ years of experience as a data engineer, software engineer, or similar role, with a proven track record of using data to drive business outcomes.
- Strong Python skills, with experience building modular, testable, and production-ready code.
- Solid understanding of Databases and SQL, including indexing best practices, and hands-on experience working with large-scale data systems (e.g., Spark, Glue, Athena).
- Practical experience with Airflow or similar orchestration frameworks, including designing, scheduling, maintaining, troubleshooting, and optimizing data workflows (DAGs).
- A solid understanding of data engineering principles: ETL/ELT design, data integrity, schema evolution, and performance optimization.
- Familiarity with AWS cloud services, including S3, Lambda, Glue, RDS, and API Gateway.
Nice-to-Haves
- Experience with MLOps practices such as CI/CD, model and data versioning, observability, and deployment.
- Familiarity with API development frameworks (e.g., FastAPI).
- Knowledge of data validation techniques and tools (e.g., Great Expectations, data drift detection).
- Exposure to AI/ML system design, including pipelines, model evaluation metrics, and production deployment.
Why Us?
We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).
We provide full accounting and legal support in all countries we operate.
We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
-
· 42 views · 7 applications · 24d
Data Engineer
Countries of Europe or Ukraine · 4 years of experience · Upper-Intermediate
We are building a next-generation AI-powered platform designed for comprehensive observability of digital infrastructure, including mobile networks and data centers. By leveraging advanced analytics, automation, and real-time monitoring, we empower businesses to optimize performance, enhance reliability, and prevent failures before they happen.
Our platform delivers deep insights, anomaly detection, and predictive intelligence, enabling telecom operators, cloud providers, and enterprises to maintain seamless connectivity, operational efficiency, and infrastructure resilience in an increasingly complex digital landscape.
We have offices in Doha, Qatar and Muscat, Oman. This position requires relocation to one of these offices.
Job Summary
As a Senior Data Engineer, you will be responsible for building and maintaining end-to-end data infrastructure that powers our AI-driven observability platform. You will work with large-scale datasets, both structured and unstructured, and design scalable pipelines that enable real-time monitoring, analytics, and machine learning. This is a hands-on engineering role requiring deep expertise in data architecture, cloud technologies, and performance optimization.
Key Responsibilities
Data Pipeline Development
- Design, develop, and maintain scalable ETL/ELT pipelines from scratch using modern data engineering tools
- Ingest and transform high-volume data from multiple sources, including APIs, telemetry, and internal systems
- Write high-performance code to parse and process large files (JSON, XML, CSV, etc.); see the parsing sketch after this list
- Ensure robust data delivery for downstream systems, dashboards, and ML models
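As an illustration of the file-parsing work, a small sketch that streams a large CSV without loading it fully into memory; the file path and column name are assumptions, not details from the actual platform:

```python
import csv
from collections import Counter
from pathlib import Path

def count_events_by_site(path: Path) -> Counter:
    """Stream a large telemetry CSV row by row and aggregate events per site."""
    counts: Counter = Counter()
    with path.open(newline="") as handle:
        reader = csv.DictReader(handle)
        for row in reader:
            # 'site_id' is an assumed column name in the telemetry export
            counts[row.get("site_id", "unknown")] += 1
    return counts

if __name__ == "__main__":
    print(count_events_by_site(Path("telemetry.csv")).most_common(10))
```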
Infrastructure & Optimization
- Build and manage containerized workflows using Docker and Kubernetes
- Optimize infrastructure for performance, availability, and cost-efficiency
- Implement monitoring, alerting, and data quality checks across the data pipeline stack
Collaboration & Best Practices
- Work closely with AI/ML, backend, and platform teams to define and deliver on data requirements
- Enforce best practices in data modeling, governance, and engineering
- Participate in CI/CD processes, infrastructure automation, and documentation
Required Qualifications
Experience
- 4+ years of hands-on experience in data engineering or similar backend roles
- Proven experience designing and deploying production-grade data pipelines from scratch
Technical Skills
- Proficiency in Python or Scala for data processing
- Deep knowledge of SQL and noSQL systems (e.g., MongoDB, DynamoDB, Cassandra, Firebase)
- Hands-on experience with cloud platforms (AWS, GCP, or Azure)
- Familiarity with data tools like Apache Spark, Airflow, Kafka, and distributed systems
- Experience with CI/CD practices and DevOps for data workflows
Soft Skills
- Excellent communication skills and the ability to work independently in a fast-paced environment
- Strong analytical mindset and attention to performance, scalability, and system reliability
Preferred Qualifications
- Background in the telecom or IoT industry
- Certifications in cloud platforms or data technologies
- Experience with real-time streaming, event-driven architectures, or ML/Ops
- Familiarity with big data ecosystems (e.g., Hadoop, Cloudera)
- Knowledge of API development or experience with Flask/Django
- Experience setting up A/B test infrastructure and experimentation pipelines
Nice to have:
- Experience with the integration and maintenance of vector databases (e.g., Pinecone, Weaviate, Milvus, Qdrant) to support LLM workflows, including embedding search, RAG, and semantic retrieval.
What We Offer
- Performance-Based Compensation: Tied to achieving and exceeding performance targets, with accelerators for surpassing goals
- Shares and Equity: Participation in our Employee Stock Option Plan (ESOP)
- Growth Opportunities: Sponsored courses, certifications, and continuous learning programs
- Comprehensive Benefits: Health insurance, pension contributions, and professional development support
- Annual Vacation: Generous paid annual leave
- Dynamic Work Environment: A culture of innovation, collaboration, and creative freedom
- Impact and Ownership: Shape the future of digital infrastructure and leave your mark
- Flexible Work Arrangements: Options to work remotely or from our offices
- A Mission-Driven Team: Join a diverse, passionate group committed to meaningful change
-
· 46 views · 3 applications · 23d
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · Upper-Intermediate
Dataforest is seeking an experienced Senior Data Engineer to join our dynamic team. You will be responsible for developing and maintaining data-processing architecture, as well as optimizing and monitoring our internal systems.
Requirements:
- 3+ years of commercial experience with Python.
- Solid foundational knowledge of ElasticSearch (see the bulk-indexing sketch after this list), including:
  - Ability to perform batch updates using bulk operations.
  - Understanding of index mapping and how to adapt it for your project's needs.
  - (Nice to have) Some exposure to vector search concepts.
- Experience working with PostgreSQL databases.
- Proven experience in setting up and managing monitoring systems with CloudWatch, Prometheus, and Grafana.
- Profound understanding of algorithms and their complexities, with the ability to analyze and optimize them effectively.
- Excellent programming skills in Python with a strong emphasis on optimization and code structuring.
- Solid understanding of ETL principles and best practices.
- Excellent collaborative and communication skills, with demonstrated ability to mentor and support team members.
- Experience working with Linux environments, cloud services (AWS), and Docker.
- Strong decision-making capabilities with the ability to work independently and proactively.
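For reference, a minimal sketch of an ElasticSearch bulk update using the official Python client; the index name, document shape, and cluster address are placeholders:

```python
from elasticsearch import Elasticsearch, helpers  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster for illustration

def bulk_index(docs):
    """Index a batch of documents into a hypothetical 'products' index."""
    actions = (
        {"_index": "products", "_id": doc["id"], "_source": doc}
        for doc in docs
    )
    ok, errors = helpers.bulk(es, actions, raise_on_error=False)
    return ok, errors

if __name__ == "__main__":
    sample = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]
    print(bulk_index(sample))
```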
Will be a plus:
- Experience in web scraping, data extraction, cleaning, and visualization.
- Understanding of multiprocessing and multithreading, including process and thread management.
- Familiarity with Redis.
- Experience with Flask / Flask-RESTful for API development.
- Knowledge and experience with Kafka.
Key Responsibilities:
- Develop and maintain a robust data processing architecture using Python.
- Effectively utilize ElasticSearch and PostgreSQL for efficient data management.
- Design and manage data pipelines using Kafka and SQS.
- Optimize code structure and performance for maximum efficiency.
- Design and implement efficient ETL processes.
- Analyze and optimize algorithmic solutions for better performance and scalability.
- Collaborate within the AWS stack to ensure flexible and reliable data processing systems.
- Provide mentorship and guidance to colleagues, fostering a collaborative and supportive team environment.
- Independently make decisions related to software architecture and development processes to drive the project forward.
We offer:
- Great networking opportunities with international clients, challenging tasks;
- Building interesting projects from scratch using new technologies;
- Personal and professional development opportunities;
- Competitive salary fixed in USD;
- Paid vacation and sick leaves;
- Flexible work schedule;
- Friendly working environment with minimal hierarchy;
- Team building activities and corporate events.
-
· 17 views · 0 applications · 22d
Senior Data Engineer
Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
Project description
We are hiring a Senior Full-Stack Software Developer. Our client team consists of frontend and backend developers, data engineers, data scientists, QA engineers, cloud engineers, and project managers.
Responsibilities
Participate in requirements clarification and sprint planning sessions.
Design technical solutions and implement them, including ETL pipelines.
Build robust data pipelines to extract, transform, and load data using PySpark (see the sketch after this list).
Optimize ETL processes: enhance and tune existing ETL processes for better performance, scalability, and reliability.
Write unit and integration tests.
Support QA teammates in the acceptance process.
Resolve PROD incidents as a 3rd-line engineer.
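A minimal PySpark ETL sketch along the lines of the responsibilities above; the file paths and column names are assumptions, not details of the client's actual data:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("positions_etl_sketch").getOrCreate()

# Extract: read a raw CSV export (path is a placeholder)
raw = spark.read.csv("/data/raw/positions.csv", header=True, inferSchema=True)

# Transform: keep valid rows and aggregate market value per portfolio
positions = (
    raw.filter(F.col("market_value").isNotNull())
       .groupBy("portfolio_id")
       .agg(F.sum("market_value").alias("total_market_value"))
)

# Load: write the result as Parquet for downstream consumers
positions.write.mode("overwrite").parquet("/data/curated/portfolio_totals")

spark.stop()
```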
Skills
Must have
Min. 7 years of experience in IT/Data
Bachelor's degree in IT or a related field.
Exceptional logical reasoning and problem-solving skills
Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
SQL: Strong expertise in writing and optimizing complex SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
Data Warehousing: Experience working with data warehousing concepts and platforms, ideally DataBricks
ETL Tools: Familiarity with ETL tools & processes
Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
Data Quality Tools: Experience implementing data validation, cleansing, and quality framework
Nice to have
Understanding of Investment Data domain.
Languages
English: B2 Upper Intermediate
Β· 56 views Β· 8 applications Β· 22d
Data Engineer
Full Remote · Worldwide · 4 years of experience · Upper-Intermediate
At Uvik Software, we are looking for a talented Data Engineer to join our team. If you are passionate about data, cloud technologies, and building scalable solutions, this role is for you!
You will work on designing, developing, and optimizing data pipelines, implementing machine learning models, and leveraging cloud platforms like AWS (preferred), Azure, or GCP. You'll collaborate with cross-functional teams to transform raw data into actionable insights, enabling smarter business decisions.
Key Responsibilities:
- Develop and maintain scalable ETL/ELT pipelines for data processing.
- Design and optimize data warehouses and data lakes on AWS, Azure, or GCP.
- Implement machine learning models and predictive analytics solutions.
- Work with structured and unstructured data, ensuring data quality and integrity.
- Optimize query performance and data processing workflows.
- Collaborate with software engineers, analysts, and business stakeholders to deliver data-driven solutions.
Requirements:
- 4+ years of experience as a Data Engineer.
- Strong proficiency in SQL and experience with relational and NoSQL databases.
- Hands-on experience with cloud services: AWS (preferred), Azure, or GCP.
- Proficiency in Python or Scala for data processing.
- Experience with Apache Spark, Kafka, Airflow, or similar tools.
- Solid understanding of data modeling, warehousing, and big data processing frameworks.
- Experience with machine learning frameworks (TensorFlow, Scikit-learn, PyTorch) is a plus.
- Familiarity with DevOps practices, CI/CD pipelines, and Infrastructure as Code (Terraform, CloudFormation) is an advantage.
-
· 27 views · 0 applications · 22d
Senior Data Engineer
Full Remote · Ukraine · 7 years of experience · Upper-Intermediate
Project Description:
We are hiring a Senior Full-Stack Software Developer. Our client team consists of frontend and backend developers, data engineers, data scientists, QA engineers, cloud engineers, and project managers.
Responsibilities:
• Participate in requirements clarification and sprint planning sessions.
• Design technical solutions and implement them, including ETL pipelines: build robust data pipelines to extract, transform, and load data using PySpark.
• Optimize ETL processes: enhance and tune existing ETL processes for better performance, scalability, and reliability.
• Write unit and integration tests.
• Support QA teammates in the acceptance process.
• Resolve PROD incidents as a 3rd-line engineer.
Mandatory Skills Description:
* Min. 7 years of experience in IT/Data
* Bachelor's degree in IT or a related field.
* Exceptional logical reasoning and problem-solving skills
* Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
* SQL: Strong expertise in writing and optimizing complex SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
* Data Warehousing: Experience working with data warehousing concepts and platforms, ideally DataBricks
* ETL Tools: Familiarity with ETL tools & processes
* Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
* Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
* Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
* Data Quality Tools: Experience implementing data validation, cleansing, and quality framework
Nice-to-Have Skills Description:
Understanding of Investment Data domain.
-
· 25 views · 1 application · 22d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
N-iX is looking for a Senior Data Engineer (with Data Science/MLOps experience) to join our team!
Our client: a global biopharmaceutical company.
As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.
Key Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
- Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
- Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
- Assist in optimizing data pipelines to improve machine learning workflows.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
Tools and skills you will use in this role:
- Palantir Foundry
- Python
- PySpark
- SQL
- TypeScript
Required:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Proficiency in containerization technologies (e.g., Docker, Kubernetes);
- Familiarity with ML Ops concepts, including model deployment and monitoring.
- Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
- Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
- Experience working with feature engineering and data preparation for machine learning models.
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities.
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills.
Nice to have:
- Certification in Cloud platforms, or related areas;
- Experience with search engine Apache Lucene, Web Service Rest API;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
-
· 25 views · 0 applications · 21d
Senior Data Engineer (Data Science/MLOps Background)
Full Remote · Ukraine · 5 years of experience · Upper-Intermediate
Our client is seeking a proactive Senior Data Engineer to join their team.
As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry.
Your background in machine learning and data science will be valuable in optimizing data workflows, enabling efficient model deployment, and supporting AI-driven initiatives.
The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.
Key Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
- Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Collaborate with Data Scientists to facilitate model deployment and integration into production environments.
- Support the implementation of basic ML Ops practices, such as model versioning and monitoring.
- Assist in optimizing data pipelines to improve machine learning workflows.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
Tools and skills you will use in this role:
- Palantir Foundry
- Python
- PySpark
- SQL
- TypeScript
Required:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Proficiency in containerization technologies (e.g., Docker, Kubernetes);
- Familiarity with ML Ops concepts, including model deployment and monitoring.
- Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
- Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
- Experience working with feature engineering and data preparation for machine learning models.
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities.
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills.
Nice to have:
- Certification in Cloud platforms, or related areas;
- Experience with search engine Apache Lucene, Web Service Rest API;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
Company offers:
- Flexible working format – remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings