Jobs

  • 68 views · 10 applications · 12d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · B1 - Intermediate

    Your expertise:

    • Strong analytical skills and a proven ability to extract actionable insights from data
    • Experience with at least one SQL-based database or data warehouse solution (e.g., MySQL, PostgreSQL, MSSQL, Snowflake, BigQuery, or Apache Iceberg-based systems)
    • Solid understanding of ETL processes and experience building them from scratch (experience with AWS Glue or Apache Airflow is a plus)
    • Proficiency in Python
    • Experience in gathering and analyzing system requirements
    • Excellent communication skills
    • Intermediate level of English (B1 or above)


    Will definitely be a plus:

    • Cloud experience, particularly with AWS
    • Familiarity with BI tools such as Tableau, Power BI, or similar

    What’s in it for you? 

    • Opportunity to deal with top-notch technologies and approaches in a world-leading product company with millions of customers
    • Opportunity to make a difference for online privacy, freedom of speech, and net neutrality
    • Decent market rate compensation depending on experience and skills
    • Developed corporate culture: no micromanagement, culture based on principles of truth, trust, and transparency
    • Support of personal and professional development
      • coverage of costs of external trainings, conferences, professional literature
      • support of experienced colleagues
      • in-house events and trainings
      • regular knowledge sharing in teams
      • English classes and speaking clubs
    • Life-balance support
      • 25 working days of vacation
      • 5 days of paid sick leave per year without providing a medical certificate (no limitation on sick leaves with medical confirmation)
      • generous maternity / paternity leave program
    • Professionally strong environment, friendly and open atmosphere, ability to influence the product development and recognition for it

    You will be involved in:

    • Create sustainable analytics solutions for internal clients
    • Learn new technologies
    • Create and implement tracking documents to meet stated requirements for metadata management, operational data stores, and ETL environments
    • Collaborate with internal customers and IT partners, including system architects, software developers, database administrators, design analysts, and information modeling experts, to determine project requirements and capabilities, and strategize development and implementation timelines
    • Research new technologies, data modeling methods, and information management systems to determine which ones should be incorporated into company data architectures, and develop implementation timelines and milestones

    About the company and project:

    Namecheap was founded in 2000 on the idea that all people deserve value-priced domains delivered through stellar service. Today, Namecheap is a leading ICANN-accredited domain name registrar and web hosting company with over 16 million customers and 19 million domains under management, and we’re just getting started.

    Our culture is built on the values that we live every day: the way we work, the way we collaborate with our global network of colleagues and the way we relentlessly innovate solutions that meet the emerging needs of our customers.

    We are a Business Intelligence team solving business challenges with innovative technology solutions, experienced in setting up BI, ETL, DW, and ML solutions. We are currently working on a new project and are looking for a Data Engineer who will take part in building a solution from scratch.

  • 77 views · 5 applications · 12d

    Data Engineer

    Full Remote · EU · Product · 2 years of experience · B1 - Intermediate

    PIN-UP Global is an international holding company specializing in developing and implementing advanced technologies, B2B solutions and innovative products.

     

    We ensure certification and licensing of our products, providing customers and partners of the holding company with high-quality and reliable solutions. PIN-UP Global is represented in Cyprus, Poland, Kazakhstan, Armenia, Peru, and Malta. The holding’s headquarters is located in Cyprus.

     

    We’re looking for a sharp and driven Middle Data Engineer to join our team!

     

    Responsibilities:

    - Creating and maintaining data pipelines for reports and ML models.
    - Developing a deep understanding of the peculiarities of the data you work with.
    - Implementing and managing a comprehensive data quality framework that monitors data integrity and quality.
    - Responding to alerts in a timely manner and identifying cause-and-effect relationships behind data deviations.
    - Helping to support related infrastructure: setting up monitoring and alerting, communicating with the responsible specialists, and verifying that issues get resolved.
    - Maintaining project documentation.

     

    Requirements:

    - 2+ years of experience as a Data Engineer in a product company.
    - Proficiency in Python and SQL for data manipulation and processing.
    - Familiarity with data orchestration tools such as Airflow for ELT/ETL.
    - Experience working with RDBMS (working with ClickHouse would be a plus).
    - Experience working with AWS services: S3, Athena.
    - Experience working with GCP: Cloud Storage, BigQuery.
    - Experience with Git & GitLab for version control and collaboration. 

    - Understanding of CI/CD.
    - Understanding of data modeling concepts.
    - Experience working with data visualization tools (Grafana, Metabase, etc.).
    - Understanding how BI systems work.

     

    Will be a plus:

    - Experience working with key-value stores: Redis / Amazon DynamoDB.
    - Experience working with AWS: Lambda, Sagemaker, CloudWatch, IAM, IAM Identity Center.
    - Experience working on API integrations.
    - Working experience as a data analyst.
    - Ability to complete tasks end to end: write SQL queries from scratch based on formal descriptions or study new data -> configure effective processing, storage, and updating -> cover with tests.
    - Experience in developing and optimizing data pipelines when hardware/software resources can be limited.
    - Experience with real-time data processing.
    - Understanding of Feature Stores concepts.

     

     

    What are the conditions and bonuses?

    πŸ€An exciting and challenging job in a fast-growing product holding, the opportunity to be part of a multicultural team of top professionals in Development, Engineering and Architecture, Management, Operations, Marketing, etc;

    🀝Great working atmosphere with passionate IT experts and leaders, sharing a friendly culture and a success-driven mindset is guaranteed;

    πŸ“Beautiful offices in Warsaw, Limassol, Yerevan β€” work with comfort and enjoy the opportunity to build a network of connections with IT professionals day by day;

    πŸ§‘β€πŸ’»Laptop & all necessary equipment for work according to the holding standards;

    πŸ–Paid vacations, personal events days, days off;

    πŸ«–Paid sick leave;

    πŸ‘¨β€βš•Medical insurance;

    πŸ’΅Referral program β€” enjoy cooperation with your colleagues and get a bonus;

    πŸ“šEducational support by our L&D team: internal and external trainings and conferences, courses on Udemy;

    πŸ—£Free internal English courses;

    πŸ€Έβ€β™€Sport benefit;

    πŸ¦„Multiple internal activities: online platform with newsletters, quests, gamification, and presents for collecting bonuses, PIN-UP talks club for movie and book lovers, board games cozy evenings, special office days dedicated to holidays, etc;

    🎳Company events, team buildings.

  • 16 views · 1 application · 12d

    Mediation System Engineer

    Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

    Your Project

    As a Mediation System Engineer, your mission is to design, develop, and optimize robust mediation solutions that enable accurate and efficient data processing across telecom networks.

    Your Role

    • Good exposure to and hands-on experience with Nokia Mediation (or any other mediation product/system) installation, migration, and troubleshooting is a plus;
    • Good knowledge of C, C++, Perl, Java, and XML;
    • Working experience with the Business Logic Tool (BLT), TimesTen, and other product-specific features (Nokia Mediation specific) is a plus;
    • Scripting knowledge of Shell, Bash, Python, and Ansible;
    • Exposure to and experience with test automation tools is also a plus;
    • Knowledge of Kubernetes and Docker containers is a plus;
    • Platform (VNF, BM, CNF) plus CNF design and development (Java, C++, Perl);
    • Skills in both software engineering and platform work.

    Your Profile

    • Bachelor’s degree in computer science, electronics and communications, or a relevant field;
    • A minimum of 2 years of experience in Nokia Mediation or any other mediation product/system;
    • Working knowledge of Nokia Mediation (or any other mediation product/system) with the ability to work independently and to write code, test, and deliver good-quality workflows/nodes/streams and functional tests;
    • Working experience with the Nokia Mediation product (formerly Comptel’s EventLink) 7.x and above, or Nokia Data Refinery 19.x and above, is also a plus;
    • Good background in and understanding of telecom domain concepts;
    • Good debugging, analytical, and problem-solving skills;
    • Good communication skills (written and verbal);
    • Good exposure to and working experience with Oracle SQL, PL/SQL, PostgreSQL, and Unix.
  • 46 views · 1 application · 10d

    Middle Data Engineer

    Full Remote · Ukraine · 3 years of experience · C1 - Advanced

    We are looking for a talented and driven Data Engineer to join our customer’s team. It’s a global leader in industry, worker safety, and consumer goods. Headquartered in Maplewood, Minnesota, this multinational powerhouse produces over 60,000 innovative products, ranging from adhesives, abrasives, and laminates to personal protective equipment, window films, car-care products, and cutting-edge electronic and optical materials.

     

    In this role, you will play a key part in building and maintaining robust data pipelines, transforming raw information into valuable insights that power analytics and business intelligence across the organization. This is your chance to work on impactful projects, sharpen your technical skills, and contribute to a company that’s shaping the future of industry and innovation.

     

    If you are passionate about data, love solving complex problems, and want to be part of a team where your work truly makes a difference, this is the opportunity for you!

     

    Requirements

    • 3+ years of professional experience in data engineering or a related role.
    • Solid proficiency with Python for data processing and automation, with at least 2-3 years of hands-on experience.
    • Strong SQL skills for querying and manipulating complex datasets.
    • Experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
    • Hands-on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
    • Good understanding of data warehousing concepts, ETL processes, and data integration techniques.
    • Experience in applying data quality assessment and improvement techniques.
    • Experience working with various data formats, including structured, semi-structured, and unstructured data (e.g., CSV, JSON, Parquet).
    • Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
    • Good communication skills and the ability to work effectively as part of a team.

     

    Preferred Qualifications & Skills

    • Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
    • Familiarity with modern data platforms like Microsoft Fabric for data modeling and integration.
    • Experience with consuming data from REST APIs.
    • Experience with database design concepts and performance tuning.
    • Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
    • Awareness of modern data architecture concepts such as Data Mesh.
    • Experience in supporting production data pipelines.

     

    Job responsibilities

    • Develop & Maintain Data Pipelines: Develop, test, and maintain robust and efficient data pipelines using Python, SQL, and Spark on the Azure cloud platform.
    • Implement Data Solutions: Implement and support end-to-end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
    • Utilize Cloud Data Services: Work with Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build and manage data workflows.
    • Ensure Data Quality: Implement data quality checks, including data profiling, cleansing, and validation routines, to help ensure the accuracy and reliability of our data.
    • Performance Tuning: Assist in monitoring and optimizing data pipelines for performance and scalability under the guidance of senior engineers.
    • Code Reviews & Best Practices: Actively participate in code reviews and adhere to team best practices in data engineering and coding standards.
    • Stakeholder Collaboration: Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and assist in delivering effective solutions.
    • Troubleshooting: Provide support for production data pipelines by investigating and resolving data-related issues.
  • 37 views · 1 application · 9d

    Senior/Mid Data Engineer

    Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

    Our client is a Fortune 500 company and one of the largest global manufacturing organizations operating in the fields of industrial systems, worker safety, healthcare, and consumer goods. The company is dedicated to creating technology and products that advance every business, improve every home, and enhance every life.

    The product we are building supports automotive repair centers in various areas, including inventory tracking, collision estimation, invoicing, and analytics. Within the project, the team focuses on new functionality analysis and development, as well as maintenance and enhancement of existing web applications.

     

    Job Responsibilities

     

    • Implement data warehouses using data models and build data pipelines in Azure Fabric
    • Conduct research and make recommendations on database products, services, protocols, and standards to support procurement and development efforts
    • Install and configure database components, ensuring access, consistency, and integrity
    • Monitor, optimize, and allocate physical data storage for all database systems
    • Perform scaling and performance optimization

     

     

    Job Requirements

     

    • 3+ years of experience as a Data Engineer
    • Knowledge and experience with Python, SQL, Azure Synapse, Azure OneLake, Data Factory, and Spark jobs
    • Experience in building Data Warehouses
    • Awareness of data processing flowcharting, relational and dimensional data modeling, as well as translating logical models into physical models
    • Hands-on experience with Azure and good knowledge of its sub-components and services is a plus
    • Understanding of database structures, theories, principles, and practices
    • Technical experience in configuring and supporting database systems
    • Hands-on experience in database tuning and troubleshooting (indexing, statistics, and query optimization)
    • Experience with Agile projects
    • Good understanding of the Software Development Lifecycle
    • Excellent communication and collaboration skills
    • Upper-Intermediate English or higher

     

  • 16 views · 0 applications · 9d

    Senior Cloud Infrastructure Architect

    Hybrid Remote · Ukraine (Dnipro, Kyiv, Lviv + 3 more cities) · 5 years of experience · B2 - Upper Intermediate

    Our client is a leading Fortune 500 financial technology company that provides comprehensive payment solutions and financial services across multiple continents. They process billions of transactions annually and serve millions of customers worldwide.

     

    You'll collaborate with a world-class team of senior data scientists, ML engineers, and technology consultants from leading organizations in the fintech and cloud computing space. This diverse group brings together deep technical expertise, industry knowledge, and proven experience delivering mission-critical solutions at enterprise scale.

     

    We are seeking an experienced Senior Cloud Infrastructure Architect with deep expertise in AI/ML infrastructure implementations. This role is designed for seasoned cloud architecture professionals who have successfully designed and deployed enterprise-scale AI environments, not for those simply exploring cloud technologies.

     

    Technology stack

     

    • AWS Bedrock, SageMaker, and a comprehensive AI/ML service ecosystem
    • Vector databases and advanced RAG architectures
    • Enterprise-scale data processing and real-time model deployment systems
    • Automated CI/CD pipelines specifically designed for ML workflows

     

     

    Responsibilities

    • Design and architect AI/ML environments using AWS Bedrock, SageMaker, and vector database infrastructure
    • Implement enterprise-grade networking solutions for AI workloads and data processing pipelines
    • Architect and deploy database and storage solutions optimized for GenAI applications
    • Develop Infrastructure as Code (IaC) using Terraform and CloudFormation for AI platform deployment
    • Design and implement serverless architectures supporting scalable AI/ML workflows
    • Establish security best practices and compliance frameworks for AI infrastructure
    • Optimize performance and tuning of AI environments for enterprise-scale operations
    • Ensure high availability, disaster recovery, and scalability of AI platform infrastructure

    Requirements

    • Hands-on experience architecting and deploying AI environments using AWS Bedrock, SageMaker, and vector databases
    • Advanced knowledge of cloud networking concepts, VPC design, and secure connectivity for AI workloads
    • Proven experience with database and storage deployments optimized for AI/ML applications and large-scale data processing
    • Deep understanding of security best practices and implementation in regulated financial services environments
    • Proficiency in developing IaC solutions using Terraform and CloudFormation for AI platform automation
    • Hands-on experience architecting and deploying serverless solutions supporting AI/ML workflows
    • Demonstrated skills in performance tuning and optimization of cloud environments at enterprise scale
    • Proven track record of designing and managing mission-critical AI infrastructure in production environments
    • 7+ years of cloud infrastructure experience, including 3+ years of dedicated AI/ML infrastructure architecture experience
    • Availability during US Eastern Time (ET) business hours to collaborate with onsite team

    Nice to have

    • Bachelor's degree in Computer Science, Engineering, Information Technology, or related technical field (Master's preferred)
    • AWS certifications (Solutions Architect Professional, Security Specialty, etc.)
    • Experience with financial services infrastructure and compliance requirements
    • Knowledge of regulatory frameworks (PCI DSS, SOX, etc.) and their infrastructure implications

     

     

    Working Time-zone

    US/Canada (GMT-7)

  • 27 views · 1 application · 9d

    Data Engineer

    Full Remote · EU · 5 years of experience · C1 - Advanced

    We are looking for an experienced Data Engineer and Report Developer to design, develop, and optimize enterprise reporting and planning systems across SAP Datasphere, BW/4HANA, SAP Analytics Cloud, and Qlik Sense. The ideal candidate combines strong data modeling and analytical skills with the ability to build scalable reporting pipelines and deliver insights that drive business decisions.

    Details
    Location: Remote within the EU
    Employment Type: Full-time, Contract (6–8 months)
    Start Date: ASAP
    Language Requirements: Fluent English

    Key Responsibilities
    Build and support reporting and planning solutions using SAP Datasphere, BW/4HANA, SAP Analytics Cloud (SAC), and Qlik Sense.
    Build reports from the SAP environment across different SAP modules and databases.
    Design, implement, and maintain data pipelines for enterprise reporting and analytics.
    Develop ABAP CDS views (SQL, AMDP Script) as backends for Fiori, SAC, and Analysis for Microsoft Office Excel.
    Collaborate with cross-functional teams to deliver high-quality data models and reports.
    Ensure data accuracy, consistency, and performance optimization across reporting layers.
    Support integration of business intelligence tools and assist in troubleshooting and performance tuning.

    Requirements
    5+ years of professional experience in data engineering, reporting, or analytics roles.
    Solid SQL skills and hands-on experience in ABAP CDS view development (SQL, AMDP Script).
    Practical experience in data modeling using SAP DSP or similar data warehouse technologies.
    Strong working knowledge of SAP Analytics Cloud (SAC) reporting (planning experience is a plus).
    Experience building Qlik Sense dashboards for visualization and analytics.
    Strong analytical thinking, problem-solving ability, and independence in execution.

    Nice to Have
    SAP HANA DB experience: performance monitoring, error analysis, and optimization.
    Knowledge of Fiori app creation for transient queries.
    Experience integrating AI solutions into business analytics workflows.

  • 31 views · 6 applications · 9d

    Senior Data Engineer (ETL, ML Experience)

    Full Remote · Worldwide · 6 years of experience · C1 - Advanced

    Senior Data Engineer (ETL, ML Experience)
    Location: Remote (Europe preferred)
    Contract Type: B2B
    Experience: 7+ years as a Data Engineer
    English Level: C1 (Advanced)
    Compensation: Gross (to be specified)
    Holidays: 10 public holidays per year (vacation and sick days unpaid)

    About the Role
    We are seeking a Senior Data Engineer with strong experience in ETL pipeline design, data analytics, and exposure to machine learning workflows. You will play a key role in designing, developing, and maintaining scalable data solutions to support analytics, reporting, and ML-driven decision-making.

    You will work closely with data scientists, analysts, and software engineers to ensure data integrity, performance, and accessibility across the organization.

    Key Responsibilities

    • Design, build, and maintain ETL/ELT pipelines for large-scale data processing, including Elasticsearch.
    • Develop, optimize, and manage data models, data warehouses, and data lakes.
    • Collaborate with cross-functional teams to define data architecture, governance, and best practices.
    • Implement and maintain CI/CD workflows using AWS CodePipeline.
    • Work with Python and .NET for automation, data integration, and application-level data handling.
    • Support data-driven decision-making through analytics and reporting.
    • Troubleshoot and optimize database performance and data processing pipelines.
    • Implement data quality and validation frameworks to ensure reliable data flow.

    Required Skills & Experience

    • 7+ years of professional experience as a Data Engineer or similar role.
    • Strong expertise in ETL development and orchestration, including Elasticsearch.
    • Python β€” Expert level (data processing, automation, APIs, ML pipeline integration).
    • ETL Tools / Frameworks β€” Expert level (custom and/or AWS-native).
    • Data Analytics & Reporting β€” Expert level (data modeling, KPI dashboards, insights generation).
    • DBA experience β€” Experienced (database design, tuning, and maintenance).
    • AWS CodePipeline β€” Experienced (CI/CD for data workflows).
    • .NET β€” Experienced (integration, backend data logic).
    • Experience with data warehousing solutions (e.g., Redshift, Snowflake, BigQuery) is a plus.
    • Familiarity with machine learning data pipelines (feature engineering, data prep, model serving) is a plus.

    Nice to Have

    • Experience with Airflow, DBT, or other orchestration tools.
    • Familiarity with Terraform or AWS CloudFormation.
    • Exposure to ML Ops and productionizing ML models.
    • Knowledge of data governance, security, and compliance standards.
  • 31 views · 1 application · 9d

    Strong Middle/Senior Data Engineer

    Full Remote · Bulgaria, Romania · 4 years of experience · C1 - Advanced

    Role: Data Engineer

    Description: a company in the security market that develops analytics products for information security. The company's clients include government security organizations.

    Although the client operates in the security domain, it is able to use outsourcing services for some parts of its work.
     

    Data Engineer

    4+ years of experience as a Data Engineer
    3+ years of hands-on experience with Snowflake
    Deep hands-on experience with dbt (Core / Cloud)
    High proficiency in SQL
    Strong business understanding
    Experience with modern architectures, data modeling, and best practices

    Fluent English
    Can-do approach and strong problem-solving skills
    Ability to work well independently and as part of an agile team
    Degree in Computer Science, Software Engineering, or another relevant engineering field
    Experience with CI/CD procedures and best practices – a major advantage
     

    Main tasks:
    Design and develop Analytics solutions using Snowflake and dbt
    Design and develop Data Pipelines
    Collaborate with the Israeli data and analyst teams on projects and tasks.
     

    Start date: immediately (ASAP)
     

    Interview process:

    1. Interpersonal + professional interview
    2. Technical interview (up to 1 hour)
    3. Security check (due to the domain)
  • 29 views · 6 applications · 8d

    Data Engineer

    Full Remote · EU · 5 years of experience · C1 - Advanced

    Role Overview

    We are strengthening our Business Intelligence (BI) platform and want to hire an experienced Data Engineer to join our team and take ownership of scaling, optimizing, and innovating our BI platform. The right candidate will combine strong technical expertise with a passion for delivering reliable, high-quality data systems that empower analytics and decision-making across the company.

     

    Responsibilities

    • Design, build, and maintain scalable data pipelines and ETL/ELT processes.
    • Manage and optimize data warehouses to ensure accuracy, consistency, and performance.
    • Partner with data analysts, BI developers, and business stakeholders to enable actionable insights.
    • Develop high-quality, maintainable code in Go and Python for data processing and integration.
    • Ensure best practices in data governance, security, and compliance are upheld.
    • Integrate and support analytics platforms (e.g., Tableau) to deliver impactful dashboards and reporting.
    • Continuously evaluate and implement new tools/technologies to improve the BI platform.

     

    Qualifications

    • 5+ years of proven experience in data engineering or related fields in the Fintech domain.
    • Strong background in data warehouses, data modeling, and large-scale data processing.
    • Proficiency in Go and Python with a strong software engineering mindset.
    • Experience working with analytics platforms (ideally Tableau).
    • Solid understanding of database systems (SQL and NoSQL).
    • Knowledge of cloud platforms (AWS) is a strong plus.
    • Strong problem-solving skills and a collaborative, solution-oriented approach.


     

  • 45 views · 1 application · 8d

    Junior/Middle Data Engineer IRC280154

    Hybrid Remote · Ukraine (Lviv) · 2 years of experience · B2 - Upper Intermediate

    Description

    The Customer is one of the biggest companies in the market for home entertainment consumer electronics devices and strives to provide its clients with high-quality products and services.

    This position collaborates with a geographically diverse team to develop, deliver, and maintain systems for digital subscription and transactional products across the Customer’s SVOD portfolio.
     

    Requirements

    – 2+ years of Python development (middle level; pandas, building APIs)
    – Beginner-to-middle SQL skills
    – Experience building ETLs in Python
    – Experience with data tools (e.g., Airflow, Grafana, Redash, Plotly, AWS Glue, AWS Athena)
    – Other AWS experience
    – Advanced skills in Excel
    – Agile SDLC knowledge
    – Detail-oriented
    – Data-focused
    – Strong verbal/written communication, including an ability to effectively communicate with both business and technical teams
    – An ability and interest in working in a fast-paced and rapidly changing environment
    – Self-driven, with the ability to deliver on ambiguous projects with incomplete or dirty data

     

    Would be a plus:

    – Understanding of basic SVOD store purchase workflows

    – Background in supporting data scientists in conducting data analysis / modelling to support business decision making

    – Experience with Mixpanel, mParticle, Youbora, and similar systems
     

    Job responsibilities

    – Building Python APIs for data instrumentation
    – ETL buildouts for data reconciliation
    – Creation of automatically running audit tools
    – Interactive log auditing to look for potential data problems
    – Helping to troubleshoot customer support team cases
    – Troubleshooting and analyzing subscriber reporting issues:
          Answer management questions related to subscriber count trends
          Investigate app purchase workflow issues
          Audit/reconcile store subscriptions vs. the user DB

  • 114 views · 19 applications · 8d

    Data Engineer (Python)

    Full Remote · Countries of Europe or Ukraine · 1 year of experience · B1 - Intermediate

    We are looking for a motivated Data Engineer (Python) to join our data team. You will work on building and improving data ingestion and transformation pipelines, ensuring that data is reliable, high-quality, and ready for analytics and reporting.
    This is a great opportunity for someone who already has hands-on experience in data engineering and wants to grow in a professional environment with real production tasks.

    Responsibilities

    • Develop and maintain data ingestion and transformation pipelines.
    • Work with various data sources (APIs, databases, cloud storage).
    • Automate and schedule workflows using Apache Airflow.
    • Ensure data accuracy and reliability across all processes.
    • Collaborate with senior engineers and analysts to support business data needs.
    • Contribute to improving existing data processes and documentation.

       

    Requirements

    • 1+ year of experience as a Data Engineer or in a similar position.
    • Solid knowledge of Python (pandas, data processing, scripting).
    • Basic experience with Apache Airflow or other workflow orchestration tools.
    • Understanding of ETL/ELT processes and data transformation concepts.
    • Experience with SQL and relational databases (PostgreSQL, MySQL, etc.).
    • English level B1+ (ability to communicate in a professional environment).
    • Eagerness to learn and improve data engineering skills.

       

    Nice to Have

    • Experience with AWS, GCP, or Azure cloud platforms.
    • Familiarity with Docker or other containerization tools.
    • Basic knowledge of data warehousing concepts.

       

    We Offer

    • Competitive compensation according to your experience.
    • Remote work and flexible schedule.
    • Mentorship and professional development support.
    • Opportunity to work with modern tools and technologies.
    • Friendly and supportive team.
  • 49 views · 13 applications · 8d

    Senior/Lead Data Engineer

    Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate

    Binariks is looking for a highly motivated and skilled Data Engineer to join the team. 

    About the project: a platform in the healthcare domain that makes it possible to manage appointments, communication between doctors and patients, billing, insurance, etc.

    What We’re Looking For

    • 5+ years of experience as a Data Engineer
    • Highly proficient in Azure, with proven work experience
    • Technical expertise with data models, data mining, and segmentation techniques
    • Knowledge of programming languages (e.g. Java, Python, .NET)
    • Hands-on experience with Azure Database for PostgreSQL
    • Ability to diagnose and troubleshoot basic technical issues
    • Experience with Azure Data Factory (ETL), Azure DevOps, Azure Synapse, and Azure Blob Storage
    • Experience with Azure SQL Server and other databases, including graph and document databases in addition to relational, is required
    • Experience in documenting requirements and specifications
    • People-oriented and a team player
    • Excellent written and verbal communication skills with customers
    • Upper-intermediate level of spoken and written English
    • Degree in applied computer science 

    Your Responsibilities
     

    • Design, develop and maintain scalable and efficient data pipelines to collect, process and store data from various sources (e.g., databases, APIs, logs).
    • Implement data transformation processes to cleanse, enrich and transform raw data into usable formats for analytics and reporting.
    • Participate in data modeling, design, development, technical due diligence, setup
    • Report writing against SQL repositories
    • Report system design, data staging, report/query development, maintenance
    • Performance tuning of data repositories
    • Data and Database maintenance
    • Help to develop quality checks
    • Assist in the overall deployment and test strategies alongside QA to assist in environment setups, migrations, and overall testability
    • Respond to support tickets impacting the data or data systems
    • Maintain high-fidelity documentation and system diagrams to effectively communicate processes, entities, relationships, and interactions
    • Design and develop HIPAA-compliant interop as necessary based on the integration type

    Will be a plus
     

    • Experience with Azure PaaS products 
    • Experience with projects in healthcare domain
       

    We provide the following for our employees:

    • 18 working days of paid vacation
    • 10 working days of sick leave annually (5 days paid at 100% and 5 days at 75% rate of your average monthly salary)
    • 50% cost compensation for English courses
    • Flexible work schedule
    • Additional days off for special occasions, national holidays off
    • A competitive and rewarding salary based on performance appraisals/knowledge evaluation
    • Possibility to share and gain knowledge on regular tech talks
    • Friendly and professional team
    • Innovative projects with advanced technologies
    • Remote work
    • Accounting service
  • 34 views · 1 application · 8d

    Middle Big Data Engineer (#4225)

    Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

    We are seeking a proactive Middle Data Engineer to join our vibrant team. As a Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.

     

    Key Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide development.
    • Develop, implement, optimize and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

     

    Tools and skills you will use in this role:

    • Palantir Foundry
    • Python
    • PySpark
    • SQL
    • TypeScript

     

    Required:

    • 3+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills.

     

    Nice to have:

    • Certification in cloud platforms or related areas;
    • Experience with the Apache Lucene search engine and RESTful web service APIs;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

     

    We offer:

    • Flexible working format: remote, office-based, or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits
  • 40 views · 8 applications · 8d

    Senior Backend / Data Engineer (Python, FastAPI, Data Flows)

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    About Us

    ChatRevenue.ai was founded by Ratmir Timashev (founder of Veeam) and Vlad Voskresensky (founder of Revenue Grid) to create the next generation of CRM systems.

    We’re building an AI-native CRM platform that completely redefines how sales automation works.
    By structuring data for intelligent automation, we enable agentic AI to design workflows, assist Sales Leaders and Reps in real time, and generate predictable sales funnels – all through a simple, chat-driven interface.

    Our mission: to build a CRM that works for you, not the other way around.

     

    About the Role

    We’re looking for a Senior Backend / Data Engineer who thrives in dynamic environments, enjoys solving complex data challenges, and wants to shape a product from its early stages.

    At ChatRevenue, we focus on goals, not ceremonies – no endless grooming sessions or heavy planning rituals.
    You’ll have a direct influence on architecture, product decisions, and data design. Decisions are made quickly through open discussion and collaboration, without corporate bureaucracy.

     

    What You’ll Do

    • Develop and maintain backend services using Python / FastAPI
    • Design, optimize, and maintain data flows, ETL/ELT pipelines, and integrations between internal and external systems
    • Work with structured and unstructured data, designing models and queries in PostgreSQL and Pandas
    • Build integrations with CRM and communication platforms (Salesforce, Google Meet, etc.)
    • Implement AI-driven features into backend logic and APIs
    • Ensure DevOps best practices (testing, CI/CD, security, Docker/Kubernetes)
    • Collaborate closely with product, data, and frontend engineers to deliver business value fast

     

    What We’re Looking For

    • Strong Python experience, ideally with FastAPI and async programming, and a proven capacity to write production-ready code with speed and precision
    • Experience with SQL (PostgreSQL) and data processing frameworks (Pandas, ETL/ELT)
    • Hands-on experience with Docker, Kubernetes, and Azure Cloud
    • Understanding of data architecture, performance optimization, and scalability
    • Experience with CI/CD pipelines, Git, and code review culture
    • Excellent teamwork skills – constructive feedback, collaboration, and ownership mindset

     

    Nice to have

    • Experience with OpenSearch, Kafka, or Azure Data Services
    • Exposure to DevSecOps and advanced debugging
    • Knowledge of Generative AI integration into backend systems

     

    Why Join Us

    • Remote-first culture – work from anywhere
    • Flexible schedule aligned with European time zone
    • Direct impact – your work is visible in the product from day one
    • Professional growth – as we scale, you can own a technical domain or lead a module
    • Core team – become one of the people who know the product best and grow together with it
    • Competitive compensation

     

    If you want to build an AI-native CRM from the ground up and be part of the core team shaping the product, we’d love to hear your story.

    Apply or reach out directly – let’s build something truly transformative together.
