Jobs

166
  • 54 views · 5 applications · 5d

    Senior Data Engineer

    Full Remote · Ukraine, Poland, Romania, Croatia · 5 years of experience · B2 - Upper Intermediate

    Description

    Our customer (originally the Minnesota Mining and Manufacturing Company) is an American multinational conglomerate operating in the fields of industry, worker safety, and consumer goods. Based in the Saint Paul suburb of Maplewood, the company produces over 60,000 products, including adhesives, abrasives, laminates, passive fire protection, personal protective equipment, window films, paint protection film, electrical and electronic connecting and insulating materials, car-care products, electronic circuits, and optical films.

     

    Requirements

    We are looking for a highly skilled and experienced Senior Data Engineer to join our team. In this role, you will be a key player in designing, building, and optimizing our data architecture and pipelines. You will be working on a complex data project, transforming raw data into reliable, high-quality assets ready for analytics, data science, and business intelligence. As a senior member of the team, you will also be expected to mentor junior and mid-level engineers, drive technical best practices, and contribute to the strategic direction of our data platform.

     

    Required Qualifications & Skills

    • 5+ years of professional experience in data engineering or a related role.
    • A minimum of 3 years of deep, hands-on experience using Python for data processing, automation, and building data pipelines.
    • A minimum of 3 years of strong, hands-on experience with advanced SQL for complex querying, data manipulation, and performance tuning.
    • Proven experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
    • Hands-on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
    • Solid experience working with large, complex data environments, including data processing, data integration, and data warehousing.
    • Proficiency in data quality assessment and improvement techniques.
    • Experience working with and cleansing a variety of data formats, including unstructured and semi-structured data (e.g., CSV, JSON, Parquet, XML).
    • Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
    • Excellent problem-solving skills and the ability to communicate complex technical concepts effectively to both technical and non-technical audiences.

    Preferred Qualifications & Skills

    • Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
    • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
    • Experience with consuming data from REST APIs.
    • Experience with database design, optimization, and performance tuning for software application backends.
    • Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
    • Familiarity with modern data architecture concepts such as Data Mesh.
    • Real-world experience supporting and troubleshooting critical, end-to-end production data pipelines.

     

    Job responsibilities

    Key Responsibilities

    • Architect & Build Data Pipelines: Design, develop, and maintain robust, scalable, and reliable data pipelines using Python, SQL, and Spark on the Azure cloud platform.
    • End-to-End Data Solutions: Architect and implement end-to-end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
    • Cloud Data Services Management: Utilize Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build, orchestrate, and manage complex data workflows.
    • Data Quality & Governance: Implement and enforce comprehensive data quality frameworks, including data profiling, cleansing, and validation routines to ensure the highest levels of data integrity and trust.
    • Performance Optimization: Analyze and optimize data pipelines for performance, scalability, and cost-efficiency, ensuring our systems can handle growing data volumes.
    • Mentorship & Best Practices: Mentor and provide technical guidance to junior and mid-level data engineers. Lead code reviews and champion best practices in data engineering, coding standards, and data modeling.
    • Stakeholder Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements, provide technical solutions, and deliver actionable data products.
    • System Maintenance: Support and troubleshoot production data pipelines, identify root causes of issues, and implement effective, long-term solutions.
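    For illustration only (not part of the posting), a minimal sketch of the kind of data-quality gate such pipelines often include, assuming PySpark on Databricks; the table names, columns, and rules are hypothetical:

```python
# Illustrative sketch only: a simple data-quality gate in PySpark on Databricks.
# The table names, columns, and rules below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.table("raw.sales_orders")  # hypothetical source table

# Profile basic quality metrics: null keys and duplicate order IDs.
metrics = raw.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("order_id").isNull().cast("int")).alias("null_order_ids"),
    (F.count("*") - F.countDistinct("order_id")).alias("duplicate_order_ids"),
).first()

if metrics["null_order_ids"] > 0 or metrics["duplicate_order_ids"] > 0:
    raise ValueError(f"Data-quality check failed: {metrics.asDict()}")

# Only clean rows are promoted to the curated layer.
raw.dropDuplicates(["order_id"]).write.mode("overwrite").saveAsTable("curated.sales_orders")
```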
  • 20 views · 1 application · 6d

    Middle/Senior/Lead Python Cloud Engineer (IRC280058)

    Hybrid Remote · Ukraine (Vinnytsia, Zhytomyr, Ivano-Frankivsk + 8 more cities) · 5 years of experience · B2 - Upper Intermediate

    Job Description

    • Terraform

    • AWS Platform: Working experience with AWS services, in particular serverless architectures (S3, RDS, Lambda, IAM, API Gateway, etc.) supporting API development in a microservices architecture

    • Programming Languages: Python (strong programming skills)

    • Data Formats: Experience with JSON, XML, and other relevant data formats

    • CI/CD Tools: Experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools

    • Scripting and Automation: Experience in scripting languages such as Python, PowerShell, etc.

    • Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.

    • Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)

     

     

    NICE TO HAVE

    • Strongly preferred: Infrastructure as Code (IaC) with Terraform and CloudFormation, including proven ability to write and manage IaC
    • Documentation: Experience with Markdown and, in particular, Antora for creating technical documentation
    • Experience working with healthcare data, including HL7v2, FHIR, and DICOM
    • FHIR and/or HL7 certifications
    • Building software classified as Software as a Medical Device (SaMD)
    • Understanding of EHR technologies such as EPIC, Cerner, etc.
    • Experience in implementing enterprise-grade cyber security & privacy by design into software products
    • Experience working in Digital Health software
    • Experience developing global applications
    • Strong understanding of SDLC, including Waterfall & Agile methodologies
    • Software estimation
    • Experience leading software development teams onshore and offshore

    Job Responsibilities

    • Develop, document, and configure system specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.
    • Participate in planning of system and development deployments, and be responsible for meeting compliance and security standards.
    • Develop APIs using AWS services.
    • Implement Infrastructure as Code (IaC).
    • Actively identify system functionality or performance deficiencies, execute changes to existing systems, and test system functionality to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
    • May document testing and maintenance of system updates, modifications, and configurations.
    • May act as a liaison with key technology vendor technologists or other business functions.
    • Function specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
    • Leverage platform process expertise to assess whether existing standard platform functionality will solve a business problem or whether a customized solution is required.
    • Test the quality of a product and its ability to perform a task or solve a problem.
    • Perform basic maintenance and performance optimization procedures in each of the primary operating systems.
    • Document detailed technical system specifications based on business system requirements.
    • Ensure system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.).
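    Purely as an illustration of the serverless API work described above, a minimal sketch of a Python Lambda handler behind an API Gateway proxy integration; the bucket, key layout, and route parameter are hypothetical:

```python
# Minimal sketch of an AWS Lambda handler behind API Gateway (proxy integration).
# Bucket and key names are hypothetical; error handling is intentionally minimal.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # API Gateway (proxy) passes path/query parameters in the event payload.
    record_id = (event.get("pathParameters") or {}).get("id")
    if not record_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    obj = s3.get_object(Bucket="example-records-bucket", Key=f"records/{record_id}.json")
    record = json.loads(obj["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(record),
    }
```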

     

    Department/Project Description

    The Digital Health organization is a technology team that focuses on next-generation Digital Health capabilities, which deliver on the Medicine mission and vision to deliver Insight Driven Care. This role will operate within the Digital Health Applications & Interoperability subgroup of the broader Digital Health team, focused on patient engagement, care coordination, AI, healthcare analytics & interoperability amongst other advanced technologies which enhance our product portfolio with new services, while improving clinical & patient experiences.

     

    Authorization and Authentication platform & services for Digital Health

     

    Secure cloud platform for storing and managing medical images (DICOM compliant). Leverages AWS for cost-effective storage and access, integrates with existing systems (EHR, PACS), and offers a customizable user interface.

  • 26 views · 1 application · 20d

    Data Modeler with expertise in Snowflake and SQLdbm

    Hybrid Remote · Ukraine (Dnipro, Ivano-Frankivsk, Kyiv + 4 more cities) · 4 years of experience · B1 - Intermediate

    Client

     

    Our client revolutionizes the retail direct store delivery model by addressing key challenges like communication gaps, out-of-stocks, invoicing errors, and price inconsistencies to boost sales, profits, and customer loyalty with innovative technology and partnerships.

     

     

    Position overview

     

    We are seeking an experienced Data Modeler to lead logical and physical data modeling efforts across Raw, Conformed, and CDM layers within a medallion architecture. The ideal candidate will design dimensional (star/snowflake) and 3NF schemas where appropriate, implement and enforce modeling standards (naming conventions, data types, SCD strategies), and maintain thorough documentation using SqlDBM or equivalent tools.

     

     

    Responsibilities

    • Lead logical and physical data modeling across Raw, Conformed, and CDM layers following medallion architecture
    • Design dimensional (star/snowflake) and 3NF schemas where appropriate
    • Implement and enforce modeling standards, including naming conventions, data types, and SCD strategies
    • Document models and processes using SqlDBM or equivalent tools
    • Translate business requirements into canonical data models and source-to-target mappings
    • Collaborate closely with engineers and analysts to ensure alignment and accuracy
    • Design and optimize Snowflake schemas focusing on DDL, micro-partitions, clustering, RBAC, and cost-efficient patterns
    • Manage Postgres schema design, perform tuning, and optimize queries
    • Review and refactor PL/pgSQL code (bonus)
    • Maintain strong data governance practices including version control, peer reviews, and data lineage tracking
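    As a purely illustrative sketch of the modeling standards mentioned above (surrogate keys, SCD handling, clustering), here is one way a Type-2 dimension could be created in Snowflake from Python; the connection parameters, schema, and columns are hypothetical:

```python
# Illustrative sketch: creating a Type-2 slowly changing dimension in Snowflake
# from Python. Connection parameters, schema, and column names are hypothetical.
import snowflake.connector

DDL = """
CREATE TABLE IF NOT EXISTS cdm.dim_customer (
    customer_sk      NUMBER AUTOINCREMENT PRIMARY KEY,  -- surrogate key
    customer_id      VARCHAR NOT NULL,                  -- natural/business key
    customer_name    VARCHAR,
    segment          VARCHAR,
    effective_from   TIMESTAMP_NTZ NOT NULL,
    effective_to     TIMESTAMP_NTZ,                     -- NULL = current version
    is_current       BOOLEAN DEFAULT TRUE
)
CLUSTER BY (customer_id, effective_from);
"""

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="transform_wh", database="analytics", schema="cdm",
)
try:
    conn.cursor().execute(DDL)
finally:
    conn.close()
```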

     

    Requirements

    • Proven expertise in data modeling across multiple layers of medallion architecture
    • Strong understanding of data modeling concepts (OLTP, Star Schema, Medallion architecture)
    • Strong experience with dimensional modeling (star/snowflake) and 3NF design
    • Skilled in Snowflake schema design, performance tuning, and security best practices
    • Experience with Postgres schema design and query optimization
    • Familiarity with data governance standards and tools
    • Proficiency with SqlDBM/ERwin or equivalent modeling/documentation platforms
  • 24 views · 0 applications · 20d

    Senior Data Engineer

    Hybrid Remote · Ukraine (Dnipro, Kyiv, Lviv + 2 more cities) · 7 years of experience · B2 - Upper Intermediate

    Client

     

    Our client is a hedge fund sponsor that mainly manages pooled investment vehicles and typically invests in fixed income, private equity, rates, credit, and foreign exchange. The company operates offices in London, New York, and Hong Kong.

     

    Join a great company, not merely an individual project

    Position overview

     

    We are seeking an experienced Senior Data Engineer with 7+ years in asset management or financial services to join our team. The ideal candidate will have expertise handling diverse datasets via batch files, APIs, and streaming from both internal and external sources.

     

    Responsibilities

    • Onboard new datasets and develop data models using Snowflake and DBT
    • Build and maintain data transformation pipelines
    • Design and manage data orchestration and ETL workflows with Azure Data Factory
    • Optimize queries and apply data warehousing best practices for large and complex datasets
    • Collaborate with development teams using agile methodologies, DevOps, Git, and CI/CD pipelines
    • Support cloud-based services, especially Azure Functions, KeyVault, and LogicApps
    • Optionally develop APIs to serve data to internal or external stakeholders
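    For illustration only, a minimal sketch of how a dbt transformation step might be invoked from a pipeline, assuming the dbt CLI is installed; the project path, target, and tag selector are hypothetical:

```python
# Illustrative sketch: running a tagged subset of dbt models and tests as one
# pipeline step. The project path, target, and tag are hypothetical.
import subprocess

def run_dbt(selector: str, target: str = "prod") -> None:
    # "dbt build" runs models, tests, seeds, and snapshots for the selection.
    cmd = ["dbt", "build", "--select", selector, "--target", target]
    subprocess.run(cmd, check=True, cwd="/opt/analytics/dbt_project")

if __name__ == "__main__":
    run_dbt("tag:daily")
```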
       

    Requirements

    • 7+ years as a Data Engineer in asset management or financial services
    • Expertise in Snowflake, DBT, and data pipeline orchestration tools (Azure Data Factory)
    • Strong knowledge of SQL, Python, data modeling, and warehousing principles
    • Familiarity with DevOps practices including CI/CD and version control (Git)
    • Experience with Azure cloud services
       

    Nice to have

    • Industry knowledge of Security Master, IBOR, and Portfolio Management
  • 22 views · 1 application · 20d

    Senior Data Engineer (Databricks)

    Hybrid Remote · Portugal · 5 years of experience · B2 - Upper Intermediate

    We’re looking for a Senior Data Engineer in Portugal to join a long-term project with an immediate start (candidates from other EU countries are welcome to apply, as long as you have all the required skills).
     

    About the project

    You’ll be working on a large-scale data migration from Microsoft SQL Server to Databricks, focusing on data structure optimization, performance tuning, and implementing best practices in data engineering.
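    Purely as an illustration of this kind of migration work, a minimal sketch of copying one SQL Server table into a Delta table with Spark JDBC on Databricks; the host, credentials, table, and partitioning column are hypothetical:

```python
# Illustrative sketch only: copying one SQL Server table into a Delta table on
# Databricks via JDBC. Host, database, table, and credentials are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

jdbc_url = "jdbc:sqlserver://sql-host:1433;databaseName=sales;encrypt=true"

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Land the data as a managed Delta table; the partitioning choice is illustrative.
(
    df.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("bronze.orders")
)
```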

     

    Requirements

    • Databricks certification (mandatory)
    • Strong hands-on experience with Apache Spark
    • Excellent SQL skills (Microsoft SQL Server)
    • Proven experience in data migration and transformation
    • Good communication and problem-solving skills

     

    Details

    • Type: Outstaff (long-term)
    • Start date: Immediate
    • Location: Portugal (office visit once per week) or any other EU country, provided you have all the necessary skills.
  • 11 views · 1 application · 19d

    Salesforce Consumer Goods Cloud (CGC)

    Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate

    1. Job Description: Salesforce Consumer Goods Cloud (CGC) Subject Matter Expert (SME)

    About the Role

    We are seeking a highly skilled Salesforce Consumer Goods Cloud (CGC) Subject Matter Expert (SME) to serve as the key consultant for our FMCG/CPG clients. Your mission is to ensure that CGC implementations perfectly align with the client’s best business practices in retail and distribution. You will act as the bridge between complex business processes (e.g., Retail Execution, Trade Promotion Management) and standard (out-of-the-box) Salesforce functionality.

    Key Responsibilities

    • Conduct a Business Process Audit to identify misalignments between the client’s current crippled processes and native CGC capabilities.
    • Consult clients on CGC best practices for Retail Execution, Trade Promotion Management (TPM), Order Management, and Direct Store Delivery (DSD).
    • Develop "De-Customization" strategies to replace complex, inefficient custom logic with standard Salesforce features.
    • Collaborate with Solution Architects and Developers to ensure the technical design aligns with business requirements and the CGC data model.
    • Participate in the Discovery and Gap Analysis phases, providing clear, prioritized recommendations to restore value to the implementation.
    • Support sales efforts and develop Statements of Work (SOW) for Phase 2 (Remediation Project).

    Requirements

    • Minimum 3+ years of experience working with Salesforce Consumer Goods Cloud (or deep experience in the FMCG/CPG segment with Salesforce).
    • Profound understanding of core CGC features: Visit Management, Retail Execution, Pricing & Promotions, Store/Route Planning.
    • Possession of Salesforce certifications, specifically Salesforce Certified Consumer Goods Cloud Accredited Professional or Salesforce Certified Sales Cloud Consultant (preferred).
    • Excellent communication and presentation skills for effective engagement with client executives.
    • Ability to translate complex business problems into clear, actionable CGC-based solutions.

     

     

    What We Offer (Benefits)

    1. Competitive Salary: Attractive, competitive salary and bonus structure commensurate with your experience and contribution.
    2. Professional and Supportive Team: Join a team of highly skilled Salesforce experts focused on shared success and continuous improvement.
    3. Flexibility and Remote Work: Opportunity to work fully remotely or with a flexible hybrid schedule, allowing you to balance work and personal life effectively.
  • 265 views · 58 applications · 19d

    Data Engineer (Junior/Middle)

    Full Remote · Worldwide · Product · 1 year of experience

    We operate an integrated sushi-restaurant business and require a Data Engineer to: design and implement a centralised, well-governed data warehouse; develop and automate data pipelines that support critical reporting, including multi-platform customer-order analytics, marketing performance metrics, executive dashboards, and other business-essential analyses; and collaborate on internal machine-learning projects by providing reliable, production-ready data assets.

     

     

    Our requirements:

     

    1. Professional experience (1–3 years) in data engineering, with demonstrable ownership of end-to-end ETL/ELT pipelines in production.
    2. Strong SQL and Python proficiency, including performance tuning, modular code design, and automated testing of data transformations.
    3. Hands-on expertise with modern data-stack components (e.g., Airflow, dbt, Spark, or comparable orchestration and processing frameworks).
    4. Cloud-native skills on AWS or Azure, covering at least two services from Glue, Athena, Lambda, Databricks, Data Factory, or Snowflake, plus cost- and performance-optimization best practices.
    5. Solid understanding of dimensional modelling, data-quality governance, and documentation standards, ensuring reliable, audited data assets for analytics and machine-learning use cases.

     

     

    Your responsibilities:

     

    • Designing, developing, and maintaining scalable data pipelines and ETL.
    • Optimizing data processing workflows for performance, reliability, and cost-efficiency.
    • Ensuring compliance with data quality standards and implementing governance best practices.
    • Driving and supporting the migration of on-premise data products to the data warehouse.
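    For illustration only, a minimal sketch of a daily Airflow DAG of the kind implied by the responsibilities above; the DAG id, tasks, and callables are hypothetical placeholders:

```python
# Illustrative sketch: a minimal Airflow DAG for a daily ETL step.
# DAG id, task names, and the extract/load functions are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**_):
    print("extract multi-platform customer orders")  # placeholder

def load_warehouse(**_):
    print("load curated tables into the warehouse")  # placeholder

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
    extract >> load
```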
  • 48 views · 2 applications · 16d

    Senior Data Engineer to $6000

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · B2 - Upper Intermediate

    What is the purpose of this position?

     

    We are looking for an enthusiastic Senior Data Engineer with 6 years of commercial experience to contribute to the Python services that power our data platform. Working alongside other backend and data engineers, you'll make sure data flows smoothly from external sources into our internal data platform.

     

     

    Qualifications you'll need to bring:
     

    • 6+ years experience with Python for data-intensive applications
    • Proficiency in modern Python (3.10+) with asyncio, typing, dependency injection, and packaging best practices
    • Good experience with ETL (Kafka, RabbitMQ, or similar tools)
    • Excellent knowledge of Postgres (Alembic, Django ORM)
    • Hands-on experience with Airflow, Dagster, or Celery for scheduling and monitoring jobs
    • Working experience with REST/GraphQL APIs
    • Familiarity with Pytest, Docker, Kubernetes, Terraform, and GitHub Actions
    • Master's degree in Computer Science, Software Engineering, or a related field
    • Upper-Intermediate English level 

     

    Nice-to-have:

     

    • Experience with TimescaleDB, ClickHouse, InfluxDB
    • Knowledge of financial market-data protocols (FIX, FAST) or regulatory feeds


    In this role, you will be in charge of:
     

    • Implement, test, and improve scalable ETL workflows that handle large volumes of data
    • Develop and maintain task queues with Async tools
    • Build REST endpoints with FastAPI Framework
    • Add structured logs, traces, and metrics
    • Write type-safe, well-tested code, participate in code reviews, and help maintain CI/CD workflows
    • Work closely with data scientists and product managers to translate data requirements into robust backend capabilities
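    Purely as an illustration of the FastAPI work described above, a minimal sketch of a typed async endpoint; the route, model, and in-memory store are hypothetical stand-ins:

```python
# Illustrative sketch: a typed, async FastAPI endpoint of the kind described above.
# The path, response model, and in-memory "repository" are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Instrument(BaseModel):
    symbol: str
    last_price: float

# Hypothetical in-memory store standing in for Postgres access.
_DB = {"AAPL": Instrument(symbol="AAPL", last_price=123.45)}

@app.get("/instruments/{symbol}", response_model=Instrument)
async def get_instrument(symbol: str) -> Instrument:
    instrument = _DB.get(symbol.upper())
    if instrument is None:
        raise HTTPException(status_code=404, detail="instrument not found")
    return instrument
```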

     

    We offer:

    • Remote Work Environment. Work from anywhere and be part of a geographically diverse team
    • Stay Ahead of the Curve. Work with cutting-edge technologies
    • Possibility of payment by PE account, Payoneer, or Wise
    • Bonus & Referral system
    • 16 paid vacation days and 10 paid sick days per year
    • Flexible hours 
    • Compensation program for purchasing new laptops
    • Paid professional certifications & educational courses
    • Paid English classes
    • Friendly atmosphere with quarterly team building
    • No bureaucracy

     

    We respect your time, so the hiring process takes no more than 5 days. There are three interview steps: an HR interview, a technical interview, and an interview with the client.

    Ready to try your hand? Don't delay; send us your CV!

  • 18 views · 0 applications · 16d

    Palantir Foundry Engineer

    Full Remote · Ukraine · 10 years of experience · C1 - Advanced

    Project description

    We are seeking a Palantir Foundry & AIP Engineer with hands-on experience across the full Foundry ecosystem and Palantir's Artificial Intelligence Platform (AIP). This role goes beyond data engineering: you will design, build, and operationalize AI-powered workflows, agents, and applications that drive tangible business outcomes. The ideal candidate is a self-starter, able to translate complex business needs into scalable technical solutions, and confident working directly with stakeholders to maximize the value of Foundry and AIP.

    Responsibilities

    Data & Workflow Engineering: Design, develop, and maintain scalable pipelines, transformations, and applications within Palantir Foundry.

    AIP & AI Enablement:

    Support the design and deployment of AIP use cases such as copilots, retrieval workflows, and decision-support agents.

    Ground agents and logic flows using RAG (retrieval-augmented generation) by connecting them to relevant data sources, embedding/vector search, and ontology content.

    Use Ontology-Augmented Generation (OAG) when needed for operational decision-making, where logic, data, actions, and relationships are embedded in the Ontology.

    Collaborate with senior engineers on agent design, instructions, and evaluation using AIP's native features.

    End-to-End Delivery: Work with stakeholders to capture requirements, design solutions, and deliver working applications.

    User Engagement: Provide training and support for business teams adopting Foundry and AIP.

    Governance & Trust: Ensure solutions meet standards for data quality, governance, and responsible use of AI.

    Continuous Improvement: Identify opportunities to expand AIP adoption and improve workflow automation.
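    For illustration only, and assuming the standard transforms.api available in Foundry code repositories, a minimal sketch of a Foundry Python (PySpark) transform that prepares data a retrieval workflow might later index; dataset paths and columns are hypothetical:

```python
# Illustrative sketch only: a simple Foundry Python transform (PySpark), of the
# kind used to prepare pipeline outputs for downstream AIP/ontology use.
# Dataset paths and column names are hypothetical.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/Project/clean/support_tickets"),
    raw=Input("/Project/raw/support_tickets"),
)
def clean_support_tickets(raw):
    # Keep only well-formed rows and normalise the free-text field that a
    # retrieval/RAG workflow would later index.
    return (
        raw.filter(F.col("ticket_id").isNotNull())
        .withColumn("description", F.trim(F.lower(F.col("description"))))
        .dropDuplicates(["ticket_id"])
    )
```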

    Skills

    Must have

    Required Qualifications:

    10+ years of overall experience as a Data and AI Engineer;

    2+ years of professional experience with the Palantir Foundry ecosystem (data integration, ontology, pipelines, applications).

    Strong technical skills in Python, PySpark, SQL, and data modelling.

    Practical experience using or supporting AIP features such as RAG workflows, copilots, or agent-based applications.

    Ability to work independently and engage directly with non-technical business users.

    Strong problem-solving mindset and ownership of delivery.

    Preferred Qualifications:

    Familiarity with AIP Agent Studio concepts (agents, instructions, tools, testing).

    Exposure to AIP Evals and evaluation/test-driven approaches.

    Experience with integration patterns (APIs, MCP, cloud services).

    Consulting or applied AI/ML background.

    Experience in Abu Dhabi or the broader MENA region.

  • 30 views · 1 application · 16d

    Data Engineer

    Hybrid Remote · Poland · 4 years of experience · B2 - Upper Intermediate

    At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
     

    Hybrid work: 2-3 days from office
    Location: Warsaw


    Key Responsibilities:

    • Design, develop, and maintain ETL/ELT pipelines using Apache Airflow and AWS Glue.
    • Build robust and scalable data architectures leveraging AWS services such as S3, Lambda, CloudWatch, Kinesis, and Redshift.
    • Integrate real-time and batch data pipelines using Kafka and AWS streaming solutions.
    • Ensure data quality, reliability, and performance through effective monitoring, debugging, and optimization.
    • Collaborate with cross-functional teams to understand data requirements and deliver efficient solutions.
    • Manage version control and collaborative workflows using Git.
    • Implement infrastructure as code solutions with Terraform and Ansible to automate deployments.
    • Establish CI/CD pipelines to streamline testing, deployment, and versioning processes.
    • Document data models, workflows, and architecture to support transparency and scalability.
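    Purely as an illustration of the Glue-based pipeline work above, a minimal sketch of an AWS Glue (PySpark) job skeleton that reads raw JSON from S3 and writes partitioned Parquet; the job arguments and paths are hypothetical:

```python
# Illustrative sketch: the skeleton of an AWS Glue (PySpark) job that reads raw
# JSON from S3 and writes partitioned Parquet. Paths and arguments are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

events = spark.read.json(args["source_path"])          # e.g. an s3:// landing prefix
events.write.mode("append").partitionBy("event_date").parquet(args["target_path"])

job.commit()
```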

       

      Key Requirements:

    • Core Languages:
      Proficient in SQL, Python, and PySpark.
    • Framework Knowledge:
      Experience with Apache Airflow, AWS Glue, Kafka, and Redshift.
    • Cloud & DevOps:
      Strong hands-on experience with the AWS stack (Lambda, S3, CloudWatch, SNS/SQS, Kinesis).
    • Infrastructure Automation:
      Practical experience with Terraform and Ansible.
    • Version Control & CI/CD:
      Skilled in Git and familiar with continuous integration and delivery pipelines.
    • Debugging & Monitoring:
      Proven ability to maintain, monitor, and optimize ETL pipelines for performance and reliability.

       

      Qualifications:

    • Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Engineering, or a related field.
    • 4+ years of hands-on experience as a Data Engineer.
    • Strong understanding of data modeling, warehousing, and distributed systems.
    • Excellent English communication skills, both written and verbal.
       
  • 39 views · 0 applications · 1d

    Senior Data Engineer

    Ukraine · 5 years of experience · B2 - Upper Intermediate

    We are looking for a Senior Data Engineer to join our client’s internal IT organization. The team is lean and highly collaborative, working closely with a BI Developer (Qlik Sense, Tableau) and a Client Lead, while partnering with an external Israeli services provider. The role is hands-on, with a strong focus on Snowflake data models and AWS Glue pipelines, supporting mission-critical reporting and analytics.


    You will maintain and enhance the existing data landscape, ensure smooth ETL/ELT operations, and support the client’s evolving analytics initiatives. Future responsibilities may include designing new datasets and supporting migration toward Power BI.

     

    Responsibilities

    Data Modeling & Maintenance

    • Enhance and optimize existing Snowflake data models.
    • Ensure data structures support reporting needs (Qlik Sense, Tableau, and future Power BI).

    Pipeline Development & Operations

    • Design, implement, and maintain ETL/ELT pipelines using AWS Glue and other AWS services.
    • Manage ingestion from diverse sources (ERP, CRM/Salesforce, transactional DB replication, IoT/mobile app logs, Google Analytics, JSON/Parquet files).
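    For illustration only, a minimal sketch of triggering and monitoring an AWS Glue job run from Python with boto3, as a lightweight orchestration step; the job name and polling interval are hypothetical:

```python
# Illustrative sketch: starting an AWS Glue job run from Python and waiting for it.
# The job name and poll interval are hypothetical.
import time
import boto3

glue = boto3.client("glue")

def run_glue_job(job_name: str, poll_seconds: int = 30) -> str:
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            if state != "SUCCEEDED":
                raise RuntimeError(f"Glue job {job_name} ended in state {state}")
            return run_id
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_glue_job("ingest_salesforce_accounts")  # hypothetical job name
```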

     

    Collaboration & Support

    • Work closely with the BI Developer and Client Lead to translate requirements into scalable data solutions.
    • Provide timely support for production issues, ensuring SLA-critical systems remain stable.
    • Collaborate with external service partners when necessary.

     

    Future Initiatives

    • Contribute to new data design projects and upcoming Power BI migration.
    • Participate in requirements discussions, documentation, and solution design.

     

    Requirements

    Must-Have Skills:

    • 5+ years of experience as a Data Engineer or similar role.
    • Strong expertise with Snowflake (data modeling, performance tuning, optimization).
    • Hands-on experience with AWS Glue (ETL/ELT pipeline development).
    • Proficiency in SQL and at least one programming language (Python preferred).
    • Solid understanding of data integration from multiple sources (ERP, CRM, APIs, flat files, etc.).
    • Experience troubleshooting and maintaining production pipelines.

     

    Nice-to-Have Skills:

    • Familiarity with Tableau, Qlik Sense, or Power BI.
    • Exposure to Microsoft SQL Server and cross-cloud data flows (e.g., GCP-hosted systems).
    • Knowledge of AWS ecosystem (S3, Lambda, Step Functions, IAM).
    • Previous work with IoT or web/mobile analytics data.

     

    Working Setup:

    • Flexible working hours aligned with European time zones.
    • No DevOps responsibilities (dedicated infra team in place).
    • Support coverage: Must respond promptly to production issues as needed.

     

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 19 views · 0 applications · 16d

    Data Solution Architect

    Full Remote · Ukraine · 6 years of experience · C1 - Advanced

    Akvelon is a well-known US company with offices in Seattle, Mexico, Ukraine, Poland, and Serbia. Our company is an official vendor of Microsoft and Google. Our clients also include Amazon, Evernote, Intel, HP, Reddit, Pinterest, AT&T, T-Mobile, Starbucks, and LinkedIn. Working with Akvelon means being connected with the best and brightest engineering teams from around the globe and working with a modern technology stack, building Enterprise, CRM, LOB, Cloud, AI and Machine Learning, Cross-Platform, Mobile, and other types of applications customized to each client's needs and processes.

     

    We are hiring a Data Solution Architect for a U.S.-based technology-driven marketing and customer acquisition company.

     

    Requirements:

    • Strong experience with Google Cloud Platform (GCP), including BigQuery, DataFlow, and Cloud Storage.
    • Hands-on expertise in ETL processes using Talend or similar ETL tools.
    • Solid understanding of AWS services, particularly S3 and general cloud architecture best practices.
    • Experience designing and implementing scalable data pipelines and data integration solutions.
    • Knowledge of data modeling, warehousing, and analytics in cloud environments.
    • Ability to architect end-to-end data solutions and collaborate with cross-functional teams.
    • Strong problem-solving skills and experience with cloud migration or hybrid cloud setups.
    • Familiarity with data governance, security, and compliance best practices.
    • Excellent communication skills and ability to work with international clients, preferably in the U.S. market.

     

    Responsibilities:

    • Analyze the current data model and queries in BigQuery to identify bottlenecks, inefficiencies, and optimization opportunities.
    • Review and evaluate existing data pipelines to ensure scalability, reliability, and maintainability.
    • Plan and design data lake migrations from AWS to Google Cloud, ensuring minimal disruption and optimal architecture.
    • Create a comprehensive list of recommended improvements to enhance the performance, reliability, and scalability of the data platform.
    • Provide estimates and feasibility analysis for all suggested improvements, including effort, cost, and impact.
    • Collaborate with cross-functional teams to define implementation strategies and best practices for data platform optimization.
    • Ensure adherence to data governance, security, and compliance standards throughout all data solutions.
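    Purely as an illustration of the BigQuery bottleneck analysis described above, a minimal sketch that lists the most expensive recent queries via INFORMATION_SCHEMA, assuming the google-cloud-bigquery client; the project and region are hypothetical:

```python
# Illustrative sketch: finding the most expensive recent BigQuery queries as a
# starting point for bottleneck analysis. Project and region are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")

sql = """
SELECT
  user_email,
  query,
  total_bytes_processed,
  total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
ORDER BY total_bytes_processed DESC
LIMIT 20
"""

for row in client.query(sql).result():
    print(row.total_bytes_processed, row.user_email, row.query[:80])
```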

       

    Overlap time requirement: up to 12 PM PST

     

    Working conditions and benefits:

    • Paid vacation and sick leave (no sick-leave certificate required)
    • Official state holidays: 11 days considered public holidays
    • Professional growth while attending challenging projects and the possibility to switch your role, master new technologies and skills with company support
    • Flexible working schedule: 8 hours per day, 40 hours per week
    • Personal Career Development Plan (CDP)
    • Employee support program (Discount, Care, Health, Legal compensation)
    • Paid external training, conferences, and professional certification that meets the company’s business goals
    • Internal workshops & seminars

    Grow your expertise and make an impact with our team! 🚀

  • 18 views · 1 application · 15d

    Senior Data Engineer (Python)

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Job Description

    ● Strong Python skills

    ● Strong working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases

    ● A successful history of manipulating, processing and extracting value from large disconnected datasets

    ● Experience building and optimizing data pipelines, architectures and data sets

    ● Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement

    ● Strong analytic skills related to working with unstructured datasets

     

    Technologies:

    • Python
    • AWS: SES, RDS (MySQL), EC2, ECS, VPC, etc
    • Snowflake
    • Docker
    • dbt
    • Airflow

     

    Job Responsibilities

    ● Manage and optimize core data infrastructure

    ● Develop custom data infrastructure not available off-the-shelf

    ● Build monitoring infrastructure to give visibility into the pipeline’s status

    ● Monitor all jobs for impact on cluster performance

    ● Run maintenance routines regularly

    ● Tune table schemas to minimize costs and maximize performance

    ● Build and maintain custom ingestion pipelines

    ● Build non-SQL transformation pipelines

    ● Build processes supporting data transformation, data structures, metadata, dependency and workload management

    Department/Project Description

    GlobalLogic is partnering with a US-based digital health company whose mission is to connect people in social interactions around health conversations, building online health communities where members support each other and share knowledge about how to cope with their conditions.

    About client:
    Our client is a fast-paced digital healthcare company that builds a growing portfolio of online health communities for people with chronic conditions. The company's mission is to improve patients' quality of life by connecting them with each other, with caregivers, and with healthcare industry partners, building beneficial social interactions and meaningful health conversations.

    About project:
    The core of the project is working with the data; the target is to re-design and enhance data-driven solutions. The team is responsible for expanding and optimizing the data flow and data pipeline architecture, model and analyze data, interpret trends or patterns in complex data sets and translate them into product and marketing insights. If you are excited by the prospect of optimizing or even re-designing our data architecture to support our next generation of products and data initiatives, we would be thrilled to have you apply!

  • 15 views · 0 applications · 15d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    Job Description

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

    Job Responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
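    For illustration only, and assuming it runs inside a Databricks Delta Live Tables pipeline (where dlt and spark are provided), a minimal sketch of a config-driven DLT step with an expectation-based quality rule; the source path, table names, and rule are hypothetical:

```python
# Illustrative sketch only: a Delta Live Tables (DLT) pipeline step with a
# data-quality expectation. Runs inside a Databricks DLT pipeline, where the
# `spark` session is provided. Source path, tables, and rule are hypothetical.
import dlt
from pyspark.sql import functions as F

SOURCE_PATH = "abfss://landing@examplelake.dfs.core.windows.net/claims/"

@dlt.table(name="bronze_claims", comment="Raw claims ingested from the landing zone")
def bronze_claims():
    # Auto Loader incrementally picks up new files from cloud storage.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load(SOURCE_PATH)
    )

@dlt.table(name="silver_claims", comment="Validated claims for downstream analytics")
@dlt.expect_or_drop("valid_claim_id", "claim_id IS NOT NULL")
def silver_claims():
    return dlt.read_stream("bronze_claims").withColumn("ingested_at", F.current_timestamp())
```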

    Department/Project Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

  • 26 views · 0 applications · 15d

    Data Engineering Tech Lead (Databricks)

    Full Remote · Poland · 5 years of experience · B2 - Upper Intermediate

    Join our team to work on enhancing a robust data pipeline that powers our SaaS product, ensuring seamless contextualization, validation, and ingestion of customer data. Collaborate with product teams to unlock new user experiences by leveraging data insights. Engage with domain experts to analyze real-world engineering data and build data quality solutions that inspire customer confidence. Additionally, identify opportunities to develop self-service tools that streamline data onboarding and make it more accessible for our users.

     

    Our Client was established with the mission to fundamentally transform the execution of capital projects and operations. Designed by industry experts for industry experts, Client’s platform empowers users to digitally search, visualize, navigate, and collaborate on assets. Drawing on 30 years of software expertise and 180 years of industrial legacy as part of the renowned Scandinavian business group, Client plays an active role in advancing the global energy transition. The company operates from Norway, the UK, and the U.S.

     

    Lead responsibilities:

     

    • Ensure the team follows best practices in coding, data modeling, and system integration.
    • Act as the primary point of contact for resolving complex technical challenges.
    • Onboard new hires by providing hands-on guidance during their initial weeks, ensuring a smooth transition into the team (4-6 engineers).

     

    Key Responsibilities:

     

    • Design, build, and maintain data pipelines using Python.
    • Collaborate with an international team to develop scalable data solutions.
    • Develop and maintain smart documentation for process consistency, including the creation and refinement of checklists and workflows.
    • Set up and configure new tenants, collaborating closely with team members to ensure smooth onboarding.
    • Write integration tests to ensure the quality and reliability of data services.
    • Work with Gitlab to manage code and collaborate with team members.
    • Utilize Databricks for data processing and management.
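    Purely as an illustration of the integration-testing responsibility above, a minimal pytest sketch that exercises a small transformation against a local Spark session; the transformation under test is hypothetical:

```python
# Illustrative sketch: a pytest-style test for a small pipeline transformation,
# run against a local Spark session. The transformation (add_revenue) is hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_revenue(df):
    # Hypothetical transformation: revenue = quantity * unit_price.
    return df.withColumn("revenue", F.col("quantity") * F.col("unit_price"))

@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    yield session
    session.stop()

def test_add_revenue(spark):
    df = spark.createDataFrame(
        [("A", 2, 10.0), ("B", 3, 5.0)], ["sku", "quantity", "unit_price"]
    )
    result = {r["sku"]: r["revenue"] for r in add_revenue(df).collect()}
    assert result == {"A": 20.0, "B": 15.0}
```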

     

    Requirements:

     

    • Programming: Minimum of 5 years as a data engineer or in a relevant field.
    • Python Proficiency: Advanced experience in Python, particularly in delivering production-grade data pipelines and troubleshooting code-based bugs.
    • Data Skills: Structured approach to data insights.
    • Cloud: Familiarity with cloud platforms (preferably Azure).
    • Data Platforms: Experience with Databricks, Snowflake, or similar data platforms.
    • Database Skills: Knowledge of relational databases, with proficiency in SQL.
    • Big Data: Experience using Apache Spark.
    • Documentation: Experience in creating and maintaining structured documentation.
    • Testing: Proficiency in utilizing testing frameworks to ensure code reliability and maintainability.
    • Version Control: Experience with Gitlab or equivalent tools.
    • English Proficiency: B2 level or higher.
    • Interpersonal Skills: Strong collaboration abilities, experience in an international team environment, willing to learn new skills and tools, adaptive and exploring mindset.

     

    Nice to have:

     

    • Experience with Docker and Kubernetes.
    • Experience with document and graph databases.
    • Ability to travel abroad twice a year for on-site workshops.


     
