Jobs Data Engineer

  • 43 views · 1 application · 9d

    Data Engineer (Relocation to Spain)

    Office Work · Spain · Product · 3 years of experience · English - None

    Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange?
    We are looking for a Data Engineer with ETL/ELT for the Spanish office of the most famous Ukrainian company.

    Working with big data, a strong team, assistance with family relocation, and top conditions.

     

    Main Responsibilities

    — Design, build, and maintain scalable and resilient data pipelines (batch and real-time)
    — Develop and support data lake/data warehouse architectures
    — Integrate internal and external data sources/APIs into unified data systems
    — Ensure data quality, observability, and monitoring of pipelines
    — Collaborate with backend and DevOps engineers on infrastructure and deployment
    — Optimize query performance and data processing latency across systems
    — Maintain documentation and contribute to internal data engineering standards
    — Implement data access layers and provide well-structured data for downstream teams

     

    Mandatory Requirements

    — 3+ years of experience as a Data Engineer in high-load or data-driven environments
    — Proficient in Python for data processing and automation (pandas, pyarrow, sqlalchemy, etc.)
    — Advanced knowledge of SQL: query optimization, indexes, partitions, materialized views
    — Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Prefect; see the sketch after this list)
    — Experience with streaming technologies (e.g., Kafka, Flink, Spark Streaming)
    — Solid background in data warehouse solutions: ClickHouse, BigQuery, Redshift, or Snowflake
    — Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code principles
    — Experience with containerization and deployment tools (e.g., Docker, Kubernetes, CI/CD)
    — Understanding of data modeling, data versioning, and schema evolution (e.g., dbt, Avro, Parquet)
    — English — at least intermediate (for documentation & communication with tech teams)
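
    As a rough illustration of the kind of batch ETL orchestration named above, here is a minimal, hypothetical Airflow DAG (Airflow 2.4+ assumed); the task names, schedule, and transform logic are assumptions, not details from the posting.

```python
# A minimal, hypothetical batch ETL DAG (assumes Airflow 2.4+). Task names,
# the schedule, and the transform logic are illustrative assumptions only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system or API.
    return [{"id": 1, "amount": 10.5}, {"id": 2, "amount": 7.25}]


def transform(ti, **context):
    # Placeholder: clean and reshape the extracted records.
    rows = ti.xcom_pull(task_ids="extract")
    return [{"id": r["id"], "amount_cents": int(r["amount"] * 100)} for r in rows]


def load(ti, **context):
    # Placeholder: write the transformed rows to the warehouse.
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="example_batch_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```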

     

    We offer

    Immerse yourself in Crypto & Web3:
    — Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    — Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    — Gain unique experience and be part of global transformations.
    Drive Innovations:
    — Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    — Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    — Modern equipment.
    — Comfortable working conditions and an inspiring environment to help you thrive.
    — 30 calendar days of paid leave.
    — Additional days off for national holidays.

     

    With us, you’ll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you’re ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
     

  • 37 views · 1 application · 9d

    Data Engineer (with Azure)

    Full Remote · EU · 3 years of experience · English - B1

    Main Responsibilities:

    The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.

     

    You will work with cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, on implementation projects for corporate clients across the EU, the CIS, the United Kingdom, and the Middle East.

    Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.

     

    Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and optimizing models.
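
    For illustration, a minimal PySpark sketch of the kind of ETL step described above; the storage paths and column names are hypothetical assumptions only.

```python
# A minimal, hypothetical PySpark ETL step of the kind described above.
# The storage paths and column names are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: read raw events from a data lake landing zone (path is hypothetical).
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/events/")

# Transform: keep valid records and aggregate per customer per day.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(F.sum("amount").alias("daily_amount"))
)

# Load: write the curated result as Parquet, partitioned by date (path is hypothetical).
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("abfss://curated@examplelake.dfs.core.windows.net/daily_amounts/"))
```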

     

    Mandatory Requirements:

    – 3+ years of experience, ideally in a Data Engineer role

    – Understanding of data modeling, data warehousing concepts, and ETL processes

    – 2+ years of experience with Azure Cloud technologies

    – Experience with distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)

    – Understanding of landing and staging areas, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)

    – Strong SQL skills

    – Strong communication and interpersonal skills

    – English — B2

     

    It will be beneficial if a candidate has experience in SQL migration from on-premises to the cloud, data modernization and migration, or advanced analytics projects, and/or holds a professional certification in data & analytics.

     

    We offer:

    – Professional growth and international certification

    – Free technical and business training and top bootcamps (worldwide, including courses at Microsoft HQ in Redmond)

    – Innovative data & analytics projects and practical experience with cutting-edge Azure data & analytics technologies on various customer projects

    – Great compensation and individual bonus remuneration

    – Medical insurance

    – Long-term employment

    – Individual development plan

  • 31 views · 0 applications · 9d

    Senior Data Engineer

    Ukraine · 4 years of experience · English - B2

    We are a global audience and location intelligence company that helps marketers connect the digital and physical worlds. We provide data-driven solutions that enhance marketing campaigns by leveraging location and audience data to reveal consumer behavior and enable more precise targeting and measurement. We work on high-end, high-performance, high-throughput systems for the timely analysis of data for autonomous driving and other big data applications, e.g., e-commerce.


    Job Description

    You have 4+ years of experience in a similar position.

    You have significant experience with Python. Familiarity with Java or Scala is a plus.

    Hands-on experience building scalable solutions in AWS.

    Proficiency in NoSQL and SQL databases and in high-throughput data-related architecture and technologies (e.g. Kafka, Spark, Hadoop, MongoDB, AWS Batch, AWS Glue, Athena, Airflow, dbt).

    Excellent SQL and data transformation skills.

    Excellent written and verbal communication skills with an ability to simplify complex technical information.

    Experience guiding and mentoring junior team members in a collaborative environment.


     

    Job Responsibilities

    You will work in a self-organised agile team with a high level of autonomy and actively shape your team's culture.

    Design, build, and standardise privacy-first big data architectures, large-scale data pipelines, and advanced analytics solutions in AWS.

    Develop complex integrations with third-party partners, transferring terabytes of data.

    Align with other Data experts on data (analytics) engineering best practices and standards, and introduce those standards and data engineering expertise to the team in order to enhance existing data pipelines and build new ones.

    Successfully partner up with the Product team to constantly develop further and improve our platform features.

  • 37 views · 7 applications · 9d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B1

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people’s lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.



    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
  • 40 views · 2 applications · 9d

    Data Engineer ( with Snowflake and insurance companies experience)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Senior Data Engineer with experience in insurance & Snowflake.
    On behalf of our Client from the USA, Mobilunity is looking for a Senior Data Engineer.
    Our client is a well-established US-based financial services organization with a long history in the insurance and long-term financial security space. The company operates as a member-oriented, non-profit institution, focusing on life insurance, retirement programs, and community-driven initiatives. The product ecosystem is mature, data-heavy, and highly regulated, with a strong emphasis on reliability, accuracy, and compliance.
    We are looking for a Senior Data Engineer with deep Snowflake expertise and hands-on experience in insurance companies. 

    This domain background is a must-have requirement.
     

    Requirements:
    - 5+ years of experience as a Data Engineer.
    - Strong hands-on experience with Snowflake (data modeling, optimization, ELT/ETL pipelines).
    - Mandatory experience in the insurance domain (policies, claims, underwriting, actuarial, or related datasets).
    - Solid experience with data warehousing and analytical platforms.
    - Proven ability to build and maintain scalable, reliable data pipelines.
    - Advanced SQL skills.
    - Experience working with cloud platforms (AWS / GCP / Azure).
    - Upper-Intermediate+ / Advanced English – direct communication with US stakeholders.
     

    Nice to have:
    - Experience with dbt, Airflow, or similar orchestration tools.
    - Background in regulated or compliance-heavy environments.
    - Previous experience working with US clients or distributed teams.
     

    In return we offer:
    The friendliest community of like-minded IT people.
    An open knowledge-sharing environment – exclusive access to a rich pool of colleagues willing to share their endless insights into the broadest variety of modern technologies.


    Languages
    English - B2-C1

  • 7 views · 0 applications · 9d

    Infrastructure Engineer with Java (hybrid work in Warsaw)

    Office Work · Poland · 5 years of experience · English - B2

    The product we are working on is one of TOP-3 navigation systems, complex web services, and other solutions related to it. The web and mobile apps handle information at a massive scale and extend well beyond the search, giving people and companies a lot of new, useful options.

    This role focuses on executing critical migration projects within the backend infrastructure of the project. The Backend Infrastructure team is undertaking several large-scale migrations to modernize its systems, improve reliability, and reduce maintenance overhead. This TVC position will be instrumental in performing the hands-on work required for these migrations, working closely with the infrastructure team and other Backend teams.
     

    Responsibilities:
     

    • Execute Migrations: Actively participate in and drive the execution of large-scale code and system migrations across various backend services. Some examples include:
      • migrating event processing systems from custom infrastructure to managed infrastructure solutions;
      • Transitioning services from custom OpenCensus metrics collection to OpenTelemetry;
      • migrating custom metrics to standard OpenTelemetry metrics.
    • Code Modification and Updates: Update and refactor existing codebases (primarily Java) to align with new libraries, platforms, and infrastructure.
    • Testing: Work with the Infrastructure team to create a testing plan for migrations to ensure that changes do not break running services and execute the test plans.
    • Collaboration: Work closely with the Backend Infrastructure team and other software engineers to understand migration requirements, plan execution strategies, and ensure smooth transitions with minimal disruption.
    • Problem Solving: Investigate, debug, and resolve technical issues and complexities encountered during the migration processes.
    • Documentation: Maintain clear and concise documentation for migration plans, processes, changes made, and outcomes.
    • Best Practices: Adhere to software development best practices, ensuring code quality, and follow established guidelines for infrastructure changes.

       

    Requirements:

    • 5+ years of hands-on experience in backend software development.
    • Strong proficiency in Java programming.
    • Strong communication and interpersonal skills, with the ability to collaborate effectively within a technical team environment.
    • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
    • Good spoken and written English level β€” Upper-Intermediate or higher.
       

    Nice to have:

    • Experience with observability frameworks such as OpenTelemetry or OpenCensus.
    • Familiarity with gRPC.
    • Knowledge of Google Cloud Platform (GCP) services, particularly data processing services like Dataflow.
       

    We offer:

    • Opportunities to develop in various areas;
    • Compensation package (20 paid vacation days, paid sick leaves);
    • Flexible working hours;
    • Medical insurance;
    • English courses with a native speaker, yoga (Zoom);
    • Paid tech training and other activities for professional growth;
    • Hybrid work mode (∼3 days in the office);
    • International business trips
    • Comfortable office.

       

    If your qualifications and experience match the requirements of the position, our recruitment team will reach out to you within a week at most. Please rest assured that we carefully consider each candidate, but due to the volume of applications, the review and further processing of your candidacy may take some time.

  • 49 views · 1 application · 9d

    Data Engineer to $4300

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B2

    CrunchCode is an international IT service company with about 7 years of experience in developing web services and web applications. We work in staff augmentation (outstaff) and outsourcing formats and bring our specialists into client projects under a long-term cooperation model.

    We work mostly with projects in the logistics (including last mile), e-commerce, fintech, and banking domains, as well as with enterprise solutions.
    It is important to us that a project is "clean" and clear in terms of ethics and value for its users.

    As a matter of principle, we do not take on projects related to:
    ● gambling,
    ● adult content and pornography,
    ● fraud or any development aimed at deception or manipulation.

    What We Offer:
    ● Fully remote work
    ● Long-term, stable project
    ● High level of autonomy and trust
    ● Minimal bureaucracy
    ● Direct impact on business-critical logistics systems
    ● Long-term engagement, not a short-term contract.

    Project Overview:
    The project is a cloud-based analytics platform designed for commercial real estate. It provides tools for data analysis, portfolio management, financial insights, and lease tracking, helping owners, property managers, and brokers make informed, data-driven decisions.

    Requirements (Must-have):
    - Strong English communication skills (B2)

    - Power BI skills:
    • Able to understand the data sources and the data relevant for analysis
    • Design and refine data models; familiarity with dimensional models
    • Develop interactive reports and dashboards
    • Knowledge of DAX

    - Azure and DB skills:
    • Proficiency in ETL/ELT design, development, and support
    • Strong hands-on experience with Azure Data Factory
    • Experience with Azure Functions (see the sketch after this list)
    • Stored procedure writing and optimization
    • Telerik .NET Reporting experience (nice to have)
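
    A small, hypothetical sketch of an Azure Function (Python, assuming a timer trigger declared in function.json, which is not shown) that refreshes reporting data by calling a stored procedure; the connection setting, database objects, and schedule are assumptions only.

```python
# A hypothetical timer-triggered Azure Function (Python v1 programming model;
# the timer binding named "refresh_timer" would be declared in function.json,
# not shown here). Connection setting and stored procedure names are assumptions.
import logging
import os

import azure.functions as func
import pyodbc


def main(refresh_timer: func.TimerRequest) -> None:
    if refresh_timer.past_due:
        logging.warning("Refresh timer is running late")

    # Connection string is read from application settings (name is hypothetical).
    conn = pyodbc.connect(os.environ["REPORTING_DB_CONNECTION"])
    try:
        cursor = conn.cursor()
        # Hypothetical stored procedure that rebuilds a reporting aggregate.
        cursor.execute("EXEC dbo.usp_refresh_lease_summary")
        conn.commit()
    finally:
        conn.close()

    logging.info("Reporting data refresh completed")
```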

    Responsibilities:
    Continue improving existing data reporting tools. List of existing integrations (where data comes from):
    - Procore
    - DealPath
    - Yardi
    - MRI
    - JDE
    - VTS
    - OneSite
    - CoStar
    - Argus
    - Salesforce
    - RealPage

    Nice to Have:
    - Basic Python skills

    Required: 
    The client is in the PST time zone and is available for communication from 5-6 pm Ukraine time. The specialist should be available until 7 pm Ukraine time.

    Hiring Process:
    - Intro call
    - Technical discussion (focused on real experience)
    - Offer
    Start: ASAP

  • 62 views · 6 applications · 10d

    Senior Data Engineer (for Ukrainians in EU)

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B2

    About our Customer
    It's a European company turning bold ideas into reality. We build innovative products for startups and guide established companies on their journey to data-driven innovation and AI-powered solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.
     

    Founded in Ukraine with a Scandinavian-inspired culture.
     

    We value skills, passion, excellence, equality, openness, mutual respect, and trust. You’ll join a growing company, work with creative, inspiring colleagues, explore cutting-edge technologies, and build AI-driven solutions that make a real impact.
     

    Project
    Our client is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and optic networks.
     

    We are looking for a Senior Data Engineer ready to dive deep into data, solve challenging problems, and create maximum value for internal stakeholders. You’ll handle complex issues, design long-term improvements, and develop new data pipelines as part of an enthusiastic and collaborative Data Engineering team.
     

    Tech Stack:
    🗄️ MS SQL Server | Azure/Databricks | Power BI, Tableau | Microsoft BI stack (SSRS, SSIS, SSAS) | TimeXtender | exMon
     

    Responsibilities:

    • Develop & maintain enterprise data warehouse, data marts, staging layers, and transformation logic
    • Design, implement & optimize ETL/ELT pipelines (SQL Server, Azure, Databricks)
    • Build & maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets)
    • Improve BI environment and ensure data is reliable and actionable
    • Implement controlled data delivery processes to analysts & BI specialists
    • Support data quality frameworks, testing & validation procedures
    • Investigate 3rd-line operational issues & guide 2nd-line support
    • Run stakeholder workshops to translate business needs into elegant technical solutions
    • Identify opportunities to improve data usability, value, and automation
    • Document all processes, models, and pipelines in Confluence
    • Collaborate with on-site Team Lead for sprint planning, backlog refinement, and prioritization
       

    Requirements

    • Bachelor’s or Master’s in Computer Science or related field
    • 6+ years of experience with DWH solutions & data pipelines
    • Strong SQL development skills (MS SQL Server preferred)
    • ETL/ELT workflow experience using:
      • Databricks
      • Azure Data Factory / cloud orchestration tools
      • Azure data platform services (storage, compute, data lake)
    • Solid understanding of data warehouse architectures & dimensional modeling
    • Experience with data quality checks, validation, and monitoring
    • Understanding of BI concepts & ability to prepare user-friendly datasets
    • Strong communication, able to explain data concepts to stakeholders
    • Willingness to document solutions and share knowledge
    • Experience in distributed, cross-cultural Agile environments
    • English: upper-intermediate / advanced


    🔹 Bonus / Nice to Have

    • Python or similar for data processing
    • Performance tuning for SQL or data pipelines
    • Interest in visual clarity & usability of data models
  • 54 views · 4 applications · 10d

    Data Engineer_support

    Full Remote · EU · 3 years of experience · English - B2

    OUR COMPANY  

    HBM is a European company building exciting new products from scratch for startups and helping mature companies on their journey towards data-driven innovation and AI-based solutions. Our expertise covers EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.

    Founded in Ukraine and developed on the basis of Scandinavian culture, HBM is hiring both in Ukraine and the EU for our customers located in Europe and the USA.

      

    Our values include skills, passion, excellence, equality, openness, mutual respect, and trust. 

      

    At HBM, you can become part of a growing company, work with creative colleagues, and enjoy modern technologies while creating AI-based solutions. You'll be part of a strong corporate culture combined with the agility and flexibility of a start-up, backed by proven outsourcing and development practices, a human-oriented leadership team, an entrepreneurial mindset, and an approach to work-life balance.

      

    PROJECT 

    Our customer is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and optic networks.

    We are looking for a Data Engineer with strong technical troubleshooting skills to be responsible for monitoring, investigating, and resolving operational issues related to data warehouse and data pipelines. The qualified candidate will work as part of the Data Engineering team and will handle incoming 2nd-line support tickets (primarily task failures, timeouts, execution errors, and data inconsistencies in scheduled processes). 

    The role ensures that daily operational data flows run reliably and that incidents are triaged and resolved efficiently. 

    Tech stack: MS SQL Server, Azure/Databricks, Power BI, Tableau, Microsoft BI stack (SSRS, SSIS, SSAS [OLAP and Tabular]), TimeXtender, exMon.

     

    WE PROVIDE YOU WITH THE FOLLOWING CHALLENGES 

    • Troubleshooting failed scheduled tasks (e.g., ETL pipelines that time out, fail on specific datasets, or produce partial/incomplete outputs) 
    • Investigating recurring timeout issues in ETL jobs (e.g., exMon timeout while running data extraction from in-house systems) 
    • Resolving warnings raised by the monitoring system (exMon) 
    • Identifying and escalating data quality inconsistencies (e.g., discrepancies in RG41 SCADA data, mismatches in business-critical tables) 
    • Running or re-running failed jobs when appropriate 
    • Correcting configuration issues in pipeline parameters, schedule triggers, or source/target connections 
    • Cooperating closely with on-site team (status meeting, sprint planning, etc) 
    • Collaborating closely with the Data Engineering Team Lead for priorities and escalations 
    • Updating Jira tickets in English with clear problem descriptions and resolutions 
    • Gradually take on more data engineering tasks (beside support) 

      

    WE EXPECT FROM YOU 

    • Degree (bachelor or master) in computer science or a comparable course of study 
    • 3+ years of experience working with DWH solutions and data pipelines 
    • Strong SQL debugging skills (preferably MS SQL Server) 
    • Experience with ETL / ELT workflows (SSIS, ADF, custom pipelines, or similar) 
    • Familiarity with data warehouse concepts (fact tables, dimensions, staging layers) 
    • Ability to parse log outputs, identify root causes, and correct configuration or code-level issues in data jobs 
    • Experience with job scheduling/monitoring systems (e.g., exMon or equivalents) 
    • Excellent communication skills, ability to communicate to stakeholders on multiple levels 
    • Action and quality-oriented 
    • Experience of work the distributed, cross-culture Agile environment 
    • English: upper-intermediate / advanced 

     

    WOULD BE A PLUS 

    • Experience with Python or similar languages for data processing 

     

      WE OFFER YOU 

      

    • Modern technologies, new products development, different business domains. 
    • Start-up agility combined with mature delivery practices and management team. 
    • Strong focus on your technical and personal growth. 
    • Transparent career development and individual development plan. 
    • Flexible working mode (remote/work from office), full remote possibility. 
    • Competitive compensation and social package 
    • Focus on the well-being and human touch. 
    • Flat organization where everyone is heard and is invited to contribute. 
    • Work-life balance approach to work. 
    • Passion and Fun in everything we do. 
  • 43 views · 1 application · 10d

    Senior Data Engineer

    Full Remote · EU · 6 years of experience · English - B2

    OUR COMPANY  

    HBM is a European company building exciting new products from scratch for startups and helping mature companies on their journey towards data-driven innovation and AI-based solutions. Our expertise covers EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more.

    Founded in Ukraine and developed on the basis of Scandinavian culture, HBM is hiring both in Ukraine and the EU for our customers located in Europe and the USA.

      

    Our values include skills, passion, excellence, equality, openness, mutual respect, and trust. 

      

    At HBM, you can become part of a growing company, work with creative colleagues, and enjoy modern technologies while creating AI-based solutions. You'll be part of a strong corporate culture combined with the agility and flexibility of a start-up, backed by proven outsourcing and development practices, a human-oriented leadership team, an entrepreneurial mindset, and an approach to work-life balance.

      

    PROJECT 

    Our customer is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and optic networks.

    We are looking for a Senior Data Engineer who will be responsible for developing, enhancing, and maintaining the enterprise data warehouse, data platform, and analytical data flows. The role supports all of the company's subsidiaries and contributes to creating maximum value from data for internal stakeholders.

    The qualified candidate will work as part of the Data Engineering team and will handle complex 3rd-line issues, long-term improvements, and new data development. The work will be aligned with the team’s structured 3-week planning cycles, and tight collaboration with the on-site Team Lead is expected. 

    Tech stack: MS SQL Server, Azure/Databricks, Power BI, Tableau, Microsoft BI stack (SSRS, SSIS, SSAS [OLAP and Tabular]), TimeXtender, exMon.

     

    WE PROVIDE YOU WITH THE FOLLOWING EXCITING CHALLENGES 

    • Develop and maintain the enterprise data warehouse, data marts, staging layers, and transformation logic 
    • Design, implement, and optimize ETL/ELT pipelines (SQL Server, Azure data components, Databricks, etc.) 
    • Build and maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets) 
    • Develop and improve the BI environment and the underlying data processes used by analysts across the company 
    • Implement processes for controlled, reliable data delivery to BI specialists, analysts, and modelling teams (e.g., forecasting, scenario modelling) 
    • Support data quality frameworks and implement testing/validation procedures 
    • Investigate and resolve escalated 3rd-line operational issues and guide 2nd-line support in root cause analysis 
    • Conduct stakeholder workshops to understand business requirements and translate them into technical data solutions 
    • Identify opportunities to improve data usability, analytical value, and process automation 
    • Document data processes, models, pipelines, and architectural decisions in Confluence 
    • Collaborate with the on-site Team Lead during sprint planning, backlog refinement, and prioritization. 

     

      

    WE EXPECT FROM YOU 

    • Degree (bachelor or master) in computer science or a comparable course of study 
    • 6+ years of experience working with DWH solutions and data pipelines 
    • Strong SQL development skills, preferably in MS SQL Server 
    • Experience building and maintaining ETL/ELT workflows using: 
    • Databricks 
    • Azure Data Factory or similar cloud-based data orchestration tools 
    • Azure data platform services (e.g., storage, compute, data lake formats) 
    • Solid understanding of data warehouse architectures and dimensional modelling 
    • Experience with data quality checks, validation frameworks, and monitoring 
    • Understanding of BI concepts and ability to prepare user-friendly analytical datasets 
    • Experience collaborating with business stakeholders and capturing analytical or operational data requirements 
    • Strong communication skills and the ability to explain data concepts clearly 
    • Willingness to document solutions and share knowledge within the team 
    • Excellent communication skills, ability to communicate to stakeholders on multiple levels 
    • Action and quality-oriented 
    • Experience of work the distributed, cross-culture Agile environment 
    • English: upper-intermediate / advanced 

     

    WOULD BE A PLUS 

    • Experience with Python or similar languages for data processing 
    • Experience with performance tuning for SQL or data pipelines 
    • Interest in visual clarity, usability of data models, and BI-driven design 

     

     

     WE OFFER YOU 

      

    • Modern technologies, new products development, different business domains. 
    • Start-up agility combined with mature delivery practices and management team. 
    • Strong focus on your technical and personal growth. 
    • Transparent career development and individual development plan. 
    • Flexible working mode (remote/work from office), full remote possibility. 
    • Competitive compensation and social package 
    • Focus on the well-being and human touch. 
    • Flat organization where everyone is heard and is invited to contribute. 
    • Work-life balance approach to work. 
    • Passion and Fun in everything we do. 
  • 40 views · 5 applications · 10d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · English - B2

    🎯 What You’ll Actually Do

    • Architect and run high-load, production-grade data pipelines where correctness and latency matter.
    • Design systems that survive schema changes, reprocessing, and partial failures.
    • Own data availability, freshness, and trust - not just pipeline success.
    • Make hard calls: accuracy vs cost, speed vs consistency, rebuild vs patch.
    • Build guardrails so downstream consumers (Analysts, Product, Ops) don’t break.
    • Improve observability: monitoring, alerts, data quality checks, SLAs.
    • Partner closely with backend engineers, data analysts, and Product - no handoffs, shared ownership.
    • Debug incidents, own RCA, and make sure the same class of failure doesn’t return.

    This is a hands-on IC role with platform-level responsibility.

     

    🧠 What You Bring

    • 5+ years in data or backend engineering on real production systems.
    • Strong experience with columnar analytical databases (ClickHouse, Snowflake, BigQuery, similar).
    • Experience with event-driven / streaming systems (Kafka, pub/sub, CDC, etc.).
    • Strong SQL + at least one general-purpose language (Python, Java, Scala).
    • You think in failure modes, not happy paths.
    • You explain why something works - and when it shouldn’t be used.

    Bonus: You’ve rebuilt or fixed a data system that failed in production.

     

    🔧 How We Work

    • Reliability > elegance. Correct data beats clever data.
    • Ownership > tickets. You run what you build.
    • Trade-offs > dogma. Context matters.
    • Direct > polite. We fix problems, not dance around them.
    • One team, one system. No silos.

    🔥 What We Offer

    • Fully remote.
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • Medical insurance for you and your partner.
    • Learning budget (courses, conferences, certifications).
    • High trust, high autonomy.
    • Zero bureaucracy. Real engineering problems.

       

    👉 Apply if you see data platforms as systems to be engineered - not pipelines to babysit.

  • 35 views · 10 applications · 10d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · English - B2

    🎯 What You’ll Actually Do

    • Design and run high-throughput, production-grade data pipelines.
    • Own data correctness, latency, and availability end to end.
    • Make hard trade-offs: accuracy vs speed, cost vs freshness, rebuild vs patch.
    • Design for change - schema evolution, reprocessing, and new consumers.
    • Protect BI, Product, and Ops from breaking changes and silent data issues.
    • Build monitoring, alerts, and data quality checks that catch problems early.
    • Work side-by-side with Product, BI, and Engineering β€” no handoffs, shared ownership.
    • Step into incidents, own RCA, and make sure the same class of failure never repeats.

    This is a hands-on senior IC role with real accountability.

     

     

    🧠 What You Bring (Non-Negotiable)

    • 5+ years in data or backend engineering on real production systems.
    • Strong experience with analytical databases
      (ClickHouse, Snowflake, BigQuery, or similar).
    • Experience with event-driven or streaming systems
      (Kafka, CDC, pub/sub).
    • Solid understanding of:
      • at-least-once vs exactly-once semantics
      • schema evolution & backfills
      • mutation and reprocessing costs
    • Strong SQL and at least one programming language
      (Python, Java, Scala, etc.).
    • You don’t just ship - you own what happens after.

       

    πŸ”§ How We Work

    • Reliability > cleverness.
    • Ownership > process.
    • Impact > output.
    • Direct > polite.
    • One team, one system.

       

    πŸ”₯ What We Offer

    • Fully remote (Europe).
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • Medical insurance for you and your partner.
    • Learning budget (courses, conferences, certifications).
    • High trust, high autonomy.
    • No bureaucracy. Real data problems.

       

    👉 Apply if you treat data like production software - and feel uncomfortable when numbers can't be trusted.

  • 48 views · 9 applications · 10d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - None

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered the largest Data Science Community in Eastern Europe, boasting a network of over 30,000 top AI engineers.

    About the client:
    We are working with a new generation of data service provider, specializing in data consulting and data-driven digital marketing, dedicated to transforming data into business impact across the entire value chain of organizations. The company's data-driven services are built upon the deep AI expertise it has acquired with a 1,000+ client base around the globe. The company has 1,000 employees across 20 offices who are focused on accelerating digital transformation.

    About the role:
    We are seeking a Senior Data Engineer (Azure) to design and maintain data pipelines and systems for analytics and AI-driven applications. You will work on building reliable ETL/ELT workflows and ensuring data integrity across the organization.

    Required skills:
    - 6+ years of experience as a Data Engineer, preferably in Azure environments.
    - Proficiency in Python, SQL, NoSQL, and Cypher for data manipulation and querying.
    - Hands-on experience with Airflow and Azure Data Services for pipeline orchestration.
    - Strong understanding of data modeling, ETL/ELT workflows, and data warehousing concepts.
    - Experience in implementing DataOps practices for pipeline automation and monitoring.
    - Knowledge of data governance, data security, and metadata management principles.
    - Ability to work collaboratively with data science and analytics teams.
    - Excellent problem-solving and communication skills.

    Responsibilities:
    - Transform data into formats suitable for analysis by developing and maintaining processes for data transformation, structuring, metadata management, and workload management.
    - Design, implement, and maintain scalable data pipelines on Azure.
    - Develop and optimize ETL/ELT processes for various data sources.
    - Collaborate with data scientists and analysts to ensure data readiness.
    - Monitor and improve data quality, performance, and governance.

  • 66 views · 3 applications · 10d

    Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · English - None

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.

    You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.

    Requirements:
    - Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    - NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project's focus.
    - Understanding of FineWeb2 or a similar processing pipeline approach.
    - Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    - Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    - Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
    - Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    - Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    - Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

    Nice to have:
    - Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    - Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
    - CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    - Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    - Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve the workflows.

    Responsibilities:
    - Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
    - Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    - Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
    - Implementation of NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering of toxic content, de-duplication, de-noising, and detection and deletion of personal data (a minimal sketch follows this list).
    - Formation of specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    - Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    - Automate data processing workflows and ensure their scalability and reliability.
    - Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    - Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
    - Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    - Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
    - Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    - Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    - Manage data security, access, and compliance.
    - Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
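
    A small, hypothetical sketch of the text cleaning and de-duplication step referenced above; real pipelines (e.g., FineWeb2-style processing) are far more involved, and the regexes and sample corpus here are assumptions only.

```python
# A minimal, hypothetical text cleaning/de-duplication step: normalize whitespace,
# mask simple personal data, and drop exact duplicates. Regexes and the sample
# corpus are illustrative assumptions; real pipelines are far more involved.
import hashlib
import re
import unicodedata

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{8,}\d")

def normalize(text: str) -> str:
    """Unicode-normalize and collapse whitespace."""
    text = unicodedata.normalize("NFC", text)
    return re.sub(r"\s+", " ", text).strip()

def mask_pii(text: str) -> str:
    """Mask obvious e-mail addresses and phone numbers."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    return PHONE_RE.sub("<PHONE>", text)

def deduplicate(docs: list[str]) -> list[str]:
    """Drop exact duplicates by hashing the normalized, masked text."""
    seen: set[str] = set()
    unique = []
    for doc in docs:
        cleaned = mask_pii(normalize(doc))
        digest = hashlib.sha1(cleaned.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(cleaned)
    return unique

corpus = ["Привіт,  світ!", "Привіт, світ!", "Пишіть на test@example.com"]
print(deduplicate(corpus))
```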

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • 18 views · 1 application · 10d

    Senior Snowflake Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    The project is for one of the world's most famous science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

    We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like dbt, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.

    • Responsibilities:

      β€’ In-depth knowledge of Snowflake's data warehousing capabilities.
      β€’ Understanding of Snowflake's virtual warehouse architecture and how to optimize performance
      and cost.
      β€’ Proficiency in using Snowflake's data sharing and integration features for seamless collaboration.
      β€’ Develop and optimize complex SQL scripts, stored procedures, and data transformations.
      β€’ Work closely with data analysts, architects, and business teams to understand requirements and
      deliver reliable data solutions.
      β€’ Implement and maintain data models, dimensional modeling for data warehousing, data marts,
      and star/snowflake schemas to support reporting and analytics.
      β€’ Integrate data from various sources including APIs, flat files, relational databases, and cloud
      services.
      β€’ Ensure data quality, data governance, and compliance standards are met.
      β€’ Monitor and troubleshoot performance issues, errors, and pipeline failures in Snowflake and
      associated tools.
      β€’ Participate in code reviews, testing, and deployment of data solutions in development and production environments.
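
    A small, hypothetical sketch of running a Snowflake transformation from Python with the snowflake-connector-python package; the account settings, warehouse, and table names are assumptions only.

```python
# A minimal, hypothetical Snowflake transformation run from Python via
# snowflake-connector-python. Account settings, warehouse, database, schema,
# and table names are illustrative assumptions; credentials come from env vars.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",   # hypothetical virtual warehouse
    database="ANALYTICS",       # hypothetical database
    schema="REPORTING",         # hypothetical schema
)

try:
    cur = conn.cursor()
    # Hypothetical incremental transformation: refresh a daily sales aggregate.
    cur.execute("""
        CREATE OR REPLACE TABLE DAILY_SALES AS
        SELECT ORDER_DATE, PRODUCT_ID, SUM(AMOUNT) AS TOTAL_AMOUNT
        FROM RAW.SALES_ORDERS
        GROUP BY ORDER_DATE, PRODUCT_ID
    """)
    cur.close()
finally:
    conn.close()
```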

    • Mandatory Skills Description:

      β€’ 5+ years of experience
      β€’ Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
      β€’ Ability to write complex SQL queries, stored procedures, and user-defined functions.
      β€’ Skills in optimizing SQL queries for performance and efficiency.
      β€’ Experience with ETL/ELT tools and techniques, including Snowpipe, AWS Glue, openflow, fivetran
      or similar tools for real-time and periodic data processing.
      β€’ Proficiency in transforming data within Snowflake using SQL, with Python being a plus.
      β€’ Strong understanding of data security, compliance and governance.
      β€’ Experience with DBT for database object modeling and provisioning.
      β€’ Experience in version control tools, particularly Azure DevOps.
      β€’ Good documentation and coaching practice.
