Data Engineer Jobs (162)

  • · 206 views · 9 applications · 5d

    Google Cloud Solutions Architect / Pre-Sales Engineer

    Full Remote · Ukraine · Product · 1 year of experience · English - B1

    Google Cloud Solutions Architect / Pre-Sales Engineer

     

    Company: Softprom Solutions
    Location: Remote (Ukraine)
    Employment: Full-time, FOP (contractor)

     

    About the role

    Softprom Solutions is looking for a Google Cloud Solutions Architect / Pre-Sales Engineer to join our Cloud team.

    This role is ideal for a specialist with hands-on experience in Google Cloud Platform and an active Google Cloud certification, who wants to work on real commercial projects, participate in pre-sales activities, and grow in cloud architecture.

     

    Responsibilities

    • Design and document Google Cloud architectures according to the Google Cloud Architecture Framework
    • Participate in pre-sales activities:
      • technical discovery with customers
      • solution design
      • participation in demos and presentations
    • Deploy and configure core GCP services, including:
      • Compute Engine, Cloud Storage, Cloud SQL
      • VPC, IAM, Load Balancing
      • Cloud Functions / Cloud Run
    • Design and configure GCP networking:
      • VPC networks, subnets
      • Firewall rules
      • Routes
    • Implement and support Infrastructure as Code (IaC) using Terraform
    • Create technical and solution documentation
    • Act as a technical point of contact for sales and customers

     

    Requirements (Must have)

    • Active Google Cloud certification
      (Associate Cloud Engineer or Professional Cloud Architect)
    • Experience as Cloud Engineer / Solutions Architect / Pre-Sales Engineer
    • Practical understanding of Google Cloud core services:
      • Compute, Storage, Databases, Networking, Security
    • Solid networking knowledge:
      • TCP/IP (L3, L4)
      • IP addressing, subnetting
      • DNS
    • Understanding of cloud principles:
      • High Availability
      • Fault Tolerance
      • Scalability
      • Security
    • Linux skills (command line)

     

    Nice to have

    • Experience in pre-sales or customer-facing roles
    • Experience with Terraform
    • Basic scripting skills (Python, Bash)
    • Experience with Docker, Kubernetes (GKE), Cloud Run
    • Experience with AWS or multi-cloud environments

     

    Soft skills

    • Ability to communicate technical solutions to non-technical audiences
    • Structured and analytical thinking
    • Proactivity and ownership
    • Ukrainian — fluent
    • English — reading and basic spoken (technical / pre-sales)

     

    We offer

    • Compensation: $2,000 + project-based bonuses
    • FOP cooperation
    • Full-time workload
    • Real commercial Google Cloud projects
    • Participation in pre-sales and architecture design
    • Professional growth in Cloud & Solutions Architecture
    • International projects and vendors
    • Strong cloud team and mentorship
  • · 26 views · 4 applications · 5d

    AWS Cloud Engineer

    Full Remote · Worldwide · Product · 2 years of experience · English - B2

    AWS Cloud Engineer

    Softprom Solutions
    Azerbaijan | Remote / Hybrid
    Full-time | Contractor (Individual Entrepreneur)
    Contract with Austria

     

    About Softprom

    Softprom Solutions is an international IT distributor and solutions provider working with leading global vendors in Cloud, Cybersecurity, Infrastructure, and Enterprise IT.

    We are expanding our Cloud Practice and are looking for an AWS Cloud Engineer in Azerbaijan who wants to grow professionally, work on real customer projects, and collaborate with international teams under an Austrian contract.

     

    About the role

    This role is ideal for an AWS-certified engineer with solid fundamentals who wants to deepen hands-on experience in cloud architecture, deployments, automation, and customer-facing work.

    You will work closely with senior architects, sales teams, and customers, participating in both technical delivery and pre-sales activities.

     

    Responsibilities

    • Support the design and documentation of AWS cloud architectures following the AWS Well-Architected Framework
       
    • Participate in deployment and configuration of core AWS services, including
      VPC, EC2, S3, RDS, IAM, Lambda, Load Balancers
       
    • Assist with AWS networking configuration:
      Subnets, Route Tables, Security Groups, NACLs
       
    • Contribute to automation and Infrastructure as Code (IaC) initiatives using
      Terraform and/or AWS CloudFormation
       
    • Create and maintain technical documentation for architectures and configurations
       
    • Participate in customer meetings, presentations, and demos, explaining AWS solutions and capabilities

       

    Requirements (Must have)

    • Active AWS Certified Solutions Architect – Associate (SAA-C03)
       
    • Solid theoretical understanding of AWS core services:
      Compute, Storage, Databases, Networking, Security
       
    • Basic networking knowledge:
      TCP/IP (L3, L4), IP addressing, subnetting, DNS
       
    • Understanding of cloud principles:
      High Availability, Fault Tolerance, Scalability, Security
       
    • Basic Linux skills (command line)
       
    • Ability to read and understand technical documentation in English

       

    Nice to have

    • Hands-on experience with Terraform or CloudFormation
       
    • Basic scripting skills (Python, Bash)
       
    • Familiarity with Docker, Kubernetes, ECS
       
    • Personal, educational, or non-commercial projects deployed on AWS

       

    Soft skills

    • Strong motivation to learn and grow in cloud engineering
       
    • Structured and analytical thinking
       
    • Clear and confident communication
       
    • Ukrainian or Russian — fluent
       
    • English — reading technical documentation (spoken English is a plus)

       

     

    We offer
     

    • Individual Entrepreneur (FOP) / contractor model
       
    • Official contract with Austria
       
    • Full-time workload
       
    • Real commercial AWS projects (not internal labs)
       
    • Participation in architecture design and pre-sales activities
       
    • Professional growth within Cloud & Solutions Architecture
       
    • International customers and vendors
       
    • Supportive, senior cloud team and mentorship
  • · 29 views · 6 applications · 5d

    Senior Data Engineer (Capacity and Forecasting Systems)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    We at Sigma Software are looking for a skilled Senior Data Engineer to join an exciting short-term project with a US-based customer. This is a remote position, offering you the flexibility to work from anywhere while contributing to a high-impact data platform.

    In this role, you will take ownership of designing and optimizing the data foundation for an automated storage capacity forecasting platform. You’ll work with modern technologies, collaborate with experienced engineers, and have the opportunity to influence both technical and process decisions.

    Why join us? At Sigma Software, you’ll work in a culture that values innovation, encourages knowledge sharing, and offers the chance to make a real impact on projects used by thousands of businesses worldwide.

    CUSTOMER
    Our customer is ConnectWise — a US-based software company providing business automation solutions for Managed Service Providers (MSPs). ConnectWise offers a suite of tools for IT service management, cybersecurity, remote monitoring, and business process automation. Their solutions are used globally by thousands of MSPs to streamline operations, improve service delivery, and enhance security for small and medium-sized businesses (SMBs).

    PROJECT
    The project focuses on building an automated storage capacity forecasting platform for MSPs. The platform will model historical infrastructure data, enable predictive insights, and support lifecycle planning for hardware and storage resources.
    It will integrate PostgreSQL, Python-based ETL pipelines, and PowerBI analytics to deliver accurate capacity forecasts and actionable reports for an 18-month planning horizon. The work environment encourages technical ownership, process improvement, and collaborative problem-solving with the customer’s engineering team.

    Project duration is 3-4 months
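    For a concrete sense of the data work involved, here is a minimal Python sketch of the kind of aggregation step such an ETL pipeline might contain; the JSON fields, column names, and numbers are hypothetical, not taken from the project:

      # Illustrative only: field names and structure are hypothetical, not from the project.
      import json
      import pandas as pd

      def aggregate_capacity(raw_records: list[str]) -> pd.DataFrame:
          """Flatten semi-structured capacity snapshots and aggregate by Region, Node Type, and month."""
          df = pd.json_normalize([json.loads(r) for r in raw_records])
          df["reported_at"] = pd.to_datetime(df["reported_at"])
          df["month"] = df["reported_at"].dt.to_period("M").dt.to_timestamp()
          return (
              df.groupby(["region", "node_type", "month"], as_index=False)
                .agg(used_gb=("disk.used_gb", "sum"), total_gb=("disk.total_gb", "sum"))
          )

      if __name__ == "__main__":
          sample = ['{"region": "EU", "node_type": "storage", "reported_at": "2024-05-01", '
                    '"disk": {"used_gb": 120, "total_gb": 500}}']
          print(aggregate_capacity(sample))

    An aggregated, time-dimensioned table of this shape is the sort of clean input that PowerBI models and forecasting queries can then build on.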

     

    RESPONSIBILITIES

    • Design and optimize PostgreSQL data models for historical capacity and lifecycle tracking
    • Build and maintain robust ETL pipelines using Python for structured and semi-structured (JSON) data
    • Aggregate and structure data by Region, Node Type, and time dimensions
    • Support time-series analysis and capacity forecasting use cases
    • Develop and enable PowerBI datasets, models, and reports based on clean, reliable data
    • Ensure data quality, performance, and scalability across the pipeline
    • Translate infrastructure and business requirements into scalable data solutions
    • Collaborate closely with software developers and stakeholders on end-to-end data workflows

       

    REQUIREMENTS

    • At least 5 years of experience as a Data Engineer or in a similar data-focused role
    • Strong proficiency in SQL and relational databases, preferably PostgreSQL
    • Solid experience with Python for data transformation and pipeline development
    • Hands-on experience working with JSON and semi-structured data formats
    • Proven track record of building and optimizing ETL processes
    • Practical experience with PowerBI, including dataset modeling and report creation
    • Experience working with time-series and historical datasets
    • Strong understanding of data modelling principles for analytics and forecasting
    • Upper-Intermediate level of English 

       

    WILL BE A PLUS:

    • Experience with Kibana or other BI/visualization tools
    • Familiarity with monitoring, infrastructure, or capacity planning data
    • Exposure to forecasting techniques or growth trend analysis
    • Experience integrating data from metrics and inventory systems
  • · 33 views · 7 applications · 5d

    Senior Data Engineer

    Full Remote · Worldwide · Product · 5 years of experience · English - B2

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered the largest Data Science Community in Eastern Europe, with a network of over 30,000 top AI engineers.

    About the client:
    We are working with a new-generation data service provider specializing in data consulting and data-driven digital marketing, dedicated to transforming data into business impact across the entire value chain of organizations. The company's data-driven services are built on the deep AI expertise it has acquired serving 1,000+ clients around the globe. The company has 1,000 employees across 20 offices focused on accelerating digital transformation.

    About the role:
    We are seeking a Senior Data Engineer (Azure) to design and maintain data pipelines and systems for analytics and AI-driven applications. You will work on building reliable ETL/ELT workflows and ensuring data integrity across the organization.

    Required skills:
    - 6+ years of experience as a Data Engineer, preferably in Azure environments.
    - Proficiency in Python, SQL, NoSQL, and Cypher for data manipulation and querying.
    - Hands-on experience with Airflow and Azure Data Services for pipeline orchestration.
    - Strong understanding of data modeling, ETL/ELT workflows, and data warehousing concepts.
    - Experience in implementing DataOps practices for pipeline automation and monitoring.
    - Knowledge of data governance, data security, and metadata management principles.
    - Ability to work collaboratively with data science and analytics teams.
    - Excellent problem-solving and communication skills.

    Responsibilities:
    - Transform data into formats suitable for analysis by developing and maintaining processes for data transformation, structuring, metadata management, and workload management.
    - Design, implement, and maintain scalable data pipelines on Azure.
    - Develop and optimize ETL/ELT processes for various data sources.
    - Collaborate with data scientists and analysts to ensure data readiness.
    - Monitor and improve data quality, performance, and governance.

  • · 35 views · 3 applications · 5d

    Senior Data Engineer

    Full Remote · Poland, Romania, Ukraine · 6 years of experience · English - B2

    Transcenda is a global provider of design and engineering services. We put people first and strive to be agents of change by building a better future through technology. We are dedicated to empowering organizations to rapidly scale, digitally transform, and bring new products to market.

    Recognized by Newsweek as one of America’s greatest workplaces of 2025, Transcenda is home to 200+ engineers, designers, analysts, and advisors solving complex business challenges through technology. By approaching our work through a variety of cultures and perspectives, we take calculated risks to design and develop innovative solutions that will have a positive impact tomorrow.

     

    Interesting Facts:

    • Over 200 team members
    • Fully remote — we let people work where they work best.
    • We work with clients who value our opinion and thought leadership, and with whom we can make a meaningful contribution to architectural, engineering, and product decisions.
    • We have a strong social agenda, promote diversity and inclusion, and participate in a variety of charity initiatives throughout the year.
    • We have fun team-building activities.
    • Since we are growing rapidly, you can grow and advance your career at a fairly quick pace.


    Must Haves:

    • Strong experience with Python, Java, or other programming languages
    • Advanced knowledge of SQL, including complex queries, query modularization, and optimization for performance and readability
    • Familiarity with the modern data stack and cloud-native data platforms, such as Snowflake, BigQuery, or Amazon Redshift
    • Hands-on experience with dbt (data build tool) for data modeling and transformations
    • Experience with data orchestration tools, such as Airflow or Dagster

    ‍

    Nice to Have:

    • Experience with GitOps, continuous delivery for data pipelines
    • Experience with Infrastructure-as-Code tooling (Terraform)

    ‍

    Key Responsibilities:

    • Design and build a data platform that standardizes data practices across multiple internal teams
    • Support the entire data lifecycle
    • Build and maintain integrations across data processing layers, including ingestion, orchestration, transformation, and consumption
    • Collaborate closely with cross-functional teams to understand data needs and ensure the platform delivers value
    • Document architectures, solutions, and integrations to promote best practices, maintainability, and usability
  • · 32 views · 2 applications · 5d

    Sr Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    You’ll take ownership of a large-scale AWS data platform powering analytics for thousands of hotels and restaurants worldwide. This is a hands-on role where your work directly impacts business decisions across the hospitality industry — not internal dashboards nobody uses.

    We’re looking for someone who doesn’t just build pipelines — but runs them, fixes them, and makes them bulletproof.

     

    About the Product

    A hospitality technology company operating a data analytics platform serving:

    • 2,500+ hotels
    • 500+ restaurants

    The system processes operational and performance data, delivering insights to product and analytics teams who rely on it daily.

     

    Your Mission

    Own and operate the AWS data infrastructure:

    • Build scalable, production-grade data pipelines
    • Ensure reliability, performance, and cost-efficiency
    • Keep everything running smoothly in real production environments

    This is not a “design slides and disappear” role — it’s real ownership of real data systems.

     

    What You’ll Be Doing

    Data Engineering & Pipelines

    • Build and operate Spark / PySpark workloads on EMR and Glue
    • Design end-to-end pipelines:
      API / DB / file ingestion → transformation → delivery to analytics consumers
    • Implement data validation, monitoring, and quality checks (see the sketch after this list)
    • Optimize pipelines for performance, cost, and scalability
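    A minimal sketch of what such a validation step can look like in PySpark; the dataset path, column names, and threshold below are hypothetical:

      # Illustrative PySpark data-quality check; path, columns, and threshold are hypothetical.
      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("quality-check-example").getOrCreate()
      df = spark.read.parquet("s3://example-bucket/bookings/")

      # Count rows that violate basic expectations: missing keys or negative amounts.
      bad_rows = df.filter(F.col("booking_id").isNull() | (F.col("amount") < 0)).count()
      total = df.count()

      if total == 0 or bad_rows / total > 0.01:  # fail the run if more than 1% of rows are bad
          raise ValueError(f"Data quality check failed: {bad_rows} of {total} rows invalid")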

     

    Infrastructure & Operations

    • Manage AWS infrastructure using Terraform
    • Monitor via CloudWatch
    • Debug production failures and implement preventive solutions
    • Maintain IAM and security best practices

     

    Collaboration

    • Work closely with product and analytics teams
    • Define clear data contracts
    • Deliver reliable datasets for BI and analytics use cases

     

    Must-Have Experience

    • 5+ years of hands-on data engineering in production
      (actual pipelines running in production, not only architecture work)
    • Strong Spark / PySpark
    • Advanced Python
    • Advanced SQL
    • AWS data stack: EMR, Glue, S3, Redshift (or similar), IAM, CloudWatch
    • Infrastructure as Code with Terraform
    • Experience debugging and stabilizing production data systems

     

    Nice to Have

    • Kafka or Kinesis (streaming)
    • Airflow or similar orchestration tools
    • Experience supporting BI tools and analytics teams

     

    What We Care About

    • You’ve handled pipeline failures in production — and learned from them
    • You prioritize data correctness, not just speed
    • You write maintainable, readable code
    • You understand AWS cost and scaling trade-offs
    • You avoid over-engineering — and ship what delivers value
  • · 60 views · 16 applications · 5d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B2

    We are seeking a skilled Data Engineer to join our team and contribute to the development of large-scale analytics platforms. The ideal candidate will have strong experience in cloud ecosystems such as Azure and AWS, as well as expertise in AI and machine learning applications. Knowledge of the healthcare industry and life sciences is a plus.

    Key Responsibilities

    • Design, develop, and maintain scalable data pipelines for large-scale analytics platforms.
    • Implement cloud-based solutions using Azure and AWS, ensuring reliability and performance.
    • Work closely with data scientists and AI/ML teams to optimize data workflows.
    • Ensure data quality, governance, and security across platforms.
    • Collaborate with cross-functional teams to integrate data solutions into business processes.

    Required Qualifications

    • Bachelor's degree (or higher) in Computer Science, Engineering, or a related field.
    • 3+ years of experience in data engineering, big data processing, and cloud-based architecture.
    • Strong proficiency in cloud services (Azure, AWS) and distributed computing frameworks.
    • Mandatory hands-on experience with Databricks (UC, DLTs, Delta Sharing, etc.)
    • Expertise in SQL and database management systems (SQL Server, MySQL, etc.).
    • Experience with data modeling, ETL processes, and data warehousing solutions.
    • Knowledge of AI and machine learning concepts and their data requirements.
    • Proficiency in Python, Scala, or similar programming languages.
    • Basic knowledge of C# and/or Java programming.
    • Familiarity with DevOps, CI/CD pipelines.
    • High-level proficiency in English (written and spoken).

    Preferred Qualifications

    • Experience in the healthcare or life sciences industry.
    • Understanding of regulatory compliance related to healthcare data (HIPAA, GDPR, etc.).
    • Familiarity with interoperability standards such as HL7, FHIR, and EDI.
  • · 79 views · 1 application · 8d

    Data Engineer

    Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

     

    🔥 We’re looking for a highly skilled Data Expert! 🔥

     

    Product | Remote

     

    We’re looking for a data expert who bridges technical depth with curiosity. You’ll help Redocly turn data into insight — driving smarter product, growth, and business decisions.

     

    This role combines data governance and development. You’ll build reliable data pipelines, improve observability, and uncover meaningful patterns that guide how we grow and evolve.

     

    You’ll work closely with product and technical teams to support data collection, processing, and consistency across systems.

     

    What you’ll do 

    • Analyze product and user behavior to uncover trends, bottlenecks, and opportunities.
    • Build and maintain data pipelines and ETL processes.
    • Design and optimize data models for new features and analytics (e.g., using dbt).
    • Work with event-driven architectures and standards like AsyncAPI and CloudEvents.
    • Collaborate with engineers to improve data quality, consistency, and governance across systems.
    • Use observability and tracing tools (e.g., OpenTelemetry) to monitor and improve performance.
    • Support existing frontend and backend systems related to analytics and data processing.
    • Build and maintain datasets for analytics and reporting.

     

    You’re a great fit if you have 

    • 5+ years of software engineering experience, with 3+ years focused on data engineering.
    • Strong SQL skills and experience with data modeling (dbt preferred).
    • Strong proficiency with Node.js, React, JavaScript, and TypeScript.
    • Proven experience in data governance and backend systems.
    • Familiarity with columnar databases or analytics engines (ClickHouse, Postgres, etc.).
    • Strong analytical mindset, attention to detail, and clear communication.
    • Passionate about clarity, simplicity, and quality in both data and code.
    • English proficiency: Upper-Intermediate or higher.

     

    How you’ll know you’re doing a great job

    • Data pipelines are trusted, observable, and performant.
    • Metrics and dashboards are used across teams — not just built once.
    • Teams make better product decisions, faster, because of your insights.
    • You’re the go-to person for clarity when questions arise about “what the data says.”

     

    About Redocly

    Redocly builds tools that accelerate API ubiquity. Our platform helps teams create world-class developer experiences — from API documentation and catalogs to internal developer hubs and public showcases. We're a globally distributed team that values clarity, autonomy, and craftsmanship. You'll work alongside people who love developer experience, storytelling, and building tools that make technical work simpler and more joyful.

    Headquarters: Austin, Texas, US. There is also an office in Lviv, Ukraine.

     

    Redocly is trusted by leading tech, fintech, telecom, and enterprise teams to power API documentation and developer portals. Redocly’s clients range from startups to Fortune 500 enterprises.

    https://redocly.com/

     

    Working with Redocly

    • Team: 4-6 people (middle to senior level)
    • Team's location: Ukraine & Europe
    • There are functional, product, and platform teams; each has its own ownership and line structure, and teams decide for themselves when to hold weekly meetings.
    • Cross-functional teams are formed for each two-month cycle, giving team members the opportunity to work across all parts of the product.
    • Methodology: Shape Up

     

    Perks

    • Competitive salary based on your expertise 
    • Full remote, though you’re welcome to come to the office occasionally if you wish.
    • Cooperation on a B2B basis with a US-based company (for EU citizens) or under a gig contract (for Ukraine).
    • After a year of working with the company, you can buy a certain number of the company's shares
    • Around 30 days of vacation (unlimited, but let's keep it reasonable)
    • 10 working days of sick leave per year
    • Public holidays according to the standards
    • No trackers and screen recorders
    • Working hours: EU/UA time zone, 8-hour working day. Most people start between 10 and 11 am
    • Equipment provided – MacBooks (M1–M4)
    • Regular performance reviews

     

    Hiring Stages

    • Prescreening (30-45 min)
    • HR Call (45 min)
    • Initial Interview (30 min)
    • Trial Day (paid)
    • Offer

     

    If you are an experienced data engineer and you want to work on impactful data-driven projects, we’d love to hear from you!


    Apply now to join our team!

  • · 50 views · 5 applications · 8d

    Lead Data Engineer

    Full Remote · Countries of Europe or Ukraine · 7 years of experience · English - B2

    We are seeking a highly skilled Lead Data Engineer to design, develop, and optimize our Data Warehouse solutions. The ideal candidate will have extensive experience in ETL/ELT development, data modeling, and big data technologies, ensuring efficient data processing and analytics. This role requires strong collaboration with Data Analysts, Data Scientists, and Business Stakeholders to drive data-driven decision-making.

     

    Does this relate to you?

    • 7+ years of experience in the Data Engineering field
    • At least 1 year of experience as a Lead/Architect
    • Strong expertise in SQL and data modeling concepts.
    • Hands-on experience with Airflow.
    • Experience working with Redshift.
    • Proficiency in Python for data processing.
    • Strong understanding of data governance, security, and compliance.
    • Experience in implementing CI/CD pipelines for data workflows.
    • Ability to work independently and collaboratively in an agile environment.
    • Excellent problem-solving and analytical skills.

       

    A new team member will be in charge of:

    • Design, develop, and maintain scalable data warehouse solutions.
    • Build and optimize ETL/ELT pipelines for efficient data integration.
    • Design and implement data models to support analytical and reporting needs.
    • Ensure data integrity, quality, and security across all pipelines.
    • Optimize data performance and scalability using best practices.
    • Work with big data technologies such as Redshift.
    • Collaborate with cross-functional teams to understand business requirements and translate them into data solutions.
    • Implement CI/CD pipelines for data workflows.
    • Monitor, troubleshoot, and improve data processes and system performance.
    • Stay updated with industry trends and emerging technologies in data engineering.

       

    Already looks interesting? Awesome! Check out the benefits prepared for you:

    • Regular performance reviews, including remuneration
    • Up to 25 paid days off per year for well-being
    • Flexible cooperation hours with work-from-home
    • Fully paid English classes with an in-house teacher
    • Perks on special occasions such as birthdays, marriage, childbirth
    • Referral program implying attractive bonuses
    • External & internal training and IT certifications
  • · 53 views · 0 applications · 8d

    Big Data Engineer to $8000

    Full Remote · Bulgaria, Poland, Romania · Product · 5 years of experience · English - B2

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

     

    About the Product:

    The product is an enterprise-grade digital experience platform that provides real-time visibility into system performance, application stability, and end-user experience across on-premises, virtual, and cloud environments. It ingests large volumes of telemetry from distributed agents on employee devices and infrastructure, processes and enriches data through streaming pipelines, detects anomalies, and stores analytical data for monitoring and reporting. The platform serves a global customer base with high throughput and strict requirements for security, correctness, and availability. Rapid adoption has driven significant year-over-year growth and demand from large, distributed teams seeking to secure and stabilize digital environments without added complexity.

     

    About the Role:

    This is a true Big Data engineering role focused on designing and building real-time data pipelines that operate at scale in production environments serving real customers. You will join a senior, cross-functional platform team responsible for the end-to-end data flow: ingestion, processing, enrichment, anomaly detection, and storage. You will own both architecture and delivery, collaborating with Product Managers to translate requirements into robust, scalable solutions and defining guardrails for data usage, cost control, and tenant isolation. The platform is evolving from distributed, product-specific flows to a centralized, multi-region, highly observable system designed for rapid growth, advanced analytics, and future AI-driven capabilities. Strong ownership, deep technical expertise, and a clean-code mindset are essential.
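    As a toy illustration of the ingest-and-enrich stage of such a pipeline (the topic name, fields, and threshold are hypothetical, and the platform itself relies on stream-processing frameworks such as Flink or Kafka Streams rather than a bare consumer loop):

      # Toy sketch: consume telemetry from Kafka and add a derived anomaly flag.
      # Topic, fields, and threshold are hypothetical, not the product's actual schema.
      import json
      from kafka import KafkaConsumer  # kafka-python client

      consumer = KafkaConsumer(
          "device-telemetry",                 # hypothetical topic
          bootstrap_servers="localhost:9092",
          value_deserializer=lambda v: json.loads(v.decode("utf-8")),
      )

      for message in consumer:
          event = message.value
          event["cpu_anomaly"] = event.get("cpu_percent", 0) > 95  # simple enrichment rule
          print(event)  # a real pipeline would publish to another topic or a data store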

     

    Key Responsibilities:

    • Design, build, and maintain high-throughput, low-latency data pipelines handling large volumes of telemetry.
    • Develop real-time streaming solutions using Kafka and modern stream-processing frameworks (Flink, Spark, Beam, etc.).
    • Contribute to the architecture and evolution of a large-scale, distributed, multi-region data platform.
    • Ensure data reliability, fault tolerance, observability, and performance in production environments.
    • Collaborate with Product Managers to define requirements and translate them into scalable, safe technical solutions.
    • Define and enforce guardrails for data usage, cost optimization, and tenant isolation within a shared platform.
    • Participate actively in system monitoring, troubleshooting incidents, and optimizing pipeline performance.
    • Own end-to-end delivery: design, implementation, testing, deployment, and monitoring of data platform components.

     

    Required Competence and Skills:

    • 5+ years of hands-on experience in Big Data or large-scale data engineering roles.
    • Strong programming skills in Java or Python, with a willingness to adopt Java and frameworks like Vert.x or Spring.
    • Proven track record of building and operating production-grade data pipelines at scale.
    • Solid knowledge of streaming technologies such as Kafka, Kafka Streams, Flink, Spark, or Apache Beam.
    • Experience with cloud platforms (AWS, Azure, or GCP) and designing distributed, multi-region systems.
    • Deep understanding of production concerns: availability, data loss prevention, latency, and observability.
    • Hands-on experience with data stores such as ClickHouse, PostgreSQL, MySQL, Redis, or equivalents.
    • Strong system design skills, able to reason about trade-offs, scalability challenges, and cost efficiency.
    • Clean code mindset, solid OOP principles, and familiarity with design patterns.
    • Experience with AI-first development tools (e.g., GitHub Copilot, Cursor) is a plus.

     

    Nice to Have:

    • Experience designing and operating globally distributed, multi-region data platforms.
    • Background in real-time analytics, enrichment, or anomaly detection pipelines.
    • Exposure to cost-aware data architectures and usage guardrails.
    • Experience in platform or infrastructure teams serving multiple products.

     

    Why Us:

    - We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

    - We provide full accounting and legal support in all countries where we operate.

    - We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

    - We offer a highly competitive package with yearly performance and compensation reviews.

  • · 22 views · 3 applications · 8d

    Senior Data Engineer

    Full Remote · EU · 3 years of experience · English - B2

    We are looking for an experienced Data Engineer to join a long-term B2C project. The main focus is on building Zero ETL pipelines, as well as maintaining and improving existing ones.

    Responsibilities:
    - Build and maintain scalable Zero ETL pipelines.
    - Design and optimize data warehouses and data lakes on AWS (Glue, Firehose, Lambda, SageMaker).
    - Work with structured and unstructured data, ensuring quality and accuracy.
    - Optimize query performance and data processing workflows (Spark, SQL, Python).
    - Collaborate with engineers, analysts, and business stakeholders to deliver data-driven solutions.

    Requirements:
    - 5+ years of experience in Data Engineering.
    - Advanced proficiency in Spark, Python, SQL.
    - Expertise with AWS Glue, Firehose, Lambda, SageMaker.
    - Experience with ETL tools (dbt, Airflow etc.).
    - Background in B2C companies is preferred.
    - JavaScript and Data Science knowledge are a plus.
    - Degree in Computer Science (preferred, not mandatory).

    We offer:
    - Remote job, B2B contract
    - 12 sick-leave days and 18 paid vacation (business) days per year
    - Comfortable work conditions (including a MacBook Pro and Dell monitor at each workplace)
    - Smart environment
    - Interesting projects from renowned clients
    - Flexible work schedule
    - Competitive salary according to the qualifications
    - Guaranteed full workload during the term of the contract
     

  • · 41 views · 4 applications · 8d

    Senior Data Engineer (Data Competency Center)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Are you a Senior Data Engineer passionate about building scalable, secure, and high-performance data solutions? Join our Data Engineering Center of Excellence at Sigma Software and work on diverse projects that challenge your skills and inspire innovation.

     

    At Sigma Software, we value expertise, continuous learning, and a supportive environment where your career path is shaped around your strengths. You’ll be part of a collaborative team, gain exposure to cutting-edge technologies, and work in an inclusive culture that fosters growth and innovation.

    Project

    Our Data Engineering Center of Excellence (CoE) is a specialized unit focused on designing, building, and optimizing data platforms, pipelines, and architectures. We work across diverse industries, leveraging modern data stacks to deliver scalable, secure, and cost-efficient solutions.

    Job Description

    • Collaborate with clients and internal teams to clarify technical requirements and expectations
    • Implement architectures using Azure or AWS cloud platforms
    • Design, develop, optimize, and maintain squad-specific data architectures and pipelines
    • Discover, analyze, and organize disparate data sources into clean, understandable data models
    • Evaluate new tools for analytical data engineering or data science
    • Suggest and contribute to training and improvement plans for analytical data engineering skills, standards, and processes

    Qualifications

    • 5+ years of experience with Python and SQL
    • Hands-on experience with AWS services (API Gateway, Kinesis, Athena, RDS, Aurora)
    • Proven experience building ETL pipelines for analytics/internal operations
    • Experience developing and integrating APIs
    • Solid understanding of Linux OS
    • Familiarity with distributed applications and DevOps tools
    • Strong troubleshooting/debugging skills
    • English level: Upper-Intermediate
    WILL BE A PLUS:

    • 2+ years with Hadoop, Spark, or Airflow
    • Experience with DAGs/orchestration tools
    • Experience with Snowflake-based data warehouses
    • Experience developing event-driven data pipelines

    PERSONAL PROFILE:

    • Passion for data processing and continuous learning
    • Strong problem-solving skills and analytical thinking
    • Ability to mentor and guide team members
    • Effective communication and collaboration skills
  • · 72 views · 3 applications · 8d

    Data Engineer — Research Data

    Full Remote · EU · 2 years of experience · English - B1

    Data Engineer — Research Data (E-sports)

     

    We are building a quantitative research platform from scratch in the e-sports domain and are looking for a Data Engineer to join the team at an early stage.

    This role focuses on designing and operating the research data layer that supports modeling, analysis, and feature development. You’ll ensure raw match data is transformed into reliable, reusable, research-ready inputs.

    This is a foundational role — early decisions around data modeling and feature design will shape how research is done long-term.
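    For a sense of what "research-ready features" can mean in practice, here is a minimal PySpark sketch; the match schema, metric, and storage paths below are hypothetical:

      # Illustrative feature computation over raw match events; schema and paths are hypothetical.
      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.appName("feature-example").getOrCreate()
      events = spark.read.parquet("s3://example-bucket/raw-match-events/")

      # One row per (match_id, team_id) with a simple aggregate feature: kills per minute.
      features = (
          events.groupBy("match_id", "team_id")
                .agg(F.sum("kills").alias("kills"),
                     (F.max("game_time_s") / 60.0).alias("duration_min"))
                .withColumn("kills_per_min", F.col("kills") / F.col("duration_min"))
      )

      features.write.mode("overwrite").parquet("s3://example-bucket/features/team_match/")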

     

    What you’ll do:

    • Build and maintain research data pipelines for historical and live match data
    • Clean, normalize, and structure raw data into research-ready features
    • Collaborate closely with researchers on feature design and iteration
    • Implement feature computation using Spark and SQL
    • Ensure data jobs run reliably, are monitored, and recover correctly
    • Own backup and recovery processes
    • Maintain scalable, well-structured datasets for downstream use
    • Support limited data collection via APIs and web sources

     

    What we’re looking for:

    • 2+ years of experience as a Data Engineer or in a similar role
    • Strong Python skills
    • Hands-on experience with Spark and SQL
    • Experience with cloud-native data infrastructure
    • Docker experience
    • Proven experience owning production data pipelines and feature workflows
    • Comfortable making architectural decisions in greenfield systems
    • Interest or experience with e-sports, game telemetry, or match-level data
  • · 29 views · 0 applications · 8d

    Senior Data Engineer

    Ukraine · Product · 4 years of experience · English - B2

    Your future responsibilities:

    • Collaborate with data and analytics experts to strive for greater functionality in our data systems
    • Design, use and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (DevOps & Continuous Integration)
    • Drive the advancement of data infrastructure by designing and implementing the underlying logic and structure for how data is set up, cleansed, and ultimately stored for organizational usage
    • Assemble large, complex data sets that meet functional / non-functional business requirements
    • Build data integration from various sources and technologies to the data lake infrastructure as part of an agile delivery team
    • Monitor the capabilities and react to unplanned interruptions, ensuring that environments are provisioned and loaded on time

    Your skills and experience:

    • Minimum of 5 years of experience in a dedicated data engineer role
    • Experience working with large structured and unstructured data in various formats
    • Knowledge or experience with streaming data frameworks and distributed data architectures (e.g. Spark Structured Streaming, Apache Beam or Apache Flink)
    • Experience with cloud technologies (preferable AWS, Azure)
    • Experience with cloud services (Dataflow, Dataproc, BigQuery, Pub/Sub)
    • Experience of practical operation of Big Data stack: Hadoop, HDFS, Hive, Presto, Kafka
    • Experience of Python in the context of creating ETL data pipelines
    • Experience with Data Lake / Data Warehouse solutions (AWS S3 / MinIO)
    • Experience with Apache Airflow
    • Development skills in a Docker / Kubernetes environment
    • Open and team-minded personality and communication skills
    • Willingness to work in an agile environment

    We offer what matters most to you:

    • Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
    • Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile. Corporate library and English lessons.
    • Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career opportunities: we encourage advancement within the bank across functions
    • Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Sql-Oracle, PgSql, MsSql, Sybase. Data management: Kafka, AirFlow, Spark, Flink
    • Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bank’s veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • Our main value is people, and we support and recognize them, educate them and involve them in changes. Join Raif’s team because for us YOU matter!
    • One of the largest lenders to the economy and agricultural business among private banks
    • Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, СМІЛИВІ)
    • One of the largest IT product teams among the country’s banks.
    • One of the largest taxpayers in Ukraine; 6.6 billion UAH were paid in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? — Follow us on social media:

    Facebook, Instagram, LinkedIn

    About Raiffeisen Bank

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For more than 30 years we have been creating and building the banking system of our country.

    Raif employs more than 5,500 people, including one of the largest product IT teams, numbering over 800 specialists. Every day we work side by side so that more than 2.7 million of our clients can receive quality service, use the bank's products and services, and grow their businesses, because we are #Разом_з_Україною.


  • · 43 views · 1 application · 9d

    Data Engineer (Relocation to Spain)

    Office Work · Spain · Product · 3 years of experience · English - None

    Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange?
    We are looking for a Data Engineer with ETL/ELT for the Spanish office of the most famous Ukrainian company.

    Working with big data, strong team, assistance with family relocation, TOP conditions.

     

    Main Responsibilities

    — Design, build, and maintain scalable and resilient data pipelines (batch and real-time)
    — Develop and support data lake/data warehouse architectures
    — Integrate internal and external data sources/APIs into unified data systems
    — Ensure data quality, observability, and monitoring of pipelines
    — Collaborate with backend and DevOps engineers on infrastructure and deployment
    — Optimize query performance and data processing latency across systems
    — Maintain documentation and contribute to internal data engineering standards
    — Implement data access layers and provide well-structured data for downstream teams

     

    Mandatory Requirements

    — 3+ years of experience as a Data Engineer in high-load or data-driven environments
    — Proficient in Python for data processing and automation (pandas, pyarrow, sqlalchemy, etc.)
    — Advanced knowledge of SQL: query optimization, indexes, partitions, materialized views
    — Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Prefect)
    — Experience with streaming technologies (e.g., Kafka, Flink, Spark Streaming)
    — Solid background in data warehouse solutions: ClickHouse, BigQuery, Redshift, or Snowflake
    — Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code principles
    — Experience with containerization and deployment tools (e.g., Docker, Kubernetes, CI/CD)
    — Understanding of data modeling, data versioning, and schema evolution (e.g., dbt, Avro, Parquet)
    — English — at least intermediate (for documentation & communication with tech teams)

     

    We offer

    Immerse yourself in Crypto & Web3:
    — Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    — Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    — Gain unique experience and be part of global transformations.
    Drive Innovations:
    — Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    — Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    — Modern equipment.
    — Comfortable working conditions and an inspiring environment to help you thrive.
    — 30 calendar days of paid leave.
    — Additional days off for national holidays.

     

    With us, you’ll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you’re ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
     
