Jobs Data Engineer

  • 59 views · 11 applications · 2d

    Service engineer for web scraping product to $1200

    Full Remote · Countries of Europe or Ukraine · Product · 0.5 years of experience · English - B1

    Position Type: Full-time, Remote

     

    This role involves the support and creation of web scraping configurations that run on our engine and are based on regular expressions. You will work with server logs, HTTP requests, regexp tools, spreadsheets, and large structured and unstructured files. The role combines technical duties with project coordination duties that will be added in the future. See more: https://www.webspidermount.com/jobmarketpulse/
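    For illustration only, a minimal sketch of the kind of regular-expression-based extraction this role revolves around (the URL, field name, and pattern below are hypothetical; real configurations are written for the company's own engine rather than as standalone scripts):

        import re
        import urllib.request

        # Hypothetical target page and pattern; a real configuration would target
        # a client site and be expressed in the engine's own configuration format.
        URL = "https://example.com/listings"
        TITLE_PATTERN = re.compile(r'<h2 class="title">\s*(.*?)\s*</h2>', re.S)

        with urllib.request.urlopen(URL) as resp:
            html = resp.read().decode("utf-8", errors="replace")

        # Print every captured title, one per line
        for title in TITLE_PATTERN.findall(html):
            print(title.strip())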

     

    Skills to succeed in this role are:

     

    1. HARD SKILLS

    • General knowledge of web technologies such as HTML, JavaScript, FTP, SFTP, XML, JSON, CSV, and APIs.
    • Regular expressions.
    • Experience with web scraping tools and frameworks such as Scrapy, BeautifulSoup, Selenium, etc.
    • Proficiency in Excel/Google Spreadsheets.
    • Optional: XSLT templates, Velocity templates.

     

    2. SOFT SKILLS:

    • Proactivity, responsibility, and attention to detail.
    • Ability to establish your own work process.
    • Experience in team coordination.
    • Ability not only to follow instructions but also to create them for yourself.
    • English language: Intermediate (able to read and write without a dictionary).

     

    3. DUTIES & RESPONSIBILITIES:

    • Setting up new crawling robots using our in-house, regular-expression-based system.
    • Monitoring and improving existing crawling robots.
    • Making sure data is collected and delivered correctly.
    • Communicating with clients via tickets.
    • Communicating with the team.
    • Solving non-routine work issues.

     

    4. WORK SCHEDULE

    • Work remotely from home
    • Full time
    • Target work schedule is 10:00-19:00 (GMT+2), flexible schedule allowed

     

    WHAT WE OFFER

    A competitive compensation package includes:

    - wage and periodic reviews

    - yearly bonus

    - health support bonus

    - performance-based growth opportunities

    - paid vacation and sick leaves

    - hardware

     

    Online and offline tests will be provided after the first interview.

    Please apply with a CV in English.

    Salary: $800–1200 gross per month, depending on skills.

  • 25 views · 3 applications · 3d

    Senior Data Engineer (Batch and Streaming)

    Spain, Poland, Portugal, Ukraine · 7 years of experience · English - B2

    Quantum is a global technology partner delivering high-end software products that address real-world problems. 

     

    We advance emerging technologies for outside-the-box solutions. We focus on Machine Learning, Computer Vision, Deep Learning, GIS, MLOps, Blockchain, and more.

     

    Here at Quantum, we are dedicated to creating state-of-the-art solutions that effectively address the pressing issues faced by businesses and the world. To date, our team of exceptional people has already helped many organizations globally attain technological leadership.

    We constantly discover new ways to solve never-ending business challenges by adopting new technologies, even when there isn't yet a best practice. If you share our passion for problem-solving and making an impact, join us and enjoy getting to know our wealth of experience!

     

    About the position

    Quantum is expanding the team and has brilliant opportunities for a proactive Senior Data Engineer who can design, implement, and evolve scalable data systems in AWS. We are building a greenfield analytics platform supporting both batch and real-time data processing. This role is a strategic blend of hands-on development, high-level architectural decision-making, and end-to-end platform ownership.
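    As a rough, non-authoritative illustration of the kind of pipeline described here, a minimal PySpark Structured Streaming job that reads events from Kafka and appends them to a Delta table on S3 might look as follows (the topic, schema, and paths are hypothetical; Kafka, Delta Lake, and S3 are taken from the requirements listed below):

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import from_json, col
        from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

        spark = SparkSession.builder.appName("events-stream").getOrCreate()

        # Hypothetical event schema
        schema = StructType([
            StructField("event_id", StringType()),
            StructField("ticker", StringType()),
            StructField("price", DoubleType()),
            StructField("event_time", TimestampType()),
        ])

        # Read raw events from Kafka and parse the JSON payload
        events = (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
            .option("subscribe", "market-events")                # placeholder topic
            .load()
            .select(from_json(col("value").cast("string"), schema).alias("e"))
            .select("e.*")
        )

        # Append to a partitioned Delta table in the S3 data lake
        query = (
            events.writeStream.format("delta")
            .option("checkpointLocation", "s3://my-lake/checkpoints/market-events")  # placeholder
            .partitionBy("ticker")
            .outputMode("append")
            .start("s3://my-lake/bronze/market-events")          # placeholder path
        )
        query.awaitTermination()

    A batch counterpart would read the same Delta tables with spark.read and publish curated marts on a schedule.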

     

    The client is a technological research company that utilizes proprietary AI-based analysis and language models to provide comprehensive insights into global stocks in all languages. Our mission is to bridge the knowledge gap in the investment world and empower investors of all types to become "super-investors."

    Through our generative AI technology implemented into brokerage platforms and other financial institutions' infrastructures, we offer instant fundamental analyses of global stocks alongside bespoke investment strategies, enabling informed investment decisions for millions of investors worldwide.

     

    Must have skills:

    • 5+ years of experience in Data Engineering
    • Strong hands-on experience with Apache Spark (including Structured Streaming)
    • Experience building both batch and streaming pipelines in production environments
    • Proven experience designing AWS-based data lake architectures (S3, EMR, Glue, Athena)
    • Experience with event streaming platforms such as Apache Kafka or Amazon Kinesis
    • Experience implementing lakehouse formats such as Delta Lake
    • Strong understanding of partitioning strategies and schema evolution
    • Experience using SparkUI and AWS CloudWatch for profiling and optimization
    • Strong understanding of Spark performance tuning (shuffle, skew, memory, partitioning)
    • Proven track record of cost optimization in AWS environments
    • Experience with Docker and CI/CD pipelines
    • Experience with Infrastructure as Code (Terraform, AWS CDK, or similar)
    • Familiarity with monitoring and observability practices
    • Upper-Intermediate or higher English proficiency (spoken and written)

     

    Would be a plus:

    • Experience in the financial domain
    • Experience running Spark workloads on Kubernetes
    • Exposure to or interest in Large Language Models (LLMs) and AI integration
    • Experience implementing data quality frameworks or metadata/lineage systems

     

    Your tasks will include:

    • Design and implement batch and streaming data pipelines using Apache Spark
    • Build and evolve a scalable AWS-based data lake architecture
    • Develop and maintain real-time data processing systems (event-driven pipelines)
    • Own performance tuning and cost optimization of Spark workloads
    • Define best practices for data modeling, partitioning, and schema evolution
    • Implement monitoring, observability, and data quality controls
    • Contribute to infrastructure automation and CI/CD for data workflows
    • Participate in architectural decisions and mentor other engineers

       

    We offer:

    • Delivering high-end software projects that address real-world problems
    • Surrounding experts who are ready to move forward professionally
    • Professional growth plan and team leader support
    • Taking ownership of R&D and socially significant projects
    • Participation in worldwide tech conferences and competitions
    • Taking part in regular educational activities
    • Being a part of a multicultural company with a fun and lighthearted atmosphere
    • Working from anywhere with flexible working hours
    • Paid vacation and sick leave days

       

    Join Quantum and take a step toward your data-driven future.

  • 23 views · 2 applications · 3d

    Senior Data Engineer (IRC289061)

    Full Remote · Ukraine, Poland, Croatia, Romania, Slovakia · 5 years of experience · English - B2

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

     

    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

     

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data (a minimal sketch of this pattern follows this list)
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
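    For illustration, a minimal sketch of a config-driven transformation step in PySpark (the config keys, storage path, and table name are hypothetical; the actual project builds on Azure Databricks and Delta Live Tables, which are not shown here):

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("config-driven-etl").getOrCreate()

        # Hypothetical pipeline config; in practice this would come from a JSON/YAML
        # file or a control table rather than being hard-coded.
        config = {
            "source_path": "abfss://raw@lake.dfs.core.windows.net/claims/",   # placeholder
            "target_table": "silver.claims",                                  # placeholder
            "dedupe_keys": ["claim_id"],
            "columns": {"claim_id": "string", "amount": "double", "submitted_at": "timestamp"},
        }

        df = spark.read.format("delta").load(config["source_path"])

        # Cast columns according to the config and drop duplicates on the configured keys
        for name, dtype in config["columns"].items():
            df = df.withColumn(name, F.col(name).cast(dtype))
        df = df.select(*config["columns"]).dropDuplicates(config["dedupe_keys"])

        df.write.format("delta").mode("overwrite").saveAsTable(config["target_table"])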
  • 32 views · 5 applications · 3d

    Senior Data Engineer (US-Based Product, Real-Time Data Platform)

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

    About the Product

    We are building a US-based, data-driven product with a strong focus on scalability, performance, and cost efficiency.

    Our mission is to design a modern data platform that transforms raw behavioral and monetization data into reliable, actionable business insights in near real time.

     

    For us, data engineering is not just about moving data.
    It's about:

    • Designing resilient architecture
    • Optimizing for performance and cost
    • Building reliable automation
    • Ensuring architectural integrity at scale

    Role Overview

    We are looking for a Senior Data Engineer who will take ownership of the data platform architecture and drive technical excellence across ingestion, modeling, and performance optimization.

    This role requires deep expertise in SQL, Python, AWS infrastructure, and modern data stack principles. You will not only build pipelines: you will define standards, lead architectural decisions, and proactively improve system efficiency.

    You will play a critical role in ensuring that data flows seamlessly from event streams to business-ready datasets while maintaining high performance, reliability, and cost control.

    What Makes This Role Senior-Level

     

    As a Senior Data Engineer, you will:

    • Own architectural decisions for the data platform
    • Identify scalability bottlenecks before they become incidents
    • Optimize data infrastructure for performance and cost
    • Lead technical code reviews and set engineering standards
    • Mentor mid-level engineers
    • Act as a technical partner to Product and Analytics stakeholders
    • Balance real-time and batch processing strategies

    Technical Requirements

    Must-Have

    Expert-Level SQL

    • Complex analytical queries and window functions
    • Query optimization and execution plan analysis
    • Identifying and eliminating performance bottlenecks
    • Reducing query complexity and compute costs
    • Designing partitioning and clustering strategies

    Python

    • Advanced data manipulation
    • Building scalable ETL/ELT frameworks
    • Writing production-grade data services
    • Automation and monitoring scripts

    AWS Core Infrastructure

    • AWS Kinesis Firehose (near-real-time data streaming)
    • Amazon S3 (data lake architecture and storage optimization)
    • Designing reliable ingestion layers

    Version Control

    • Git (GitHub / GitLab)
    • Branching strategies
    • Leading technical code reviews
    • Enforcing best practices in code quality

     

    Nice-to-Have

    Modern Data Stack

    • dbt (modular SQL modeling, documentation, testing)
    • Experience structuring layered data models (staging → intermediate → marts)

    Data Warehousing

    • Google BigQuery
    • Slot management
    • Cost-efficient querying
    • Storage and compute optimization

    Advanced Optimization Techniques

    • Partitioning
    • Clustering
    • Bucketing
    • Storage layout optimization

    Integrations & Infrastructure

    • Salesforce data integration
    • Docker / ECS
    • CI/CD for data workflows

    AI / ML Exposure

    • Supporting feature pipelines
    • Understanding data requirements for ML systems

     

    Key Responsibilities

    Data Platform Architecture

    • Design and maintain a scalable real-time and batch data platform
    • Architect ingestion pipelines using AWS Kinesis and Python
    • Ensure high availability and reliability of data flows

    Real-Time Processing

    • Enable near-real-time (seconds-to-minutes latency) data processing
    • Build systems for operational alerting and anomaly detection
    • Ensure early detection of monetization and traffic issues

    Data Modeling

    • Transform raw event data into business-ready datasets using dbt
    • Design scalable, maintainable schemas aligned with product evolution

    Performance & Cost Engineering

    • Optimize SQL queries and storage structures
    • Design cost-efficient partitioning strategies
    • Monitor and reduce warehouse and infrastructure costs
    • Balance real-time and batch processing appropriately

    Engineering Excellence

    • Lead and participate in code reviews
    • Enforce high standards of performance, security, and maintainability
    • Improve observability and monitoring across pipelines

    Cross-Functional Collaboration

    • Work closely with Data Analysts and Product Managers
    • Translate business requirements into scalable technical solutions
    • Clearly communicate trade-offs between speed, cost, and complexity

    Type of Data We Process

    • User behavior events (page views, clicks, searches, conversions)
    • Ad & monetization events (impressions, clicks, CTR, attribution)
    • System and integration logs (latency, errors, rate limits)

    Why Real-Time Is Critical

    • Detect broken ads or impression drops before revenue is lost
    • Identify traffic anomalies or abuse early
    • Enable same-day operational intervention
    • Prevent negative user and advertiser experience

    Near-real-time (seconds to minutes latency) is required for operational awareness.
    Batch processing remains important for historical analysis and reporting, but not for incident detection.
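    As a toy illustration of the operational-alerting idea above, a sliding-window check on impression counts might look like the sketch below (the thresholds, window size, and in-memory event list are hypothetical; in production this logic would live in the streaming pipeline):

        from datetime import datetime, timedelta

        WINDOW = timedelta(minutes=5)
        DROP_THRESHOLD = 0.5  # alert if impressions fall below 50% of the previous window

        def check_impression_drop(events, now):
            """events: iterable of (timestamp, count) pairs for ad impressions."""
            current = previous = 0
            for ts, count in events:
                if now - WINDOW <= ts <= now:
                    current += count
                elif now - 2 * WINDOW <= ts < now - WINDOW:
                    previous += count
            if previous > 0 and current < previous * DROP_THRESHOLD:
                return f"ALERT: impressions dropped from {previous} to {current} over the last {WINDOW}"
            return None

        # Hypothetical usage with synthetic counts: a healthy window followed by a drop
        now = datetime.utcnow()
        events = [(now - timedelta(minutes=m), 100 if m > 5 else 30) for m in range(10)]
        print(check_impression_drop(events, now))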

     

    Working Schedule

    • Monday to Friday
    • 16:00–00:00 Kyiv time
    • Full alignment with US-based stakeholders

     

    What We Value

    • Strong ownership mindset
    • Strategic thinking about architecture
    • Focus on scalability, reliability, and cost efficiency
    • Proactive problem-solving
    • Clear communication with both technical and non-technical teams
    • Ability to think beyond "just making it work"
  • 47 views · 1 application · 3d

    Senior Data Engineer

    Ukraine · Product · 4 years of experience · English - B2

    Your future responsibilities:

    • Collaborate with data and analytics experts to strive for greater functionality in our data systems
    • Design, use and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (DevOps & Continuous Integration)
    • Drive the advancement of data infrastructure by designing and implementing the underlying logic and structure for how data is set up, cleansed, and ultimately stored for organizational usage
    • Assemble large, complex data sets that meet functional / non-functional business requirements
    • Build data integration from various sources and technologies to the data lake infrastructure as part of an agile delivery team
    • Monitor capabilities and react to unplanned interruptions, ensuring that environments are provided and loaded on time

    Your skills and experience:

    • Minimum of 5 years of experience in a dedicated data engineer role
    • Experience working with large structured and unstructured data in various formats
    • Knowledge of or experience with streaming data frameworks and distributed data architectures (e.g. Spark Structured Streaming, Apache Beam, or Apache Flink)
    • Experience with cloud technologies (preferably AWS, Azure)
    • Experience with cloud services (Data Flow, Data Proc, BigQuery, Pub/Sub)
    • Practical experience operating the Big Data stack: Hadoop, HDFS, Hive, Presto, Kafka
    • Experience with Python in the context of creating ETL data pipelines
    • Experience with Data Lake / Data Warehouse solutions (AWS S3 // Minio)
    • Experience with Apache Airflow
    • Development skills in a Docker / Kubernetes environment
    • An open, team-minded personality and good communication skills
    • Willingness to work in an agile environment

    We offer what matters most to you:

    • Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
    • Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, and Agile; a corporate library and English lessons
    • Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career opportunities: we encourage advancement within the bank across functions
    • Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MS SQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and are developing the Bank's veterans community. We work on increasing awareness among leaders and teams about veterans' return to civilian life. Raiffeisen Bank has been recognized by Forbes as one of the best employers for veterans

    Why Raiffeisen Bank?

    • Our main value is people, and we support and recognize them, educate them, and involve them in changes. Join Raif's team, because for us YOU matter!
    • One of the largest lenders to the economy and agricultural business among private banks
    • Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, СМІЛИВІ)
    • One of the largest IT product teams among the country's banks
    • One of the largest taxpayers in Ukraine: 6.6 billion UAH paid in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? Follow us on social media:

    Facebook, Instagram, LinkedIn

    ___________________________________________________________________________________________

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For more than 30 years we have been creating and building our country's banking system.

    Raif employs more than 5,500 people, including one of the largest product IT teams, with over 800 specialists. Every day we work side by side so that more than 2.7 million of our clients can receive quality service, use the bank's products and services, and grow their businesses, because we are #Together_with_Ukraine.


  • 903 views · 74 applications · 3d

    Data Engineer

    Countries of Europe or Ukraine · 2 years of experience · English - B1

    Looking for a Data Engineer to join the Dataforest team. If you are looking for a friendly team, a healthy working environment, and a flexible schedule, you have found the right place to send your CV.

     

    Skills requirements:
    • 2+ years of experience with Python;
    • 2+ years of experience as a Data Engineer;
    • Experience with Pandas;
    • Experience with SQL DB / NoSQL (Redis, Mongo, Elasticsearch) / BigQuery;
    • Familiarity with Amazon Web Services;
    • Knowledge of data algorithms and data structures is a MUST;
    • Experience working with high-volume tables (10M+).


    Optional skills (as a plus):
    • Experience with Spark (PySpark);
    • Experience with Airflow;
    • Experience with Kafka;
    • Experience in statistics;
    • Knowledge of DS and Machine Learning algorithms.

     

    Key responsibilities:
    • Create ETL pipelines and data management solutions (API, integration logic);
    • Implement various data processing algorithms;
    • Involvement in the creation of forecasting, recommendation, and classification models.

     

    We offer:

    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 102 views · 13 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Dataforest is looking for a Senior Data Engineer to join our team and work on the Dropship project, a cutting-edge data intelligence platform for e-commerce analytics.
    You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, processing, analysis, and integrations.

    If you are passionate about data optimization, system performance, and architecture, we're waiting for your CV!

    Requirements:
    • 4+ years of commercial experience with Python;
    • Advanced experience with SQL DBs (optimization, monitoring, etc.);
    • PostgreSQL (must have);
    • Solid understanding of ETL principles (architecture, monitoring, alerting, finding and resolving bottlenecks);
    • Experience with message brokers: Kafka / Redis;
    • Experience with Pandas;
    • Familiarity with AWS infrastructure (boto3, S3 buckets, etc.);
    • Experience working with large volumes of data;
    • Understanding of the principles of medallion architecture.

    Will Be a Plus:
    • Understanding of NoSQL DBs (Elastic);
    • TimescaleDB;
    • PySpark;
    • Experience with e-commerce or fintech.
     

    Key Responsibilities:

    • Develop and maintain a robust and scalable data processing architecture using Python.
    • Design, optimize, and monitor data pipelines using Kafka and AWS SQS (a minimal ingestion sketch follows this list).
    • Implement and optimize ETL processes for various data sources.
    • Manage and optimize SQL and NoSQL databases (PostgreSQL, TimescaleDB, Elasticsearch).
    • Work with AWS infrastructure to ensure reliability, scalability, and cost efficiency.
    • Proactively identify bottlenecks and suggest technical improvements.
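    For illustration only, a stripped-down version of the SQS-to-S3 ingestion step mentioned in the responsibilities above, using boto3 (the queue URL, bucket, and key layout are hypothetical):

        import json
        import uuid
        import boto3

        sqs = boto3.client("sqs")
        s3 = boto3.client("s3")

        QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/events"  # placeholder
        BUCKET = "dropship-raw-events"                                            # placeholder

        def drain_once():
            """Pull one batch of messages from SQS, land them in S3, then delete them."""
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
            for msg in resp.get("Messages", []):
                body = json.loads(msg["Body"])
                key = f"raw/{body.get('event_type', 'unknown')}/{uuid.uuid4()}.json"
                s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

        if __name__ == "__main__":
            drain_once()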

     

     We offer:

    • Working in a fast-growing company;
    • Great networking opportunities with international clients, challenging tasks;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.


     

  • 45 views · 4 applications · 3d

    Data Engineer

    Full Remote · EU · 3 years of experience · English - B2

    We are looking for you!
     

    As we continue to design and build data-driven solutions across diverse domains, we're seeking a Data Engineer who thrives on transforming data into impactful insights. If you're passionate about crafting robust architectures, optimizing data pipelines, and enabling intelligent decision-making at scale, we'd love to have you join our global team and shape the next generation of data excellence with us.


    Contract type: Gig contract.


    Skills and experience you can bring to this role
     

    Qualifications & experience:
     

    • 3+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery of Python and its data stack (NumPy, Pandas, scikit-learn);
    • Good understanding of SQL/RDBMS and familiarity with data warehouses (BigQuery, Snowflake, Redshift, etc.);
    • Experience building ETL data pipelines (Airflow, Prefect, Dagster, etc.);
    • Experience with a modern Scrum-based Software Development Life Cycle (SDLC);
    • Strong communication skills to explain technical insights to non-technical stakeholders.

     

    Nice to have:

     

    • Hands-on experience with Python web stack (Fast API / Flask);
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understand marketing and media metrics (e.g., what a conversion rate is and how it is calculated).
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.
       

    Educational requirements:

    • Bachelor's degree in Computer Science, Information Systems, or a related discipline is preferred.

     

    What impact you'll make
     

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more (a minimal Snowpark sketch follows this list);
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; and
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.
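    A minimal, non-authoritative Snowpark sketch of the kind of pipeline step referenced above (connection parameters, table names, and columns are hypothetical):

        from snowflake.snowpark import Session
        from snowflake.snowpark.functions import col, sum as sum_

        # Placeholder credentials; in practice these would come from a secrets manager.
        connection_parameters = {
            "account": "<account>", "user": "<user>", "password": "<password>",
            "warehouse": "<wh>", "database": "<db>", "schema": "<schema>",
        }
        session = Session.builder.configs(connection_parameters).create()

        # Hypothetical aggregation: daily spend per channel for budget optimization
        daily_spend = (
            session.table("RAW.AD_EVENTS")
            .filter(col("EVENT_TYPE") == "impression")
            .group_by("EVENT_DATE", "CHANNEL")
            .agg(sum_("COST").alias("TOTAL_COST"))
        )
        daily_spend.write.save_as_table("ANALYTICS.DAILY_CHANNEL_SPEND", mode="overwrite")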

     

    What you'll get
     

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • 35 views · 1 application · 4d

    Senior Data Engineer

    Full Remote · Poland, Spain, Portugal, Germany, Bulgaria · 5 years of experience · English - B2

    We are seeking a Senior Data Engineer to deliver data-driven solutions that optimize fleet utilization and operational efficiency across 46,000+ assets in 545+ locations. You will enable decision-making through demand forecasting, asset cascading, contract analysis, and risk detection, partnering with engineering and business stakeholders to take models from concept to production on AWS. 

     

    Requirements 

    • 5+ years of experience in data engineering 
    • 3+ years of hands-on experience building and supporting production ETL/ELT pipelines 
    • Advanced SQL skills (CTEs, window functions, performance optimization) 
    • Strong Python skills (pandas, API integrations) 
    • Proven experience with Snowflake (schema design, Snowpipe, Streams, Tasks, performance tuning, data quality) 
    • Solid knowledge of AWS services: S3, Lambda, EventBridge, IAM, CloudWatch, Step Functions 
    • Strong understanding of dimensional data modeling (Kimball methodology, SCDs) 
    • Experience working with enterprise systems (ERP, CRM, or similar) 

     

    Nice-to-haves 

    • Experience with data quality frameworks (Great Expectations, Deequ) 
    • Knowledge of CDC tools and concepts (AWS DMS, Kafka, Debezium) 
    • Hands-on experience with data lake technologies (Apache Iceberg, Parquet) 
    • Exposure to ML data pipelines and feature stores (SageMaker Feature Store) 
    • Experience with document processing tools such as Amazon Textract 

     

    Core Responsibilities 

    • Design and develop ETL/ELT pipelines using Snowflake, Snowpipe, internal systems, Salesforce, SharePoint, and DocuSign 
    • Build and maintain dimensional data models in Snowflake using dbt, including data quality checks (Great Expectations, Deequ) 
    • Implement CDC patterns for near real-time data synchronization 
    • Manage and evolve the data platform across S3 Data Lake (Apache Iceberg) and Snowflake data warehouse 
    • Build and maintain Medallion architecture data lake in Snowflake 
    • Prepare ML features using SageMaker Feature Store 
    • Develop analytical dashboards and reports in Power BI 

     

    What we offer   

    • Continuous learning and career growth opportunities 
    • Professional training and English/Spanish language classes   
    • Comprehensive medical insurance 
    • Mental health support 
    • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more 
    • Flexible working hours 
    • Inclusive and supportive culture 
       

    Language proficiency requirements

    English: B1 (Intermediate)

    Ukrainian: Native speaker
     

    About Trinetix

    Established in 2011, Trinetix is a dynamic tech service provider supporting enterprise clients around the world. 

    Headquartered in Nashville, Tennessee, we have a global team of over 1,000 professionals and delivery centers across Europe, the United States, and Argentina. We partner with leading global brands, delivering innovative digital solutions across Fintech, Professional Services, Logistics, Healthcare, and Agriculture. 

    Our operations are driven by a strong business vision, a people-first culture, and a commitment to responsible growth. We actively give back to the community through various CSR activities and adhere to international principles for sustainable development and business ethics. 

     

    To learn more about how we collect, process, and store your personal data, please review our Privacy Notice: https://www.trinetix.com/corporate-policies/privacy-notice 

  • 29 views · 2 applications · 4d

    Senior Data Engineer (Enterprise and Game Solution Unit)

    Full Remote · Ukraine · 5 years of experience · English - B2

    Company Description

    Are you an experienced Data Engineer ready to tackle complex, high-load, and data-intensive systems? We are looking for a Senior professional to join our team in Ukraine, Europe, working full-time on a project that will make a real impact in the public sector.

    At Sigma Software, we specialize in delivering innovative solutions for enterprise clients and public organizations. In this role, you will contribute to building an integrated platform that collects, processes, and visualizes critical indicators, enabling better decision-making and analytics.

    Why join us? You will work with a modern big data stack, have end-to-end involvement from ingestion to machine learning workflows, and be part of a professional team that values ownership, collaboration, and continuous improvement.

    PROJECT
    You will be involved in developing an integrated platform that processes both batch and streaming data, ensures secure and governed data environments, and supports advanced analytics and machine learning workflows. The solution will leverage modern big data technologies to provide actionable insights for the public sector.
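    As a small illustration of the orchestration side of this work, a daily batch-ingestion DAG in Airflow (one of the orchestration tools named in the job description below) might be sketched like this; the DAG id, task names, and callables are hypothetical:

        from datetime import datetime
        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def ingest_batch(**context):
            # Placeholder: pull a daily extract and land it in the data lake (HDFS/S3/Iceberg)
            print("ingesting batch for", context["ds"])

        def build_marts(**context):
            # Placeholder: refresh the Hive/Impala-facing analytical tables
            print("building marts for", context["ds"])

        with DAG(
            dag_id="indicators_daily",
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            ingest = PythonOperator(task_id="ingest_batch", python_callable=ingest_batch)
            marts = PythonOperator(task_id="build_marts", python_callable=build_marts)
            ingest >> marts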

    Job Description

    • Design and implement data ingestion pipelines for batch and streaming data
    • Configure and maintain data orchestration workflows (Airflow, NiFi) and CI/CD automation for data processes
    • Design and organize data layers within Data Lake architecture (HDFS, Iceberg, S3)
    • Build and maintain secure and governed data environments using Apache Ranger, Atlas, and SDX
    • Develop SQL queries and optimize performance for analytical workloads in Hive/Impala
    • Collaborate on data modeling for analytics and BI, ensuring clean schemas and dimensional models
    • Support machine learning workflows using Spark MLlib or Cloudera Machine Learning (CML)

    Qualifications

    • Proven experience in building and maintaining large-scale data pipelines (batch and streaming)
    • Strong knowledge of data engineering fundamentals: ETL/ELT, data governance, data warehousing, Medallion architecture
    • Strong SQL skills for Data Warehouse data serving
    • Minimum 3 years of experience in Python or Scala for data processing
    • Hands-on experience with Apache Spark, Kafka, Airflow, and distributed systems optimization
    • Experience with Apache Ranger and Atlas for security and metadata management
    • Upper-Intermediate English proficiency

    WILL BE A PLUS

    • Experience with Cloudera Data Platform (CDP)
    • Advanced SQL skills and Hive/Impala query optimization
    • BS in Computer Science or related field
    • Exposure to ML frameworks and predictive modeling

    PERSONAL PROFILE

    • Ownership mindset and proactive approach
    • Ability to drive initiatives forward and suggest improvements
    • Team player with shared responsibility for delivery speed, efficiency, and quality
    • Excellent written and verbal communication skills

     

  • 460 views · 47 applications · 4d

    Junior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 0.5 years of experience · English - B2

    We seek a Junior Data Engineer with basic pandas and SQL experience.

    At Dataforest, we are actively seeking Data Engineers of all experience levels.

    If you're ready to take on a challenge and join our team, please send us your resume.

    We will review it and discuss potential opportunities with you.

     

    Requirements:

    • 6+ months of experience as a Data Engineer;
    • Experience with SQL;
    • Experience with Python;

     

     

    Optional skills (as a plus):

    • Experience with ETL / ELT pipelines;
    • Experience with PySpark;
    • Experience with Airflow;
    • Experience with Databricks;

     

    Key Responsibilities:

    • Apply data processing algorithms;
    • Create ETL/ELT pipelines and data management solutions (a minimal sketch follows this list);
    • Work with SQL queries for data extraction and analysis;
    • Analyze data and apply data processing algorithms to solve business problems.
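    A tiny pandas-based sketch of such an ETL step, purely to illustrate the shape of the work (the file names and columns are hypothetical):

        import pandas as pd

        # Extract: read a raw CSV export (placeholder file and columns)
        raw = pd.read_csv("orders_raw.csv", parse_dates=["created_at"])

        # Transform: basic cleaning and a simple daily aggregate
        clean = raw.dropna(subset=["order_id", "amount"]).drop_duplicates("order_id")
        daily = (
            clean.assign(order_date=clean["created_at"].dt.date)
            .groupby("order_date", as_index=False)["amount"]
            .sum()
        )

        # Load: write the result to Parquet for downstream analysis
        daily.to_parquet("orders_daily.parquet", index=False)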

     

     

    We offer:

    • Onboarding phase with hands-on experience with the major DE stack, including Pandas, Kafka, Redis, Cassandra, and Spark;
    • Opportunity to work with a highly skilled engineering team on challenging projects;
    • Interesting projects with new technologies;
    • Great networking opportunities with international clients, challenging tasks;
    • Building interesting projects from scratch using new technologies;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.

  • 37 views · 2 applications · 4d

    Senior Data Engineer Healthcare to $4000

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    CrunchCode is an international IT services company with about 7 years of experience in developing web services and web applications. We work in staff augmentation (outstaff) and outsourcing formats and assign our specialists to client projects under a long-term cooperation model.

    We work mostly with projects in logistics (including last mile), e-commerce, fintech and banking, as well as with enterprise solutions.
    It is important to us that a project is "clean" and sound in terms of ethics and value for its users.

    As a matter of principle, we do not take on projects related to:
    ● gambling,
    ● adult content and pornography,
    ● fraud, or any development aimed at deception or manipulation.

    What We Offer:
    ● Fully remote work
    ● Long-term, stable project
    ● High level of autonomy and trust
    ● Minimal bureaucracy
    ● Direct impact on business-critical logistics systems
    ● Long-term engagement, not a short-term contract.

    Required: 
    4–5 hours of overlap with PST required; start ASAP; initial term with possible extension

    Project Overview:
    Healthcare solution, innovative and dynamic. The client is building a solution to speed up data entry for doctors. On average, doctors have to log in to 30 different systems to make sure a treatment will be covered by the insurance company. Their solution copies the information (e.g., all client info) and propagates it to all the systems that need it, which significantly decreases the time spent. Patients and doctors don't pay for it; the insurance companies do.

    Client Location: USA

    Requirements (Must-have):
    • 5+ years in data engineering or a similar role
    • Strong SQL skills
    • Experience with Fivetran, dbt, Airflow (or similar orchestration tools)
    • Snowflake or equivalent cloud data warehouse experience
    • Familiarity with BI tools (Superset, Looker, Tableau)
    • AWS experience in a data context
    • Able to work independently on assigned tasks
    • 3–4 hrs/day overlap with PST
    • Bonus: healthcare data experience (FHIR, HL7) or HIPAA knowledge

    Responsibilities:
    β€’ Monitor for upstream schema changes & update pipelines
    β€’ Troubleshoot ETL/ELT job failures
    β€’ Debug & optimize Superset queries
    β€’ Address data quality/access issues
    β€’ Document fixes & changes
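    A minimal sketch of the schema-change check referenced above, comparing Snowflake's information schema against a saved snapshot (connection parameters, schema, and table are hypothetical, and the snowflake-connector-python package is assumed):

        import json
        import snowflake.connector

        # Placeholder connection parameters
        conn = snowflake.connector.connect(
            account="<account>", user="<user>", password="<password>",
            warehouse="<wh>", database="ANALYTICS",
        )

        EXPECTED_FILE = "expected_claims_columns.json"   # snapshot of the last known schema

        cur = conn.cursor()
        cur.execute(
            "SELECT column_name, data_type FROM information_schema.columns "
            "WHERE table_schema = 'RAW' AND table_name = 'CLAIMS' ORDER BY ordinal_position"
        )
        current = {name: dtype for name, dtype in cur.fetchall()}

        with open(EXPECTED_FILE) as f:
            expected = json.load(f)

        added = set(current) - set(expected)
        removed = set(expected) - set(current)
        if added or removed:
            print(f"Schema drift detected: added={sorted(added)}, removed={sorted(removed)}")
        else:
            print("No upstream schema changes detected.")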


    Hiring Process:
    - Intro call
    - Technical discussion (focused on real experience)
    - Offer
    Start: ASAP

  • 65 views · 8 applications · 5d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    We are looking for a Senior Data Engineer to join AltexSoft and strengthen our data engineering practice on a large-scale, high-impact project within the travel and hospitality domain. In this role, you will work hands-on with complex data ecosystems, taking responsibility for the stability, scalability, and performance of mission-critical data integrations.
    You will play a key role in resolving non-trivial technical issues, improving existing pipelines, and evolving engineering processes to make data operations more resilient and proactive. The position offers a strong technical challenge and real influence on systems that process massive volumes of data used by analytics platforms and customer-facing products.
     

    You Have

    • 5+ years of experience in Python and proven experience working with large-scale datasets.
    • Experience with PMS integrations/Hospitality domain
    • Solid background in designing, building, and maintaining data processing pipelines.
    • Experience with cloud platforms (GCP, AWS, or Azure).
    • Hands-on skills with SQL and data storage/querying systems (e.g., BigQuery, BigTable, or similar).
    • Knowledge of containerization and orchestration tools (Docker, Kubernetes).
    • Ability to troubleshoot and debug complex technical issues in distributed systems.
    • Strong communication skills in English, with the ability to explain technical details to both technical and non-technical stakeholders.
    • Experience using AI coding assistants (e.g., Cursor, GitHub Copilot, or similar) in day-to-day development tasks.
    • Experience with Google Cloud services such as Pub/Sub, Dataflow, and ML-driven data workflows.

       

    Would be a plus

    • Experience with airline, travel, or hospitality-related datasets.
    • Exposure to observability and monitoring tools for large-scale data systems.
    • Experience building AI-powered solutions or integrating AI pipelines/APIs into software projects.
    • Experience with second-tier PMS products such as Tesipro or Maestro.
    • Or experience with any other property management system APIs.

       

    You Are Going To

    • Maintain and enhance existing data integrations, ensuring the reliability, accuracy, and quality of incoming data.
    • Lead the investigation and resolution of complex incidents by performing deep technical analysis and debugging.
    • Communicate effectively with stakeholders (including customer-facing teams and external partners) by providing transparent and timely updates.
    • Collaborate with partners to troubleshoot integration issues and ensure smooth data flow.
    • Identify opportunities to improve processes, tooling, and documentation to scale and streamline data operations.
    • Contribute to the design and delivery of new data engineering solutions supporting business-critical systems.

    We offer

  • 40 views · 4 applications · 5d

    Senior Data Engineer

    Full Remote · Ukraine · 3.5 years of experience · English - B2

    Job Description

    We're excited if you have:
    - 4+ years of commercial data engineering/analytics experience
    - Good culture fit and excellent team player
    - Strong knowledge of Python
    - Strong knowledge of SQL
    - Experience with ETL/ELT and Reverse ETL (Hightouch preferred)
    - Experience with Data warehouse systems (e.g. Snowflake)
    - Experience with CDP and Analytics systems
    - Strong English, both verbal (video meetings online) and written (requirements, descriptions, blueprints).
    - BS or MS degree in Computer Science or equivalent

     

    It is a plus if you have:
    - Experience with analytics: Tableau, Power BI, DOMO

     

    Job Responsibilities

    What you'll be doing
    - Work in a team under an agile software engineering framework (analysis, architecture, technical design, task planning, coding, PR reviews, maintenance, etc.)
    - Build & maintain data engineering pipelines in the media domain space
    - Work with APIs, webhooks, and batch processes
    - Design and optimize data schemas, queries, storage, processing costs (e.g. in Snowflake)
    - Clean and enrich data using SQL-based transformations
    - Set up data validation checks, implement data observability, manage PII
    - Sync data across various systems
    - Document data pipelines, transformations, and best practices
    - Collaborate with Product, Engineering Managers, Analytics, and Data Engineering teams to build and enhance the data pipelines

     

    Department/Project Description

    Development, refactoring, and maintenance of data and analytics pipelines for an international broadcasting company headquartered in New York, US. The company owns several television channels/brands, developed state-of-the-art TV streaming applications for Android TV, Android Mobile, Apple TV, and Fire TV platforms, and wants to improve its data processing.

  • 56 views · 0 applications · 5d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    CrunchCode is an international IT services company with about 7 years of experience in developing web services and web applications. We work in staff augmentation (outstaff) and outsourcing formats and assign our specialists to client projects under a long-term cooperation model.

    We work mostly with projects in logistics (including last mile), e-commerce, fintech and banking, as well as with enterprise solutions.
    It is important to us that a project is "clean" and sound in terms of ethics and value for its users.

    As a matter of principle, we do not take on projects related to:
    ● gambling,
    ● adult content and pornography,
    ● fraud, or any development aimed at deception or manipulation.

    What We Offer:
    ● Fully remote work
    ● Long-term, stable project
    ● High level of autonomy and trust
    ● Minimal bureaucracy
    ● Direct impact on business-critical logistics systems
    ● Long-term engagement, not a short-term contract.

    Required: 
    Data migration to Snowflake/dbt using AI-native development (Cursor/Claude) and clean SQL/Python

    Project Overview:
    The client is an "old school" organization currently undergoing a digital transformation. They are building agentic tools and hiring technologists to differentiate themselves in the education space.

    Domain: Education


    Requirements (Must-have):
    - SQL, dbt Cloud/pipelines, Snowflake, Python

    Responsibilities:
    - Optimize the initial user experience (post-conversion)
    - Focus on deepening user engagement after users join the society (rather than just driving immediate monetization or LTV)
    - Run experiments to iterate on the sign-up and product experience

    Hiring Process:
    - Intro call
    - Technical discussion (focused on real experience)
    - Offer
    Start: ASAP
