Data Engineer Jobs (143)

  • · 206 views · 12 applications · 9d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B1
    We are looking for you! As we architect the next wave of data solutions in the AdTech and MarTech sectors, we’re looking for a Senior Data Engineer – a maestro in data architecture and pipeline design. If you’re a seasoned expert eager to lead, innovate,...

    We are looking for you!

    As we architect the next wave of data solutions in the AdTech and MarTech sectors, we’re looking for a Senior Data Engineer – a maestro in data architecture and pipeline design. If you’re a seasoned expert eager to lead, innovate, and craft state-of-the-art data solutions, we’re keen to embark on this journey with you.

    Contract type: Gig contract.

    Skills and experience you can bring to this role

    Qualifications & experience:

    • 6+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery of Python and SQL;
    • Deep understanding and practical experience with cloud data warehouses (Snowflake, BigQuery, Redshift);
    • Extensive experience building data and ML pipelines;
    • Experience with modern Scrum-based Software Development Life Cycle (SDLC);
    • Deep understanding of Git and its workflows;
    • Open to collaborating with data scientists and businesses.

    Nice to have:

    • Hands-on experience with Dagster, dbt, Snowflake and FastAPI;
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understand marketing and media metrics (i.e., what conversion rate is and how it is calculated; see the sketch after this list);
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.
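    For illustration only, here is a minimal Python sketch of the conversion-rate metric mentioned above; the funnel step and the numbers are assumptions, not figures from the role.

```python
def conversion_rate(conversions: int, sessions: int) -> float:
    """Share of sessions (or clicks/impressions, depending on the funnel step)
    that ended in the desired action, e.g. a purchase or sign-up."""
    if sessions == 0:
        return 0.0
    return conversions / sessions

# Example: 38 purchases out of 1,250 ad clicks -> a 3.04% conversion rate
print(f"{conversion_rate(38, 1250):.2%}")
```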
       

    Educational requirements:

    • Bachelor’s degree in Computer Science, Information Systems, or a related discipline is preferred. A Master's degree or higher is a distinct advantage.

    What impact you’ll make 

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more (see the sketch after this list);
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; 
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.
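    As a rough illustration of such a pipeline step, here is a minimal Snowpark-for-Python sketch that aggregates raw ad events into a reporting table; the connection parameters, table names, and columns are placeholders, not details of the actual project.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, count, sum as sum_

# Placeholder connection parameters.
connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "ANALYTICS", "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

events = session.table("RAW.AD_EVENTS")  # e.g. landed continuously via Snowpipe
daily = (
    events.filter(col("EVENT_TYPE").isin("impression", "click", "conversion"))
          .group_by(col("CAMPAIGN_ID"), col("EVENT_DATE"), col("EVENT_TYPE"))
          .agg(count(col("EVENT_ID")).alias("EVENTS"), sum_(col("SPEND")).alias("SPEND"))
)
daily.write.mode("overwrite").save_as_table("MART.CAMPAIGN_DAILY")
session.close()
```

    The same step could equally be expressed as a dbt model or a scheduled Snowflake task; Snowpark is used here only because the posting highlights it.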

    What you’ll get 

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • · 16 views · 0 applications · 11d

    Senior Data Platform Architect

    Full Remote · Ukraine · 5 years of experience · English - None
    We are seeking an expert with deep proficiency as a Platform Engineer, possessing experience in data engineering. This individual should have a comprehensive understanding of both data platforms and software engineering, enabling them to integrate the...

    We are seeking an expert with deep proficiency as a Platform Engineer, possessing experience in data engineering. This individual should have a comprehensive understanding of both data platforms and software engineering, enabling them to integrate the platform effectively within an IT ecosystem.

    Responsibilities:

    • Manage and optimize data platforms (Databricks, Palantir).
    • Ensure high availability, security, and performance of data systems.
    • Provide valuable insights about data platform usage.
    • Optimize computing and storage for large-scale data processing.
    • Design and maintain system libraries (Python) used in ETL pipelines and platform governance.
    • Optimize ETL processes – enhance and tune existing ETL processes for better performance, scalability, and reliability (see the PySpark sketch after this list).
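    As a rough illustration of that kind of ETL work, here is a minimal PySpark sketch that deduplicates raw events and rewrites them as a date-partitioned table; the storage paths and column names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-to-curated").getOrCreate()

# Placeholder input path on a data lake (could equally be S3, ADLS, or DBFS).
raw = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/events/")

curated = (
    raw.dropDuplicates(["event_id"])                       # remove replayed records
       .withColumn("event_date", F.to_date("event_ts"))    # derive the partition key
       .filter(F.col("event_date").isNotNull())
)

(curated.repartition("event_date")                         # co-locate each day's data
        .write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("abfss://curated@datalake.dfs.core.windows.net/events/"))
```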

    Mandatory Skills Description:

    • Minimum 10 years of experience in IT/Data.
    • Minimum 5 years of experience as a Data Platform Engineer/Data Engineer.
    • Bachelor's in IT or related field.
    • Infrastructure & Cloud: Azure, AWS (expertise in storage, networking, compute).
    • Data Platform Tools: Any of Palantir, Databricks, Snowflake.
    • Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
    • SQL: Expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
    • Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks.
    • ETL Tools: Familiarity with ETL tools & processes.
    • Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
    • Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
    • Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
    • Data Quality Tools: Experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.

    Nice-to-Have Skills Description:

    • Containerization & Orchestration: Docker, Kubernetes.
    • Infrastructure as Code (IaC): Terraform.
    • Understanding of Investment Data domain (desired).

    Languages:

    English: B2 Upper Intermediate

  • · 335 views · 15 applications · 3d

    Middle Data Engineer

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · English - B2
    Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project – a cutting-edge data intelligence platform for e-commerce analytics. You will be responsible for developing and maintaining a scalable data architecture...

    Dataforest is looking for a Middle Data Engineer to join our team and work on the Dropship project – a cutting-edge data intelligence platform for e-commerce analytics.
    You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, processing, analysis, and integrations.

    If you are passionate about data optimization, system performance, and architecture, we’re waiting for your CV!
     

         Requirements:

    • 2+ years of commercial experience with Python.
    • Advanced experience with SQL DBs (optimisations, monitoring, etc.);
    • PostgreSQL β€” must have;
    • Solid understanding of ETL principles (architecture, monitoring, alerting, finding and resolving bottlenecks);
    • Experience with Message brokers: Kafka/ Redis;
    • Experience with Pandas;
    • Familiar with AWS infrastructure (boto3, S3 buckets, etc);
    • Experience working with large volumes of data;
    • Understanding of the principles of medallion architecture (see the sketch after this list).
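    For illustration, below is a minimal pandas sketch of the medallion (bronze/silver/gold) layering referenced above; the file names, columns, and cleaning rules are assumptions, not project specifics.

```python
import pandas as pd

# Bronze: land the raw extract as-is so it can always be replayed.
bronze = pd.read_json("raw_orders.jsonl", lines=True)
bronze.to_parquet("bronze/orders.parquet", index=False)

# Silver: typed, deduplicated, validated records.
silver = (
    bronze.drop_duplicates(subset=["order_id"])
          .assign(created_at=lambda df: pd.to_datetime(df["created_at"], errors="coerce"))
          .dropna(subset=["order_id", "created_at", "amount"])
)
silver.to_parquet("silver/orders.parquet", index=False)

# Gold: business-level aggregates ready for dashboards and datamarts.
gold = (
    silver.assign(order_date=silver["created_at"].dt.date)
          .groupby(["order_date", "shop_id"], as_index=False)["amount"].sum()
)
gold.to_parquet("gold/daily_revenue.parquet", index=False)
```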
       

         Will Be a Plus:

    • Understanding of NoSQL DBs (e.g., Elasticsearch);
    • TimeScaleDB;
    • PySpark;
    • Experience with e-commerce or fin-tech.
       

         Key Responsibilities:

    • Develop and maintain a robust and scalable data processing architecture using Python.
    • Design, optimize, and monitor data pipelines using Kafka and AWS SQS.
    • Implement and optimize ETL processes for various data sources.
    • Manage and optimize SQL and NoSQL databases (PostgreSQL, TimeScaleDB, Elasticsearch).
    • Work with AWS infrastructure to ensure reliability, scalability, and cost efficiency.
    • Proactively identify bottlenecks and suggest technical improvements.
       

      We offer:

    • Working in a fast growing company;
    • Great networking opportunities with international clients, challenging tasks;
    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.
  • · 44 views · 1 application · 1d

    Data Engineer

    Countries of Europe or Ukraine · Product · 5 years of experience · English - B2
    We’re looking for a highly skilled Data Expert! Product | Remote We’re looking for a data expert who bridges technical depth with curiosity. You’ll help Redocly turn data into insight – driving smarter product, growth, and business decisions. ...

     

    🔥 We’re looking for a highly skilled Data Expert! 🔥

     

    Product | Remote

     

    We’re looking for a data expert who bridges technical depth with curiosity. You’ll help Redocly turn data into insight – driving smarter product, growth, and business decisions.

     

    This role combines data governance and development. You’ll build reliable data pipelines, improve observability, and uncover meaningful patterns that guide how we grow and evolve.

     

    You’ll work closely with product and technical teams to support data collection, processing, and consistency across systems.

     

    What you’ll do 

    • Analyze product and user behavior to uncover trends, bottlenecks, and opportunities.
    • Build and maintain data pipelines and ETL processes.
    • Design and optimize data models for new features and analytics (e.g., using dbt).
    • Work with event-driven architectures and standards like AsyncAPI and CloudEvents (see the sketch after this list).
    • Collaborate with engineers to improve data quality, consistency, and governance across systems.
    • Use observability and tracing tools (e.g., OpenTelemetry) to monitor and improve performance.
    • Support existing frontend and backend systems related to analytics and data processing.
    • Build and maintain datasets for analytics and reporting.
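    For context, here is a minimal sketch of the CloudEvents envelope mentioned above, built as a plain Python dict rather than with any particular SDK; the source, type, and payload values are purely illustrative.

```python
import json
import uuid
from datetime import datetime, timezone

# Attribute names follow the CloudEvents spec (specversion, id, source, type, ...).
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "source": "https://app.example.com/analytics",
    "type": "com.example.project.created",
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"project_id": "p_123", "plan": "team"},
}

print(json.dumps(event, indent=2))  # ready to publish to a broker or event bus
```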

     

    You’re a great fit if you have 

    • 5+ years of software engineering experience, with 3+ years focused on data engineering.
    • Strong SQL skills and experience with data modeling (dbt preferred).
    • Strong proficiency with Node.js, React, JavaScript, and TypeScript.
    • Proven experience in data governance and backend systems.
    • Familiarity with columnar databases or analytics engines (ClickHouse, Postgres, etc.).
    • Strong analytical mindset, attention to detail, and clear communication.
    • Passionate about clarity, simplicity, and quality in both data and code.
    • English proficiency: Upper-Intermediate or higher.

     

    How you’ll know you’re doing a great job

    • Data pipelines are trusted, observable, and performant.
    • Metrics and dashboards are used across teams – not just built once.
    • Teams make better product decisions, faster, because of your insights.
    • You’re the go-to person for clarity when questions arise about “what the data says.”

     

    About Redocly

    Redocly builds tools that accelerate API ubiquity. Our platform helps teams create world-class developer experiences β€” from API documentation and catalogs to internal developer hubs and public showcases. We're a globally distributed team that values clarity, autonomy, and craftsmanship. You'll work alongside people who love developer experience, storytelling, and building tools that make technical work simpler and more joyful.

    Headquartered in Austin, Texas, US, with an office in Lviv, Ukraine.

     

    Redocly is trusted by leading tech, fintech, telecom, and enterprise teams to power API documentation and developer portals. Redocly’s clients range from startups to Fortune 500 enterprises.

    https://redocly.com/

     

    Working with Redocly

    • Team: 4-6 people (middle to senior)
    • Team’s location: Ukraine & Europe
    • There are functional, product, and platform teams; each has its own ownership and line structure, and the teams themselves decide when to hold weekly meetings.
    • Cross-functional teams are formed for each two-month cycle, giving team members the opportunity to work across all parts of the product.
    • Methodology: Shape Up

     

    Perks

    • Competitive salary based on your expertise 
    • Full remote, though you’re welcome to come to the office occasionally if you wish.
    • Cooperation on a B2B basis with a US-based company (for EU citizens) or under a gig contract (for Ukraine).
    • After a year of working with the company, you can buy a certain number of the company’s shares
    • Around 30 days of vacation (unlimited, but let’s keep it reasonable)
    • 10 working days of sick leave per year
    • Public holidays according to local standards
    • No trackers or screen recorders
    • Working hours in the EU/UA time zone; an 8-hour working day, with most people starting at 10-11 am
    • Equipment provided – MacBooks (M1-M4)
    • Regular performance reviews

     

    Hiring Stages

    • Prescreening (30-45 min)
    • HR Call (45 min)
    • Initial Interview (30 min)
    • Trial Day (paid)
    • Offer

     

    If you are an experienced data expert and you want to work on impactful data-driven projects, we’d love to hear from you!


    Apply now to join our team!

  • · 43 views · 1 application · 16d

    Senior Data Engineer

    Full Remote · Ukraine · Product · 5 years of experience · English - B1
    Zoral Labs, a leading provider of research and development to the software industry, is looking for an experienced Senior Data Engineer to join its development center remotely Required skills: 5+ years of enterprise experience in a similar position...

    Zoral Labs, a leading provider of research and development to the software industry, is looking for an experienced Senior Data Engineer to join its development center remotely.

    Required skills:

    • 5+ years of enterprise experience in a similar position 
    • Expert knowledge of Python - experience with data pipelines and data frames
    • Expert knowledge of SQL and DBMS (any) at the logical level; knowledge of physical details will be a plus
    • Experience with GCP (BigQuery, Composer, GKE, Storage, Logging and Monitoring, Services API, etc.)
    • Understanding of DWH and data lakehouse concepts (Inmon vs. Kimball, medallion architecture, ETL/ELT)
    • Experience with columnar data management and/or NoSQL system(s)
    • Understanding and acceptance of an enterprise-like working environment


    Soft skills:

    • Fast learner, open-minded, goal-oriented problem solver
    • Analytical thinking and clear communication
    • English B1+


    Project description:

    We specialize in advanced software fields such as BI, Data Mining, Artificial Intelligence, Machine Learning (AI/ML), High Speed Computing, Cloud Computing, BIG Data Predictive Analytics, Unstructured Data processing, Finance, Risk Management and Security.

    We create extensible decision engine services, data analysis and management solutions, real-time automatic data processing applications.
    We are looking for software engineers to design, build, and implement a large, scalable web service architecture with a decision engine at its core. If you are excited about developing artificial intelligence, behavioral analysis data solutions, and big data approaches, we can give you an opportunity to reveal your talents.

     

    About Zoral Labs:

    Zoral is a fintech software research and development company. We were founded in 2004.

    We operate one of the largest labs in Europe focused on Artificial Intelligence/Machine Learning (AI/ML) and predictive systems for consumer/SME credit and financial products.

    Our clients are based in USA, Canada, Europe, Africa, Asia, South America and Australia.

    We are one of the world’s leading companies in the use of unstructured, social, device, MNO, bureau and behavioral data, for real-time decisioning and predictive modeling.

    Zoral software intelligently automates digital financial products.

    Zoral produced the world’s first fully automated STP consumer credit platforms.

    We are based in London, New York and Berlin 

    Web site:
    https://zorallabs.com/company

    Company page at DOU:
    https://jobs.dou.ua/companies/zoral/

  • · 107 views · 13 applications · 22d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - None
    What You’ll Actually Do Build and run scalable pipelines (batch + streaming) that power gameplay, wallet, and promo analytics. Model data for decisions (star schemas, marts) that Product, BI, and Finance use daily. Make things reliable: tests, lineage,...

    🎯 What You’ll Actually Do

    • Build and run scalable pipelines (batch + streaming) that power gameplay, wallet, and promo analytics.
    • Model data for decisions (star schemas, marts) that Product, BI, and Finance use daily.
    • Make things reliable: tests, lineage, alerts, SLAs. Fewer surprises, faster fixes.
    • Optimize ETL/ELT for speed and cost (partitioning, clustering, late arrivals, idempotency) – see the sketch after this list.
    • Keep promo data clean and compliant (PII, GDPR, access controls).
    • Partner with POs and analysts on bets/wins/turnover KPIs, experiment readouts, and ROI.
    • Evaluate tools, migrate or deprecate with clear trade-offs and docs.
    • Handle prod issues without drama, then prevent the next one.
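    As a small illustration of the idempotency point above, here is a sketch of a daily load that can safely be re-run when late events arrive; sqlite3 stands in for the real warehouse, and the table and columns are placeholders.

```python
import sqlite3

def load_day(conn: sqlite3.Connection, day: str, rows: list[tuple[str, str, float]]) -> None:
    """Reload one day's partition: delete + insert in a single transaction,
    so running the job twice leaves exactly one copy of the data."""
    with conn:
        conn.execute("DELETE FROM fact_bets WHERE event_date = ?", (day,))
        conn.executemany(
            "INSERT INTO fact_bets (event_date, player_id, amount) VALUES (?, ?, ?)",
            rows,
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_bets (event_date TEXT, player_id TEXT, amount REAL)")

batch = [("2024-05-01", "p1", 10.0), ("2024-05-01", "p2", 4.5)]
load_day(conn, "2024-05-01", batch)
load_day(conn, "2024-05-01", batch + [("2024-05-01", "p3", 7.0)])  # late arrival, safe re-run

print(conn.execute("SELECT COUNT(*) FROM fact_bets").fetchone()[0])  # 3 rows, not 5
```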

       

    🧠 What You Bring

    • 4+ years building production data systems. You’ve shipped, broken, and fixed pipelines at scale.
    • SQL that sings and Python you’re proud of.
    • Real experience with OLAP and BI (Power BI / Tableau / Redash β€” impact > logo).
    • ETL/ELT orchestration (Airflow/Prefect or similar) and CI/CD for data.
    • Strong grasp of warehouses & lakes: incremental loads, SCDs, partitioning.
    • Data quality mindset: contracts, tests, lineage, monitoring.
    • Product sense: you care about player impact, not just rows processed.

       

    ✨ Nice to Have (tell us if you’ve got it)

    • Kafka (or similar streaming), ClickHouse (we like it), dbt (modular ELT).
    • AWS data stack (S3, IAM, MSK/Glue/Lambda/Redshift) or equivalents.
    • Containers & orchestration (Docker/K8s), IaC (Terraform).
    • Familiarity with AI/ML data workflows (feature stores, reproducibility).
    • iGaming context: provider metrics (bets / wins / turnover), regulated markets, promo events.

       

    🔧 How We Work

    • Speed > perfection. Iterate, test, ship.
    • Impact > output. One rock-solid dataset beats five flaky ones.
    • Behavior > titles. Ownership matters more than hierarchy.
    • Direct > polite. Say what matters, early.

       

    🔥 What We Offer

    • Fully remote (EU-friendly time zones) or Bratislava if you like offices.
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • No micromanagement. Real ownership, real impact.
    • Budget for conferences and growth.
    • Product-led culture with sharp people who care.

       

    🧰 Our Day-to-Day Stack (representative)
    Python, SQL, Airflow/Prefect, Kafka, ClickHouse/OLAP DBs, AWS (S3 + friends), dbt, Redash/Power BI/Tableau, Docker/K8s, GitHub Actions.

     

    👉 If you know how to make data boringly reliable and blisteringly fast – hit apply and let’s talk.

  • · 27 views · 2 applications · 4d

    Data Streaming Engineer

    Full Remote · Worldwide · 3 years of experience · English - None
    N.B.! Location: remote from Latvia/Lithuania; possible relocation (the company provides support). JD: Client: Media group Belgium. Skills Required: AWS, Kafka, Spark, Python (FastAPI), SQL, Terraform. You have: high standards for the quality of...

    N.B.! Location: remote from Latvia/Lithuania; possible relocation (the company provides support).
    JD:

    βœ”οΈο»ΏClient: Media group Belgium.

    βœ”οΈSkills Required: AWS, Kafka, Spark, Python (FastAPI), SQL, Terraform.

    βœ”οΈYou have:

    ● high standards for the quality of the work you deliver

    ● a degree in computer science, software engineering, a related field, or relevant prior experience

    ● 3+ years of experience with the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations

    ● affinity with data analysis

    ● a natural interest in digital media products

    ● AWS certified on an Associate level or higher, or willing to get certified

    βœ”οΈyou have experience in:

    ● developing applications in a Kubernetes environment

    ● developing batch jobs in Apache Spark (PySpark or Scala)

    ● developing streaming applications for Apache Kafka in Python (see the sketch after this list)

    ● working with CI/CD pipelines

    ● writing Infrastructure as Code with Terraform
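    For illustration, below is a minimal sketch of a Python Kafka consumer/producer pair, assuming the kafka-python client (one possible library choice); the topic names, broker address, and enrichment rule are placeholders.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    group_id="enrichment-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    event["is_mobile"] = "Mobile" in event.get("user_agent", "")  # trivial enrichment
    producer.send("page-views-enriched", event)
    consumer.commit()  # commit offsets only after the enriched event is handed off
```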

    βœ”οΈResponsibilities

    ● Maintain and extend our back-end.

    ● Support operational excellence through practices like code review and pair programming.

    ● The entire team is responsible for the operations of our services. This includes actively monitoring different applications and their infrastructure as well as intervening to solve operational problems whenever they arise.

  • · 352 views · 57 applications · 25d

    Data Engineer

    Full Remote · Worldwide · 2 years of experience · English - B1
    We are looking for a Data Engineer to join our single full-stack data team of Big data products. The role requires a strong set of soft skills and the ability to independently solve data-related challenges. We expect solid knowledge of SQL, Python, and...

    We are looking for a Data Engineer to join our single full-stack data team working on big data products.

    The role requires a strong set of soft skills and the ability to independently solve data-related challenges. We expect solid knowledge of SQL, Python, and an understanding of Cloud-based data pipeline architecture.


    Most of your work will be with SQL, and occasionally with Python. You will work with the Google Cloud Platform and Apache Airflow to deploy changes. No data parsing from sites – just straightforward work with data pipelines received from trackers, partners, analytics tools, etc. You will be supported by the team lead, and over time, you will become the king of the company’s data pipelines.
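    For a concrete flavour of that workflow, here is a minimal Airflow DAG sketch with a single scheduled BigQuery transformation; it assumes the apache-airflow-providers-google package, and the dataset, table, and column names are placeholders rather than the company’s actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_spend_datamart",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_datamart = BigQueryInsertJobOperator(
        task_id="build_daily_spend",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE marts.daily_spend AS
                    SELECT DATE(event_ts) AS day, partner_id, SUM(cost) AS cost
                    FROM raw.tracker_events
                    GROUP BY day, partner_id
                """,
                "useLegacySql": False,
            }
        },
    )
```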

     

    Requirements

    • 2-4 years of experience in SQL and Python programming;
    • Experience with containerization and orchestration tools such as Docker and Kubernetes;
    • Practical skills in using Cloud tools;
    • Practical skills in working with analytical databases such as Hadoop-based solutions, Google BigQuery, or Snowflake;
    • Level of English B1+.

     

    Responsibilities

    1. Design and implement efficient data pipelines to support data-driven decision-making and business intelligence processes;
    2. Develop and maintain custom dashboards using data from company databases to visualize key performance metrics;
    3. Build the data models that the whole company will use;
    4. Normalize, denormalize, and aggregate data from different sources into scalable datamarts;
    5. Maintain existing data pipelines and datasets;
    6. Investigate what's wrong with some pieces of data and fix the problems;
    7. Deploy changes, fixes, and new features to Cloud services and internal tools.

     

    Would be a plus

    • Practical experience in Apache Airflow;
    • Experience with Google Cloud Platform and its components like BigQuery, App Engine, Cloud Run, Pub/Sub, Logs;
    • Experience with web technologies such as request headers and URL structure;
    • Domain knowledge of buying and/or selling traffic;
    • Basic experience in Machine Learning;
    • Basic experience with BI Tools like Tableau, Power BI, Google Looker.

     

    Work conditions

    💰 Competitive salary
    🏠 Fully remote work format
    🕒 Flexible schedule
    🌴 15 paid vacation days + 5 paid sick leave days
    📚 Corporate English lessons
    📈 Opportunities for professional growth and development
    ⚡ Minimal bureaucracy and fast decision-making
    📝 Cooperation in the format of Private Entrepreneur (FOP)

     

  • · 52 views · 12 applications · 29d

    Lead/ Senior Data Engineer (Controlling)

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2
    Help us build the next-generation controlling platform that powers the operations of a 240,000+ car fleet worldwide. As part of a full-stack agile team at SIXT Tech, you’ll design and develop data solutions that directly impact business decisions and...

    Help us build the next-generation controlling platform that powers the operations of a 240,000+ car fleet worldwide.
    As part of a full-stack agile team at SIXT Tech, you’ll design and develop data solutions that directly impact business decisions and operational efficiency in a fast-growing global company.

    Your impact

    In this role, you will:

    • Translate business requirements into clear technical roadmaps and sprint goals
    • Drive business–IT alignment for data and analytics topics
    • Define, design, and develop ETL processes based on stakeholder needs
    • Shape and evolve our data warehouse and reporting landscape
    • Collaborate closely with product owners, analysts, and engineers to deliver high-quality data products

    What you bring

    Essential requirements

    • Proven experience in a lead role (technical or team lead)
    • Practical experience with business–IT alignment in data analytics
    • Strong knowledge of SQL
    • 3+ years of ETL development experience
    • Hands-on experience in modeling and developing DWH (data warehouses)
    • Strong analytical and problem-solving skills
    • Hands-on experience with BI tools
    • Upper-intermediate English (or higher), both spoken and written

    Tech stack you’ll work with

    • AWS Redshift
    • AWS Athena
    • Apache Airflow
    • Git, Jira, Confluence
    • Python

    Nice to have

    • Experience developing controlling / financial controlling systems
    • Any kind of accounting or finance knowledge
    • Understanding of rent-a-car or mobility business

    Soft skills

    • Experience working in geographically distributed teams
    • Strong ownership mentality and quality focus
    • Ability to communicate clearly with both tech and non-tech stakeholders

    What we offer

    • Competitive high salary (pegged to EUR)
    • Full-time employment as an internal employee of SIXT TECH Ukraine
    • Relocation compensation
    • People-oriented management with minimal bureaucracy
    • A challenging role where you can learn, grow, and influence architecture and processes
    • 25 calendar days of paid vacation
    • Educational budget for courses, conferences, and certifications
    • Medical insurance
    • Co-funding (50%) for gym and foreign language classes

    If you want to work with modern data tech, real business impact, and a global product at scale – we’d love to hear from you.

  • · 29 views · 1 application · 2d

    Data Engineer (Relocation to Spain)

    Office Work · Spain · Product · 3 years of experience · English - None
    Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange? We are looking for a Data Engineer with ETL/ELT for the Spanish office of the most famous Ukrainian company. Working with big data, strong team, assistance...

    Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange?
    We are looking for a Data Engineer with ETL/ELT for the Spanish office of the most famous Ukrainian company.

    Working with big data, strong team, assistance with family relocation, TOP conditions.

     

    Main Responsibilities

    – Design, build, and maintain scalable and resilient data pipelines (batch and real-time)
    – Develop and support data lake/data warehouse architectures
    – Integrate internal and external data sources/APIs into unified data systems
    – Ensure data quality, observability, and monitoring of pipelines
    – Collaborate with backend and DevOps engineers on infrastructure and deployment
    – Optimize query performance and data processing latency across systems
    – Maintain documentation and contribute to internal data engineering standards
    – Implement data access layers and provide well-structured data for downstream teams

     

    Mandatory Requirements

    – 3+ years of experience as a Data Engineer in high-load or data-driven environments
    – Proficient in Python for data processing and automation (pandas, pyarrow, sqlalchemy, etc.)
    – Advanced knowledge of SQL: query optimization, indexes, partitions, materialized views
    – Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Prefect)
    – Experience with streaming technologies (e.g., Kafka, Flink, Spark Streaming)
    – Solid background in data warehouse solutions: ClickHouse, BigQuery, Redshift, or Snowflake
    – Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code principles
    – Experience with containerization and deployment tools (e.g., Docker, Kubernetes, CI/CD)
    – Understanding of data modeling, data versioning, and schema evolution (e.g., dbt, Avro, Parquet); see the sketch after this list
    – English: at least intermediate (for documentation & communication with tech teams)
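    As a small illustration of the pyarrow/Parquet points above, here is a sketch that writes a date-partitioned Parquet dataset and reads one partition back with pandas; the paths, columns, and values are placeholders.

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

trades = pd.DataFrame(
    {
        "trade_id": [1, 2, 3],
        "symbol": ["BTC-USD", "ETH-USD", "BTC-USD"],
        "price": [63100.5, 3055.2, 63250.0],
        "dt": ["2024-05-01", "2024-05-01", "2024-05-02"],
    }
)

# One sub-directory per day: lake/trades/dt=2024-05-01/..., lake/trades/dt=2024-05-02/...
pq.write_to_dataset(
    pa.Table.from_pandas(trades),
    root_path="lake/trades",
    partition_cols=["dt"],
)

one_day = pd.read_parquet("lake/trades", filters=[("dt", "=", "2024-05-01")])
print(one_day)
```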

     

    We offer

    Immerse yourself in Crypto & Web3:
    – Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    – Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    – Gain unique experience and be part of global transformations.
    Drive Innovations:
    – Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    – Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    – Modern equipment.
    – Comfortable working conditions and an inspiring environment to help you thrive.
    – 30 calendar days of paid leave.
    – Additional days off for national holidays.

     

    With us, you’ll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you’re ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
     

  • · 19 views · 0 applications · 1d

    Senior Data Engineer

    Ukraine · Product · 4 years of experience · English - B2
    Your future responsibilities: Collaborate with data and analytics experts to strive for greater functionality in our data systems Design, use and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide...

    Your future responsibilities:

    • Collaborate with data and analytics experts to strive for greater functionality in our data systems
    • Design, use and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (DevOps & Continuous Integration)
    • Drive the advancement of data infrastructure by designing and implementing the underlying logic and structure for how data is set up, cleansed, and ultimately stored for organizational usage
    • Assemble large, complex data sets that meet functional / non-functional business requirements
    • Build data integration from various sources and technologies to the data lake infrastructure as part of an agile delivery team
    • Monitor capabilities and react to unplanned interruptions, ensuring that environments are provided and loaded on time

    Your skills and experience:

    • Minimum 5 years of experience in a dedicated data engineer role
    • Experience working with large structured and unstructured data in various formats
    • Knowledge or experience with streaming data frameworks and distributed data architectures (e.g. Spark Structured Streaming, Apache Beam or Apache Flink)
    • Experience with cloud technologies (preferable AWS, Azure)
    • Experience in Cloud services (Data Flow, Data Proc, BigQuery, Pub/Sub)
    • Experience of practical operation of Big Data stack: Hadoop, HDFS, Hive, Presto, Kafka
    • Experience of Python in the context of creating ETL data pipelines
    • Experience with Data Lake / Data Warehouse solutions (AWS S3 / MinIO); see the sketch after this list
    • Experience with Apache Airflow
    • Development skills in a Docker / Kubernetes environment
    • Open and team-minded personality and communication skills
    • Willingness to work in an agile environment
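    For illustration of the S3/MinIO point above, here is a minimal boto3 sketch that writes and lists objects in a data-lake bucket; the bucket name, key prefix, endpoint, and credentials are placeholders.

```python
import boto3

# endpoint_url targets an S3-compatible MinIO instance; omit it for AWS S3 itself.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.internal:9000",
    aws_access_key_id="<key>",
    aws_secret_access_key="<secret>",
)

s3.put_object(
    Bucket="datalake",
    Key="raw/transactions/dt=2024-05-01/part-000.json",
    Body=b'{"tx_id": 1, "amount": 125.40}\n',
)

response = s3.list_objects_v2(Bucket="datalake", Prefix="raw/transactions/dt=2024-05-01/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```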

    We offer what matters most to you:

    • Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
    • Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile; corporate library and English lessons
    • Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career opportunities: we encourage advancement within the bank across functions
    • Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Sql-Oracle, PgSql, MsSql, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bank’s veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • Our main value is people, and we support and recognize them, educate them and involve them in changes. Join Raif’s team because for us YOU matter!
    • One of the largest lenders to the economy and agricultural business among private banks
    • Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, СМІЛИВІ)
    • One of the largest IT product teams among the country’s banks
    • One of the largest taxpayers in Ukraine: 6.6 billion UAH paid in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? β€” Follow us on social media:

    Facebook, Instagram, LinkedIn

    About Raiffeisen Bank:

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For more than 30 years, we have been creating and building our country’s banking system.

    Raif employs more than 5,500 people, including one of the largest product IT teams, with over 800 specialists. Every day we work side by side so that more than 2.7 million of our clients can receive quality service, use the bank’s products and services, and grow their businesses, because we are #Разом_з_Україною.


  • · 39 views · 2 applications · 19d

    Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · English - None
    About us: Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently...

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.

    You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.
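    To make that concrete, here is a minimal sketch of a text normalization and exact de-duplication step of the kind such pipelines usually include; the regexes, redaction rules, and sample texts are illustrative only, not the project’s actual pipeline.

```python
import hashlib
import re
from typing import Iterable, Iterator

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{8,}\d")

def normalize(text: str) -> str:
    """Redact obvious personal data and collapse whitespace."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    text = PHONE_RE.sub("<PHONE>", text)
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(docs: Iterable[str]) -> Iterator[str]:
    """Yield cleaned documents, dropping exact (case-insensitive) duplicates."""
    seen: set[str] = set()
    for doc in docs:
        clean = normalize(doc)
        digest = hashlib.sha1(clean.lower().encode("utf-8")).hexdigest()
        if clean and digest not in seen:
            seen.add(digest)
            yield clean

corpus = ["Пишіть на test@example.com", "Пишіть  на test@example.com ", "Інший документ"]
print(list(deduplicate(corpus)))  # two unique documents, e-mail addresses redacted
```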

    Requirements:
    - Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    - NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given the project’s focus.
    Understanding of FineWeb2 or a similar processing pipeline approach.
    - Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    - Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    - Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
    - Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    - Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    - Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

    Nice to have:
    - Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    - Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
    - CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    - Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    - Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve the workflows.

    Responsibilities:
    - Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
    - Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    - Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
    - Implementation of NLP/LLM-specific data processing: cleaning and normalization of text, like filtering of toxic content, de-duplication, de-noising, detection, and deletion of personal data.
    - Formation of specific SFT/RLHF datasets from existing data, including data augmentation/labeling with LLM as teacher.
    - Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    - Automate data processing workflows and ensure their scalability and reliability.
    - Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    - Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
    - Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    - Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
    - Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    - Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    - Manage data security, access, and compliance.
    - Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • · 44 views · 5 applications · 12d

    Database Engineer

    Full Remote · Ukraine, Poland, Hungary · Product · 5 years of experience · English - None
    We’re hiring a Database Engineer to design, build, and operate reliable data platforms and pipelines. You’ll focus on robust ETL/ELT workflows, scalable big data processing, and cloud-first architectures (Azure preferred) that power analytics and...

    We’re hiring a Database Engineer to design, build, and operate reliable data platforms and pipelines. You’ll focus on robust ETL/ELT workflows, scalable big data processing, and cloud-first architectures (Azure preferred) that power analytics and applications.

     

    What You’ll Do

     

    • Design, build, and maintain ETL/ELT pipelines and data workflows (e.g., Azure Data Factory, Databricks, Spark, ClickHouse, Airflow, etc.).
    • Develop and optimize data models, data warehouse/lake/lakehouse schema (partitioning, indexing, clustering, cost/performance tuning, etc.).
    • Build scalable batch and streaming processing jobs (Spark/Databricks, Delta Lake; Kafka/Event Hubs a plus).
    • Ensure data quality, reliability, and observability (tests, monitoring, alerting, SLAs).
    • Implement CI/CD and version control for data assets and pipelines.
    • Secure data and environments (IAM/Entra ID, Key Vault, strong tenancy guarantees, encryption, least privilege).
    • Collaborate with application, analytics, and platform teams to deliver trustworthy, consumable datasets.

     

    Required Qualifications

     

    • ETL or ELT experience required (ADF/Databricks/dbt/Airflow or similar).
    • Big data experience required.
    • Cloud experience required; Azure preferred (Synapse, Data Factory, Databricks, Azure Storage, Event Hubs, etc.).
    • Strong SQL and performance tuning expertise; hands-on with at least one warehouse/lakehouse (Synapse/Snowflake/BigQuery/Redshift or similar).
    • Solid data modeling fundamentals (star/snowflake schemas, normalization/denormalization, CDC, etc.).
    • Experience with CI/CD, Git, and infrastructure automation basics.

     

    Nice to Have

     

    • Streaming pipelines (Kafka, Event Hubs, Kinesis, Pub/Sub) and exactly-once/at-least-once patterns.
    • Orchestration and workflow tools (Airflow, Prefect, Azure Data Factory).
    • Python for data engineering.
    • Data governance, lineage, and security best practices.
    • Infrastructure as Code (Terraform) for data platform provisioning.
  • · 42 views · 8 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - None
    About us: Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have...

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered the largest Data Science Community in Eastern Europe, boasting a network of over 30,000 top AI engineers.

    About the client:
    We are working with a new-generation data service provider specializing in data consulting and data-driven digital marketing, dedicated to transforming data into business impact across the entire value chain of organizations. The company’s data-driven services are built on the deep AI expertise it has acquired working with a 1,000+ client base around the globe. The company has 1,000 employees across 20 offices who are focused on accelerating digital transformation.

    About the role:
    We are seeking a Senior Data Engineer (Azure) to design and maintain data pipelines and systems for analytics and AI-driven applications. You will work on building reliable ETL/ELT workflows and ensuring data integrity across the organization.

    Required skills:
    - 6+ years of experience as a Data Engineer, preferably in Azure environments.
    - Proficiency in Python, SQL, NoSQL, and Cypher for data manipulation and querying.
    - Hands-on experience with Airflow and Azure Data Services for pipeline orchestration.
    - Strong understanding of data modeling, ETL/ELT workflows, and data warehousing concepts.
    - Experience in implementing DataOps practices for pipeline automation and monitoring.
    - Knowledge of data governance, data security, and metadata management principles.
    - Ability to work collaboratively with data science and analytics teams.
    - Excellent problem-solving and communication skills.

    Responsibilities:
    - Transform data into formats suitable for analysis by developing and maintaining processes for data transformation, structuring, metadata management, and workload management.
    - Design, implement, and maintain scalable data pipelines on Azure.
    - Develop and optimize ETL/ELT processes for various data sources.
    - Collaborate with data scientists and analysts to ensure data readiness.
    - Monitor and improve data quality, performance, and governance.

  • · 25 views · 1 application · 17d

    Data Engineer (DBT, Snowflake), Investment Management Solution

    Ukraine, Poland, Georgia, Armenia, Cyprus · 5 years of experience · English - None
    Client Our client is one of the world’s top 20 investment companies headquartered in Great Britain, with branch offices in the US, Asia, and Europe. Project overview The company’s IT environment is constantly growing, with around 30 programs and more...

    Client

    Our client is one of the world’s top 20 investment companies headquartered in Great Britain, with branch offices in the US, Asia, and Europe.

     

    Project overview

    The company’s IT environment is constantly growing, with around 30 programs and more than 60 active projects. They are building a data marketplace that aggregates and analyzes data from multiple sources such as stock exchanges, news feeds, brokers, and internal quantitative systems.

    As the company moves to a new data source, the main goal of this project is to create a golden source of data for all downstream systems and applications. The team is performing classic ELT/ETL: transforming raw data from multiple sources (third-party and internal) and creating a single interface for delivering data to downstream applications.

     

    Position overview

    We are looking for a Data Engineer with strong expertise in DBT, Snowflake, and modern data engineering practices. In this role, you will design and implement scalable data models, build robust ETL/ELT pipelines, and ensure high-quality data delivery for critical investment management applications.
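    As a rough illustration of how such a pipeline can be wired together, here is a minimal Airflow sketch that chains dbt builds of staging and mart models; it assumes dbt Core invoked through BashOperator purely for illustration (the project itself uses dbt Cloud and Astronomer), and the project directory and selectors are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/usr/local/airflow/dbt/market_data"  # placeholder project directory

with DAG(
    dag_id="dbt_market_data",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    staging = BashOperator(
        task_id="dbt_build_staging",
        bash_command=f"dbt build --select staging --project-dir {DBT_DIR}",
    )
    marts = BashOperator(
        task_id="dbt_build_marts",
        bash_command=f"dbt build --select marts --project-dir {DBT_DIR}",
    )
    staging >> marts  # mart models depend on the staging layer
```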

     

    Responsibilities

    • Design, build, and deploy DBT Cloud models.
    • Design, build, and deploy Airflow jobs (Astronomer).
    • Identify and test for bugs and bottlenecks in the ELT/ETL solution.

     

    Requirements

    • 5+ years of experience in software engineering (GIT, CI/CD, Shell scripting).
    • 3+ years of experience building scalable and robust Data Platforms (SQL, DWH, Distributed Data Processing).
    • 2+ years of experience developing in DBT Core/Cloud.
    • 2+ years of experience with Snowflake.
    • 2+ years of experience with Airflow.
    • 2+ years of experience with Python.
    • Good spoken English.

     

    Nice to have

    • Proficiency in message queues (Kafka).
    • Experience with cloud services (Azure).
    • CI/CD knowledge (Jenkins, Groovy scripting).