Jobs

  • 25 views · 0 applications · 23d

    Senior Data Engineer

    Full Remote · Romania · 4 years of experience · C1 - Advanced

    Project Description:

    We are looking for a Senior Data Engineer.

    This role focuses on enabling RM practice for mapping business applications and services delivered within this domain. The position offers an opportunity to take ownership of data product pipelines, ensuring they are robust, maintainable, and aligned with business needs.

    Responsibilities:

    • Apply data engineering practices and standards to develop robust and maintainable data pipelines
    • Analyze and organize raw data ingestion pipelines
    • Evaluate business needs and objectives
    • Support senior business stakeholders in defining new data product use cases and their value
    • Take ownership of data product pipelines and their maintenance
    • Explore ways to enhance data quality and reliability, be the "Quality Gatekeeper" for developed Data Products
    • Adapt and apply best practices from the Data One community
    • Be constantly on the lookout for ways to improve best practices and efficiencies and make concrete proposals.
    • Take leadership and collaborate with other teams proactively to keep things moving
    • Be flexible and take on other responsibilities within the scope of the Agile Team

    Requirements:

    Must have:

    • Hands-on experience with Snowflake
    • Proven experience as a Data Engineer
    • Solid knowledge of data modeling techniques (e.g., Data Vault)
    • Advanced expertise with ETL tools (Talend, Alteryx, etc.)
    • Strong SQL programming skills; working knowledge of Python is an advantage
    • Experience with data transformation tools (DBT)
    • 2–3 years of experience in DB/ETL development (Talend and DBT preferred)
    • Hold a B.Sc., B.Eng., or higher, or equivalent in Computer Science, Data Engineering, or related fields
    • Be able to communicate in English at the level of C1+

    Nice to have:

    • Snowflake certification is a plus
    • Experience with Agile methodologies in software development
    • Familiarity with DevOps/DataOps practices (CI/CD, GitLab, DataOps.live)
    • Experience with the full lifecycle management of data products
    • Knowledge of Data Mesh and FAIR principles

    We offer:

    • Long-term B2B contract
    • Friendly atmosphere and Trust-based managerial culture
    • 100% remote work
    • Innovative Environment: Work on cutting-edge AI technologies in a highly impactful program
    • Growth Opportunities: Opportunities for professional development and learning in the rapidly evolving field of AI
    • Collaborative Culture: Be a part of a diverse and inclusive team that values collaboration and innovation
    • Participate only in international projects
    • Referral bonuses for recommending your friends to the Unitask Group
    • Paid Time Off (Vacation, Sick & Public Holidays in your country)
  • 36 views · 0 applications · 23d

    Data Engineer

    Full Remote · Romania · 4 years of experience · C1 - Advanced

    Description:

    We are looking for a Mid–Senior Data Engineer to join our team and contribute to the development of robust, scalable, and high-quality data solutions.

    This role blends hands-on data engineering with analytical expertise, focusing on building efficient pipelines, ensuring data reliability, and enabling advanced analytics to support business insights.

    As part of our team, you will work with modern technologies such as Snowflake, DBT, and Python, and play a key role in enhancing data quality, implementing business logic, and applying statistical methods to real-world challenges.

    This position offers the opportunity to work in an innovative environment, contribute to impactful AI-driven projects, and grow professionally within a collaborative and supportive culture.

    Responsibilities:

    • Build and maintain data pipelines on Snowflake (pipes and streams)
    • Implement business logic to ensure scalable and reliable data workflows
    • Perform data quality assurance and checks using DBT
    • Conduct exploratory data analysis (EDA) to support business insights
    • Apply statistical methods and decision tree techniques to data challenges
    • Ensure model reliability through cross-validation
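
    For illustration only (not part of the posting): a minimal sketch of the kind of cross-validated decision-tree check mentioned above, using scikit-learn with a toy dataset and placeholder parameters in place of real project data.

        from sklearn.datasets import load_iris
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # Toy dataset as a stand-in for real project data.
        X, y = load_iris(return_X_y=True)
        model = DecisionTreeClassifier(max_depth=3, random_state=42)

        # 5-fold cross-validation to gauge model reliability.
        scores = cross_val_score(model, X, y, cv=5)
        print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")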

    Requirements:

    • Proven experience with Snowflake – hands-on expertise in building and optimizing data pipelines.
    • Strong knowledge of DBT – capable of implementing robust data quality checks and transformations.
    • Proficiency in Python, with experience using at least one of the following libraries: Pandas, Matplotlib, or Scikit-learn.
    • Familiarity with Jupyter for data exploration, analysis, and prototyping.
    • Hold a B.Sc., B.Eng., or higher, or equivalent in Computer Science, Data Engineering, or related fields
    • Be able to communicate in English at the level of C1+

    We offer:

    • Long-term B2B contract
    • Friendly atmosphere and Trust-based managerial culture
    • 100% remote work
    • Innovative Environment: Work on cutting-edge AI technologies in a highly impactful program
    • Growth Opportunities: Opportunities for professional development and learning in the rapidly evolving field of AI
    • Collaborative Culture: Be a part of a diverse and inclusive team that values collaboration and innovation
    • Participate only in international projects
    • Referral bonuses for recommending your friends to the Unitask Group
    • Paid Time Off (Vacation, Sick & Public Holidays in your country)
  • 44 views · 8 applications · 23d

    ETL Architect (Informatica / Talend / SSIS)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced

    We are seeking an experienced ETL Architect with strong expertise in data integration, ETL pipelines, and data warehousing. The ideal candidate will have hands-on experience with tools such as Informatica PowerCenter/Cloud, Talend, and Microsoft SSIS, and will be responsible for architecting scalable, secure, and high-performing ETL solutions. This role involves collaborating with business stakeholders, data engineers, and BI teams to deliver clean, consistent, and reliable data for analytics, reporting, and enterprise systems.

     

    Key Responsibilities

    • Design, architect, and implement ETL pipelines to extract, transform, and load data across multiple sources and targets.
    • Define ETL architecture standards, frameworks, and best practices for performance and scalability.
    • Lead the development of data integration solutions using Informatica, Talend, SSIS, or equivalent ETL tools.
    • Collaborate with business analysts, data engineers, and BI developers to translate business requirements into data models and ETL workflows.
    • Ensure data quality, security, and compliance across all ETL processes.
    • Troubleshoot and optimize ETL jobs for performance, scalability, and reliability.
    • Support data warehouse / data lake design and integration.
    • Manage ETL environments, upgrades, and migration to cloud platforms (AWS, Azure, GCP).
    • Provide mentoring, code reviews, and technical leadership to junior ETL developers.
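
    As a generic, hedged illustration of the extract-transform-load pattern these responsibilities revolve around (the file name, columns, and table are hypothetical, and real pipelines would use Informatica, Talend, or SSIS rather than hand-written Python):

        import csv
        import sqlite3

        def extract(path):
            # Read source rows from a CSV file (hypothetical source).
            with open(path, newline="") as f:
                yield from csv.DictReader(f)

        def transform(rows):
            # Clean and type-cast fields before loading.
            for row in rows:
                yield (row["id"], row["name"].strip().title(), float(row["amount"]))

        def load(rows, db_path="warehouse.db"):
            # Load into a target table (SQLite stands in for a real warehouse).
            conn = sqlite3.connect(db_path)
            conn.execute("CREATE TABLE IF NOT EXISTS customers (id TEXT, name TEXT, amount REAL)")
            conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
            conn.commit()
            conn.close()

        load(transform(extract("customers.csv")))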

     

    Requirements

    • 7+ years of experience in ETL development, with at least 3 years in a lead/architect role.
    • Strong expertise in one or more major ETL tools: Informatica (PowerCenter/Cloud), Talend, SSIS.
    • Experience with relational databases (Oracle, SQL Server, PostgreSQL) and data warehousing concepts (Kimball, Inmon).
    • Strong knowledge of SQL, PL/SQL, stored procedures, performance tuning.
    • Familiarity with cloud data integration (AWS Glue, Azure Data Factory, GCP Dataflow/Dataproc).
    • Experience in handling large-scale data migrations, batch and real-time ETL processing.
    • Strong problem-solving, analytical, and architectural design skills.
    • Excellent communication skills, with the ability to engage technical and non-technical stakeholders.

     

    Nice to Have

    • Hands-on experience with big data platforms (Hadoop, Spark, Kafka, Databricks).
    • Knowledge of data governance, MDM, and metadata management.
    • Familiarity with API-based integrations and microservices architectures.
    • Prior experience in industries such as banking, insurance, healthcare, or telecom.
    • Certification in Informatica, Talend, or cloud ETL platforms.
  • 27 views · 3 applications · 20d

    Oracle Cloud Architect

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Description

    You will be joining GlobalLogic’s Media and Entertainment (M&E) practice, a specialized team within a leading digital engineering company. Our practice is at the forefront of the media industry’s technological evolution, partnering with the world’s largest broadcasters, content creators, and distributors. We have a proven track record of engineering complex solutions, including cloud-based OTT platforms (like VOS360), Media/Production Asset Management (MAM/PAM) systems, software-defined broadcast infrastructure, and innovative contribution/distribution workflows.

    This engagement is for a landmark cloud transformation project for a major client in the media sector. The objective is to architect the strategic migration of a large-scale linear broadcasting platform from its current foundation on AWS to Oracle Cloud Infrastructure (OCI). You will be a key advisor on a project aimed at modernizing critical broadcast operations, enhancing efficiency, and building a future-proof cloud architecture.

     

    Requirements

    We are seeking a seasoned cloud professional with a deep understanding of both cloud infrastructure and the unique demands of the media industry.

    • Expert-Level OCI Experience: Proven hands-on experience designing, building, and managing complex enterprise workloads on Oracle Cloud Infrastructure (OCI).
    • Cloud Migration Expertise: Demonstrable experience architecting and leading at least one significant cloud-to-cloud migration project, preferably from AWS to OCI.
    • Strong Architectural Acumen: Deep understanding of cloud architecture principles across compute, storage, networking, security, and identity/access management.
    • Client-Facing & Consulting Skills: Exceptional communication and presentation skills, with the ability to act as a credible and trusted advisor to senior-level clients.
    • Media & Entertainment Domain Knowledge (Highly Preferred): Experience with broadcast and media workflows is a significant advantage. Familiarity with concepts like linear channel playout, live video streaming, media asset management (MAM), and IP video standards (e.g., SMPTE 2110) is highly desirable.
    • Infrastructure as Code (IaC): Proficiency with IaC tools, particularly Terraform, for automating OCI environment provisioning.
    • Professional Certifications: An OCI Architect Professional certification is strongly preferred. Equivalent certifications in AWS are also valued.

     

    Job responsibilities

    As the OCI Architect, you will be the primary technical authority and trusted advisor for this cloud migration initiative. Your responsibilities will include:

    • Migration Strategy & Planning: Assess the client’s existing AWS-based media workflows and architect a comprehensive, phased migration strategy to OCI.
    • Architecture Design: Design a secure, scalable, resilient, and cost-efficient OCI architecture tailored for demanding, 24/7 linear broadcast operations. This includes defining compute, storage, networking (including IP video transport), and security models.
    • Technical Leadership: Serve as the subject matter expert on OCI for both the client and GlobalLogic engineering teams, providing hands-on guidance, best practices, and technical oversight.
    • Stakeholder Engagement: Effectively communicate complex architectural concepts and migration plans to senior client stakeholders, technical teams, and project managers.
    • Proof of Concept (PoC) Execution: Lead and participate in PoCs to validate architectural designs and de-risk critical components of the migration.
    • Cost Optimization: Develop cost models and identify opportunities for optimizing operational expenses on OCI, ensuring the solution is commercially viable.
    • Documentation: Create and maintain high-quality documentation, including architectural diagrams, design specifications, and operational runbooks.
  • 122 views · 15 applications · 20d

    Data Engineer

    Full Remote · Ukraine · Product · 1 year of experience · B1 - Intermediate

    🌟Ready to design scalable data solutions and influence product growth?
     

    Softsich is a young and ambitious international product tech company that develops scalable B2B digital platforms. We’re looking for a Data Engineer eager to grow with us and bring modern data engineering practices into high-load solutions.

     

    Your key responsibilities will include:

    • Extending the existing data warehouse (AWS: Redshift, S3, EMR) with dbt.
    • Developing and maintaining data pipelines (Kafka, MongoDB, PostgreSQL, messaging systems) using AWS Glue.
    • Building and optimizing data models for analytics and reporting (dbt, SQL).
    • Creating data verification scripts in Python (pandas, numpy, marimo / Jupyter Notebook).
    • Maintaining infrastructure for efficient and secure data access.
    • Collaborating with product owners and analysts to provide insights.
    • Ensuring data quality, integrity, and security across the lifecycle.
    • Keeping up with emerging data engineering technologies and trends.
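
    A minimal sketch (assumed, not from the posting) of what a pandas-based data verification script like those mentioned above might look like; the column names and checks are hypothetical:

        import pandas as pd

        def verify(df: pd.DataFrame, key: str, required: list) -> dict:
            # Report basic integrity issues: duplicate keys and nulls in required columns.
            return {
                "rows": len(df),
                "duplicate_keys": int(df[key].duplicated().sum()),
                "null_counts": {col: int(df[col].isna().sum()) for col in required},
            }

        sample = pd.DataFrame({"id": [1, 2, 2], "amount": [10.0, None, 5.0]})
        print(verify(sample, key="id", required=["amount"]))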

     

    It’s a match if you have:

    • 1+ year of experience as a Data Engineer.
    • Strong understanding of data warehousing concepts and practices.
    • Hands-on experience with AWS (EC2, S3, IAM, VPC, CloudWatch).
    • Experience with dbt.
    • Proficiency in SQL, PostgreSQL, and MongoDB.
    • Experience with AWS Glue.
    • Knowledge of Kafka, SQS, SNS.
    • Strong Python skills for automation and data processing.
    • Ukrainian – C1 level or native.
    • English – Intermediate (written and spoken).
    • You are proactive, communicative, and ready to ask questions and offer solutions instead of waiting for answers.

    Nice to have:

    • Knowledge of other cloud platforms (Azure, GCP).
    • Experience with Kubernetes, Docker.
    • Java/Scala as additional tools.
    • Exposure to ML/AI technologies.
    • Experience with data security tools and practices.

     

    What we offer:

    • Flexible schedule and remote format or offices in Warsaw/Kyiv – you choose.
    • 24 paid vacation days, sick leaves, and health insurance (UA-based, other locations in progress).
    • A supportive, friendly team where knowledge-sharing is part of the culture.
    • Coverage for professional events and learning.
    • Birthday greetings, team buildings, and warm human connection beyond work.
    • Zero joules of energy to the aggressor state, its affiliated businesses, or partners.

     

    🚀 If you’re ready to build scalable and impactful data solutions – send us your CV now, we’d love to get to know you better!

  • 51 views · 13 applications · 20d

    Data Engineer (with Azure)

    Full Remote · Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate

    Main Responsibilities:

    The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline to support customer requirements.

     

    You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, on implementation projects for corporate clients across the EU, CIS, the United Kingdom, and the Middle East.

    Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.

     

    Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization.

     

    Mandatory Requirements:

    – 2+ years of experience, ideally within a Data Engineer role.

    – understanding of data modeling, data warehousing concepts, and ETL processes

    – experience with Azure Cloud technologies

    – experience in distributed computing principles and familiarity with key architectures, broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)

    – Understanding of landing, staging area, data cleansing, data profiling, data security and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)

    – SQL skills

    – communication and interpersonal skills

    – English – B2

    – Ukrainian language

     

    It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, advanced analytics projects, and/or professional certification in data & analytics.

     

    We offer:

    – professional growth and international certification

    – free-of-charge technical and business trainings and top bootcamps (worldwide, including courses at Microsoft HQ in Redmond)

    – innovative data & analytics projects and practical experience with cutting-edge Azure data & analytics technologies on various customer projects

    – great compensation and individual bonus remuneration

    – medical insurance

    – long-term employment

    – individual development plan

  • 62 views · 7 applications · 20d

    Big Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · B2 - Upper Intermediate

    We are looking for a Data Engineer to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar’s NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.

     

    What you will do

    • Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    • Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
    • Implement NLP/LLM-specific data processing: text cleaning and normalization, such as filtering toxic content, de-duplication, de-noising, and detection and removal of personal data (a toy sketch follows this list).
    • Build SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as a teacher.
    • Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    • Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    • Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    • Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    • Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    • Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
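
    For illustration only (not from the posting): an exact-match de-duplication sketch of the kind of text cleaning described above; production pipelines would typically add language detection, fuzzy/MinHash de-duplication, and PII removal on top of this.

        import hashlib
        import re

        def normalize(text: str) -> str:
            # Lowercase and collapse whitespace: a toy stand-in for full text normalization.
            return re.sub(r"\s+", " ", text.strip().lower())

        def dedupe(docs):
            # Keep the first occurrence of each normalized document (exact-match only).
            seen, unique = set(), []
            for doc in docs:
                digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
                if digest not in seen:
                    seen.add(digest)
                    unique.append(doc)
            return unique

        print(dedupe(["Привіт  світ", "привіт світ", "Інший документ"]))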

     

    Qualifications and experience needed

    • Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    • NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project’s focus. Understanding of FineWeb2 or a similar processing pipeline approach.
    • Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    • Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    • Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
    • Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    • Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    • Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

     

    A plus would be

    • Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    • Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
    • CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    • Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    • Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.

     

    What we offer

    • Office or remote – it’s up to you. You can work from anywhere, and we will arrange your workplace.
    • Remote onboarding.
    • Performance bonuses.
    • We train employees with the opportunity to learn through the company’s library, internal resources, and programs from partners.
    • Health and life insurance.  
    • Wellbeing program and corporate psychologist.  
    • Reimbursement of expenses for Kyivstar mobile communication.  
  • 49 views · 0 applications · 19d

    Data Engineer to $7500

    Full Remote · Poland · 5 years of experience · B2 - Upper Intermediate

    Who we are:

    Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries. 

     

    About the Product: 

    Gett is a Ground Transportation Solution with the mission to organize all the best mobility providers in one global platform with great UX - optimizing the entire experience from booking and riding to invoicing and analytics, to save businesses time and money.

     

    About the Role: 
    We are looking for a talented Data Engineer to join us.

    As a Data Engineer at Gett, you will be a key member of the data team, at the core of a data-driven company, developing scalable, robust data platforms and data models and providing business intelligence. You will work in an evolving, challenging environment with a variety of data sources, technologies, and stakeholders to deliver the best solutions to support the business and provide operational excellence.

     

    Key Responsibilities: 

    • Design, Develop & Deploy Data Pipelines and Data Models on various Data Lake / DWH layers;
    • Ingest data from and export data to multiple third-party systems and platforms (e.g., Salesforce, Braze, SurveyMonkey);
    • Architect and implement data-related microservices and products;
    • Ensure the implementation of best practices in data management, including data lineage, observability, and data contracts;
    • Maintain, support, and refactor legacy models and layers within the DWH;
    • Planning and owning complex projects that involve business logic and technical implementation.
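
    A hedged sketch (not part of the posting) of the retry/backoff pattern typically used when ingesting from third-party APIs such as those listed above; the URL and headers are placeholders, and the requests library is assumed to be available.

        import time
        import requests  # assumed available

        def fetch_with_retries(url, headers=None, max_attempts=5, base_delay=1.0):
            # GET an external API with exponential backoff on throttling or server errors.
            for attempt in range(1, max_attempts + 1):
                response = requests.get(url, headers=headers, timeout=30)
                if response.status_code == 429 or response.status_code >= 500:
                    if attempt == max_attempts:
                        response.raise_for_status()
                    time.sleep(base_delay * 2 ** (attempt - 1))
                    continue
                response.raise_for_status()
                return response.json()

        # Example (placeholder endpoint):
        # data = fetch_with_retries("https://api.example.com/v1/contacts",
        #                           headers={"Authorization": "Bearer ..."})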

     

    Required Competence and Skills:

    • 5+ years of experience in data engineering;
    • Proficiency in Python and SQL;
    • Strong background in data modeling, ETL development, and data warehousing;
    • Experience with data technologies such as Airflow, Iceberg, Hive, Spark, Airbyte, Kafka, and Postgres;

    • Experience with cloud environments like AWS, GCP, or Azure.
       

    Nice to have:

    • Experience with Terraform, Kubernetes (K8S), or ArgoCD;
    • Experience in production-level Software development;
    • A bachelor’s degree in Computer Science, Engineering, or a related field.
       

    Why Us?

    We provide 20 days of vacation leave per calendar year (plus the official national holidays of the country you are based in).

     

    We provide full accounting and legal support in all countries where we operate.

     

    We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.

     

    We offer a highly competitive package with yearly performance and compensation reviews.

  • 50 views · 9 applications · 19d

    Data Engineer (Google Cloud Platform)

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B2 - Upper Intermediate

    Cloudfresh ⛅️ is a Global Google Cloud Premier Partner, Zendesk Premier Partner, Asana Solutions Partner, GitLab Select Partner, Hubspot Platinum Partner, Okta Activate Partner, and Microsoft Partner.

    Since 2017, we’ve been specializing in the implementation, migration, integration, audit, administration, support, and training for top-tier cloud solutions. Our products focus on cutting-edge cloud computing, advanced location and mapping, seamless collaboration from anywhere, unparalleled customer service, and innovative DevSecOps.

    We are looking for a Data Engineer with solid experience in Google Cloud Platform to strengthen our technical team and support projects for international clients.

    Requirements:

    • 3+ years of professional experience in Data Engineering.
    • Hands-on expertise with Google Cloud services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Composer (Airflow), Cloud Storage, Cloud Functions, and IAM.
    • Strong SQL knowledge, including optimization of complex queries.
    • Proficiency in Python for data processing and pipeline development.
    • Solid understanding of data warehouse, data lake, and data mesh architectures.
    • Experience building ETL/ELT pipelines and automating workflows in GCP.
    • CI/CD (GitLab CI or Cloud Build) and basic Terraform (GCP).
    • Familiarity with integrating data from APIs, relational databases, NoSQL systems, and SaaS platforms.
    • English proficiency at Upper-Intermediate (B2) or higher.

    Responsibilities:

    • Design and build modern data platforms for enterprise clients.
    • Develop, optimize, and maintain ETL/ELT processes in GCP.
    • Migrate data from on-premises or other cloud platforms into GCP.
    • Work with both structured and unstructured datasets at scale.
    • Ensure performance optimization and cost-efficiency of pipelines.
    • Engage with clients to gather requirements, run workshops, and deliver technical presentations.
    • Collaborate with architects, DevOps, and ML engineers to deliver end-to-end cloud solutions.
    • Document solutions and produce technical documentation for internal and client use.
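
    For illustration only: a minimal BigQuery query sketch using the google-cloud-bigquery client; the project, dataset, and table names are placeholders, and application-default credentials are assumed.

        from google.cloud import bigquery  # assumes google-cloud-bigquery is installed

        client = bigquery.Client(project="my-project")  # placeholder project ID
        query = """
            SELECT country, COUNT(*) AS orders
            FROM `my-project.sales.orders`  -- placeholder table
            GROUP BY country
            ORDER BY orders DESC
        """
        for row in client.query(query).result():
            print(row.country, row.orders)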

    Would be a plus:

    • Experience with Vertex AI or Apigee.
    • Datastream (CDC), BigQuery DTS, Looker / Looker Studio.
    • Dataplex, Data Catalog, policy tags, basic DLP concepts.
    • Google Cloud certifications (Data Engineer, Architect, Digital Leader).
    • Background in building high-load solutions and optimizing GCP costs.

    Work conditions:

    • Competitive Salary: Receive a competitive base salary with employment or contractor arrangement depending on location.
    • Flexible Work Format: Work remotely with flexible hours with core hours aligned to EET, allowing you to balance your professional and personal life efficiently.
    • Training with Leading Cloud Products: Access in-depth training on cutting-edge cloud solutions, enhancing your expertise and equipping you with the tools to succeed in an ever-evolving industry.
    • International Collaboration: Work alongside A-players and seasoned professionals in the cloud industry. Expand your expertise by engaging with international markets across the EMEA and CEE regions.
    • When applying to this position, you consent to the processing of your personal data by CLOUDFRESH for the purposes necessary to conduct the recruitment process, in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 (GDPR).
    • Additionally, you agree that CLOUDFRESH may process your personal data for future recruitment processes.
  • 51 views · 3 applications · 19d

    Data Engineer

    Ukraine · 5 years of experience · B2 - Upper Intermediate

    On behalf of our Client from France, Mobilunity is looking for a Senior Data Engineer for a 2-month engagement.

     

    Our client is a table management software and CRM that enables restaurant owners to welcome their customers easily. The app is useful for managing booking requests and registering new bookings. You can view all your bookings, day after day, wherever you are, and optimize your restaurant’s occupancy rate. Our client offers a commission-free booking solution that guarantees freedom above all. New technologies thus become restaurateurs’ best allies for saving time and gaining customers while ensuring a direct relationship with them.

     

    Their goal is to become the #1 growth platform for restaurants. They believe that restaurants have become lifestyle brands, and with forward-thinking digital products, restaurateurs will create the same perfect experience online as they already do offline, resulting in a more valuable, loyalty-led business.

     

    Our client is looking for a Senior Engineer to align key customer data across Salesforce, Chargebee, Zendesk, other tools and their Back-office. The goal is a dedicated, historized “Customer 360” table at restaurant and contact levels that exposes discrepancies and gaps, supports updates/cleaning across systems where appropriate, and includes monitoring and Slack alerts.

    Tech Stack: Databricks (Delta/Unity Catalog), Python, SQL, Slack.

     

    Responsibilities:

    • Design and build a consolidated Customer 360 table in Databricks that links entities across Salesforce, Chargebee, Zendesk, and Back‑office (entity resolution, deduplication, survivorship rules)
    • Implement data cleaning and standardization rules; where safe and approved, update upstream systems via Python/API
    • Historize customer attributes to track changes over time
    • Create robust data quality checks (completeness, consistency across systems, referential integrity, unexpected changes) and surface issues via Slack alerts
    • Establish operational monitoring: freshness SLAs, job success/failure notifications
    • Document schemas, matching logic, cleaning rules, and alert thresholds; define ownership and escalation paths
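
    A minimal sketch (assumptions only, not the client’s actual setup) of the monitoring-and-Slack-alert idea above, using only the standard library; the webhook URL, function names, and the consistency check are placeholders.

        import json
        import urllib.request

        SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

        def post_slack_alert(message: str) -> None:
            # Post a simple text message to a Slack incoming webhook.
            payload = json.dumps({"text": message}).encode("utf-8")
            request = urllib.request.Request(
                SLACK_WEBHOOK_URL, data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(request)

        def check_row_counts(salesforce_count: int, backoffice_count: int,
                             tolerance: float = 0.01) -> None:
            # Alert when the two systems disagree on customer counts beyond a tolerance.
            drift = abs(salesforce_count - backoffice_count) / max(backoffice_count, 1)
            if drift > tolerance:
                post_slack_alert(
                    f"Customer 360 check: {drift:.1%} count drift between Salesforce and Back-office"
                )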

     

    Requirements:

    • 5+ years in data engineering/analytics engineering with strong Python/SQL skills
    • Hands‑on experience with Databricks (Delta, SQL, PySpark optional) and building production data models
    • Experience integrating with external SaaS APIs (e.g., Salesforce REST/Bulk, Zendesk, Chargebee) including auth, rate limiting, retries, and idempotency
    • Solid grasp of entity resolution, deduplication, and survivorship strategies; strong SQL
    • Experience implementing data quality checks and alerting (Slack/webhooks or similar)
    • Security‑minded when handling PII (access control, minimization, logging)
    • Proficient with Git and PR-based workflows (Databricks Repos, code review, versioning)
    • Upper-Intermediate English, close to Advanced

     

    Nice to have:

    • Experience with Databricks (Delta/Unity Catalog)
    • Background in MDM/Golden Record/Customer 360 initiatives
    • Experience with CI/CD for data (tests, code review, environments) and Databricks Jobs for scheduling

    Success Criteria (by end of engagement):

    • Production Customer 360 table with documented matching logic and survivorship rules
    • Data is cleaned and consistent across systems where business rules permit; change history persisted
    • Automated data quality checks and Slack alerts in place; clear runbooks for triage
    • Documentation and ownership model delivered; stakeholders can self-serve the aligned view

     

    In return we offer:

    • The friendliest community of like-minded IT-people
    • Open knowledge-sharing environment – exclusive access to a rich pool of colleagues willing to share their endless insights into the broadest variety of modern technologies
    • Perfect office location in the city-center (900m from Lukyanivska metro station with a green and spacious neighborhood) or remote mode engagement: you can choose a convenient one for you, with a possibility to fit together both
    • No open-spaces setup – separate rooms for every team’s comfort and multiple lounge and gaming zones
    • English classes in 1-to-1 & group modes with elements of gamification
    • Neverending fun: sports events, tournaments, music band, multiple affinity groups

     

    🐳Come on board, and let’s grow together!🐳

  • 71 views · 9 applications · 18d

    Data Engineer

    Full Remote · Poland, Ukraine, Romania, Bulgaria, Lithuania · Product · 5 years of experience · B2 - Upper Intermediate

    Data Engineer (100% remote) in either Poland, Ukraine, Romania, Bulgaria, Lithuania, Latvia, or Estonia

     

    Point Wild helps customers monitor, manage, and protect against the risks associated with their identities and personal information in a digital world. Backed by WndrCo, Warburg Pincus and General Catalyst, Point Wild is dedicated to creating the world’s most comprehensive portfolio of industry-leading cybersecurity solutions. Our vision is to become THE go-to resource for every cyber protection need individuals may face - today and in the future. 

     

    Join us for the ride!

     

    About the Role:

    We are seeking a highly skilled Data Engineer with deep experience in Databricks and modern lakehouse architectures to join the Lat61 platform team. This role is critical in designing, building, and optimizing the pipelines, data structures, and integrations that power Lat61.

     

    You will collaborate closely with data architects, AI engineers, and product leaders to deliver a scalable, resilient, and secure foundation for advanced analytics, machine learning, and cryptographic risk management use cases.

     

    Your Day to Day:

    • Build and optimize data ingestion pipelines on Databricks (batch and streaming) to process structured, semi-structured, and unstructured data.
    • Implement scalable data models and transformations leveraging Delta Lake and open data formats (Parquet, Delta).
    • Design and manage workflows with Databricks Workflows, Airflow, or equivalent orchestration tools.
    • Implement automated testing, lineage, and monitoring frameworks using tools like Great Expectations and Unity Catalog.
    • Build integrations with enterprise and third-party systems via cloud APIs, Kafka/Kinesis, and connectors into Databricks.
    • Partner with AI/ML teams to provision feature stores, integrate vector databases (Pinecone, Milvus, Weaviate), and support RAG-style architectures.
    • Optimize Spark and SQL workloads for speed and cost efficiency across multi-cloud environments (AWS, Azure, GCP).
    • Apply secure-by-design data engineering practices aligned with Point Wild’s cybersecurity standards and evolving post-quantum cryptographic frameworks.
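
    For illustration (not Point Wild’s actual pipeline): a minimal local Delta Lake write/read sketch with PySpark and the delta-spark package; the table path and event data are placeholders, and on Databricks the session configuration below is not needed.

        from delta import configure_spark_with_delta_pip
        from pyspark.sql import SparkSession

        builder = (
            SparkSession.builder.appName("delta-demo")
            .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
            .config("spark.sql.catalog.spark_catalog",
                    "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        )
        spark = configure_spark_with_delta_pip(builder).getOrCreate()

        # Toy batch of semi-structured events written to a Delta table (path is a placeholder).
        events = spark.createDataFrame(
            [("evt-1", "login", "2024-01-02"), ("evt-2", "logout", "2024-01-02")],
            ["event_id", "event_type", "event_date"],
        )
        events.write.format("delta").mode("append").save("/tmp/lat61_events")

        spark.read.format("delta").load("/tmp/lat61_events").show()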

     

    What you bring to the table:

    • At least 5 years in Data Engineering with strong experience building production data systems on Databricks.
    • Expertise in PySpark, SQL, and Python.
    • Strong expertise with various AWS services.
    • Strong knowledge of Delta Lake, Parquet, and lakehouse architectures.
    • Experience with streaming frameworks (Structured Streaming, Kafka, Kinesis, or Pub/Sub).
    • Familiarity with DBT for transformation and analytics workflows.
    • Strong understanding of data governance and security controls (Unity Catalog, IAM).
    • Exposure to AI/ML data workflows (feature stores, embeddings, vector databases).
    • Detail-oriented, collaborative, and comfortable working in a fast-paced innovation-driven environment.

     

    Bonus Points:

    • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
    • Data Engineering experience in a B2B SaaS organization.

     

    Lat61 Mission

    The Lat61 platform will power the next generation of cybersecurity and AI-enabled decision-making. As a Data Engineer on this team, you will help deliver:

    • Multi-Modal Data Ingestion: Bringing together logs, telemetry, threat intel, identity data, cryptographic assets, and third-party feeds into a unified lakehouse.
    • AI Agent Enablement: Supporting Retrieval-Augmented Generation (RAG) workflows, embeddings, and feature stores to fuel advanced AI use cases across Point Wild products.
    • Analytics & Decision Systems: Providing real-time insights into risk posture, compliance, and security events through scalable pipelines and APIs.
    • Future-Proofing for Quantum: Laying the groundwork for automated remediation and transition to post-quantum cryptographic standards.

     

    Your work won’t just be about pipelines and data models - it will directly shape how enterprises anticipate, prevent, and respond to cybersecurity risks in an era of quantum disruption.

  • 68 views · 12 applications · 18d

    Middle Data Engineer

    Full Remote · EU · Product · 3 years of experience · B1 - Intermediate

    FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
    We develop innovations in the field of gambling and betting through a complex multi-component platform that can withstand enormous loads and provide a unique experience for players.
    FAVBET Tech does not organize and conduct gambling on its platform. Its main focus is software development.

     

    We are looking for a Middle/Senior Data Engineer to join our Data Integration Team.

    Main areas of work:

    • Betting/Gambling Platform Software Development – development of software that is easy to use and personalized for each customer.
    • Highload Development – development of highly loaded services and systems.
    • CRM System Development – development of a number of services to ensure a high level of customer service, effective engagement of new customers, and retention of existing ones.
    • Big Data – development of complex systems for processing and analysis of big data.
    • Cloud Services – we use cloud technologies for scaling and business efficiency.

    Responsibilities:

    • Design, build, install, test, and maintain highly scalable data management systems.
    • Develop ETL/ELT processes and frameworks for efficient data transformation and loading.
    • Implement, optimize, and support reporting solutions for the Sportsbook domain.
    • Ensure effective storage, retrieval, and management of large-scale data.
    • Improve data query performance and overall system efficiency.
    • Collaborate closely with data scientists and analysts to deliver data solutions and actionable insights.

    Requirements:

    • At least 2 years of experience in designing and implementing modern data integration solutions.
    • Master’s degree in Computer Science or a related field.
    • Proficiency in Python and SQL, particularly for data engineering tasks.
    • Hands-on experience with data processing, ETL (Extract, Transform, Load), ELT (Extract, Load, Transform) processes, and data pipeline development.
    • Experience with DBT framework and Airflow orchestration.
    • Practical experience with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
    • Experience with Snowflake.
    • Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
    • Experience in managing data warehouses and data lakes.
    • Familiarity with star and snowflake schema design.
    • Understanding of the difference between OLAP and OLTP.
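
    A hedged sketch of the dbt-plus-Airflow orchestration mentioned in the requirements; the DAG ID, schedule, and paths are placeholders, and Airflow 2.4+ with dbt installed on the worker is assumed.

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(
            dag_id="dbt_daily_refresh",      # placeholder DAG name
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            # Run dbt models, then run dbt tests against the refreshed models.
            dbt_run = BashOperator(task_id="dbt_run",
                                   bash_command="dbt run --profiles-dir /opt/dbt")
            dbt_test = BashOperator(task_id="dbt_test",
                                    bash_command="dbt test --profiles-dir /opt/dbt")
            dbt_run >> dbt_test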

    Would be a plus:

    • Experience with other cloud data services (e.g., AWS Redshift, Google BigQuery).
    • Experience with version control tools (e.g., GitHub, GitLab, Bitbucket).
    • Experience with real-time data processing (e.g., Kafka, Flink).
    • Familiarity with orchestration tools (e.g., Airflow, Luigi).
    • Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
    • Knowledge of data security and privacy practices.

    We offer:

    • 30 days off – we value rest and recreation;
    • Medical insurance for employees, company-paid training opportunities, and gym membership;
    • Remote work or the option to work from our own modern loft office with spacious workplaces and brand-new equipment (near Pochaina metro station);
    • Flexible work schedule – we expect a full-time commitment but do not track your working hours;
    • Flat hierarchy without micromanagement – our doors are open, and all teammates are approachable.

    During the war, the company actively supports the Ministry of Digital Transformation of Ukraine in the initiative to deploy an IT army and has already organized its own cyber warfare unit, which delivers crushing blows to the enemy’s IT infrastructure 24/7, coordinates with other cyber volunteers, and plans offensive actions on its IT front line.

  • 26 views · 0 applications · 17d

    Lead Data Engineer IRC277440

    Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client’s mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

     

    As part of the GlobalLogic team, you will grow, be challenged, and expand your skill set working alongside highly experienced and talented people.

    If this sounds like an exciting opportunity for you, send over your CV!

     

     

    Requirements

    MUST HAVE

    • AWS Platform: Working experience with AWS data technologies, including S3 and AWS SageMaker (SageMaker Unified is a plus)
    • Programming Languages: Strong programming skills in Python
    • Data Formats: Experience with JSON, XML and other relevant data formats
    • CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
    • Scripting and Automation: experience in scripting languages such as Python, PowerShell, etc.
    • Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc…
    • Source Code Management: Expertise with git commands and associated VCS (Gitlab, Github, Gitea or similar)
    • Documentation: Experience with markdown and, in particular, Antora for creating technical documentation
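
    For illustration only (bucket, key, and data layout are placeholders; boto3 and configured AWS credentials are assumed): reading a JSON object from S3, one of the AWS building blocks listed above.

        import json

        import boto3  # assumes boto3 is installed and AWS credentials are configured

        s3 = boto3.client("s3")
        # Placeholder bucket/key; the object is assumed to hold a JSON array of records.
        obj = s3.get_object(Bucket="example-health-data", Key="raw/patient_batch.json")
        records = json.loads(obj["Body"].read())
        print(f"Loaded {len(records)} records")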

     

     

    NICE TO HAVE
    • Previous healthcare or medical device experience
    • Healthcare interoperability tools: previous experience with integration engines such as InterSystems, Lyniate, Redox, Mirth Connect, etc.
    • Other data technologies, such as Snowflake and Trino/Starburst
    • Experience working with healthcare data, including HL7v2, FHIR, and DICOM
    • FHIR and/or HL7 certifications
    • Building software classified as Software as a Medical Device (SaMD)
    • Understanding of EHR technologies such as Epic, Cerner, etc.
    • Experience implementing enterprise-grade cybersecurity and privacy by design in software products
    • Experience working in digital health software
    • Experience developing global applications
    • Strong understanding of the SDLC – Waterfall and Agile methodologies
    • Experience leading software development teams onshore and offshore

     

    Job responsibilities

    – Develops, documents, and configures systems specifications that conform to defined architecture standards, address business requirements, and processes in the cloud development & engineering.

    – Involved in planning of system and development deployment as well as responsible for meeting compliance and security standards.

    – API development using AWS services in a scalable, microservices-based architecture

    – Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.

    – May document testing and maintenance of system updates, modifications, and configurations.

    – May act as a liaison with key technology vendor technologists or other business functions.

    – Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.

    – Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or customisation solution would be required.

    – Test the quality of a product and its ability to perform a task or solve a problem.

    – Perform basic maintenance and performance optimisation procedures in each of the primary operating systems.

    – Ability to document detailed technical system specifications based on business system requirements

    – Ensures system implementation compliance with global & local regulatory and security standards (i.e. HIPAA, SOCII, ISO27001, etc.)

  • 31 views · 2 applications · 13d

    Senior Data Engineer

    Full Remote · Spain, Poland, Portugal, Romania · 5 years of experience · B2 - Upper Intermediate

    Project tech stack: Snowflake, AWS, Python/dbt, DWH design & implementation of medallion architecture, strong integration experience, data modelling for analytical solutions,  CI/CD

     

    We are looking for a Senior Data Engineer to build and scale a Snowflake-based data platform supporting Credit Asset Management and Wealth Solutions. The role involves ingesting data from SaaS investment platforms via data shares and custom ETL, establishing a medallion architecture, and modeling data into appropriate data marts to expose it for analytical consumption.

     

     

    About the project

    Our client is a global real estate services company specializing in the management and development of commercial properties. Over the past several years, the organization has made significant strides in systematizing and standardizing its reporting infrastructure and capabilities. Due to the increased demand for reporting, the organization is seeking a dedicated team to expand capacity and free up existing resources.

     

    Skills & Experience

    • Bachelor's degree in Computer Science, Engineering, or related field;
    • 5+ years of experience in data engineering roles;
    • Strong knowledge of SQL, data modeling, database management systems, and optimization;
    • Strong experience with Snowflake: proven experience building scalable data pipelines into Snowflake, including data shares and custom connectors.
    • Hands-on ETL/ELT experience; Workato experience strongly preferred.
    • Solid Python and/or dbt experience for transformations and testing.
    • Proficiency with AWS platforms for scalable solutions, Azure is a plus;
    • Understanding of data governance, data modeling & analysis and data quality concepts and requirements;
    • Experience in implementing medallion architecture and data quality frameworks.
    • Understanding of data lifecycle, DataOps concepts, and basic design patterns;
    • Experience setting up IAM, access controls, catalog/lineage, and CI/CD for data.
    • Excellent communication and ability to work with business stakeholders to shape requirements.

       

    Nice to Have

    • Domain exposure to credit/investments and insurance data
    • Familiarity with schemas and data models from: BlackRock Aladdin, Clearwater, WSO, SSNC PLM
    • Experience with Databricks, Airflow, or similar orchestration tools
    • Prior vendor/staff augmentation experience in fast-moving environments

     

     

    Responsibilities

    • Build and maintain scalable data pipelines into Snowflake using Workato and native Snowflake capabilities;
    • Integrate heterogeneous vendor data via data shares and custom ETL;
    • Implement and enforce medallion architecture (bronze/silver/gold) and data quality checks (see the sketch after this list);
    • Collaborate with the tech lead and business partners to define logical data marts for analytics and reporting;
    • Contribute to non-functional setup: IAM/role-based access, data cataloging, lineage, access provisioning, monitoring, and cost optimization;
    • Document data models, schemas, pipelines, and operational runbooks;
    • Operate effectively in a less-structured environment; proactively clarify priorities and drive outcomes;
    • Collaborate closely with team members and other stakeholders;
    • Provide technical support and mentoring to junior data engineers;
    • Participate in data governance and compliance efforts;
    • Document data pipelines, processes, and best practices;
    • Evaluate and recommend new data technologies.
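    The "data quality checks" bullet above is the kind of gate that sits between the silver and gold layers. Below is a minimal, hypothetical sketch of such a gate in Python, reusing the silver DataFrame from the earlier sketch in this listing; the key columns and checks are illustrative assumptions, not the project's actual framework.

    import pandas as pd

    KEY_COLUMNS = ["portfolio_id", "as_of_date"]  # hypothetical key columns

    def run_quality_checks(df: pd.DataFrame) -> list[str]:
        """Return human-readable failures; an empty list means the gate passes."""
        failures = []
        if df.empty:
            return ["table is empty"]
        for col in KEY_COLUMNS:
            if df[col].isna().any():
                failures.append(f"nulls found in key column '{col}'")
        if df.duplicated(subset=KEY_COLUMNS).any():
            failures.append("duplicate keys detected")
        return failures

    # Usage: block promotion to the gold layer if any check fails.
    # issues = run_quality_checks(silver)
    # if issues:
    #     raise ValueError("Data quality gate failed: " + "; ".join(issues))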

     

    More
  • Β· 101 views Β· 8 applications Β· 12d

    Data Analyst

    Ukraine Β· Product Β· 1 year of experience Β· A2 - Elementary
    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country. At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT...

    Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.

    At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank’s products and services, and develop their businesses because we are #Together_with_Ukraine.

    Your future responsibilities:

    • Preparing samples and datasets from the data warehouse (AWS, SQL)
    • Developing and supporting BI reports for business units
    • Optimizing existing reports, automating reporting processes
    • Participating in all stages of the development life cycle: from requirements gathering and analytics to testing and delivery
    • Testing results and organizing the data validation process
    • Working with ad-hoc queries (analytical queries based on business needs)
    • Participating in script migration processes from the old data warehouse to the new one (AWS)
    • Documenting solutions and development processes
    • Communicating with business customers and other units (gathering and agreeing on requirements, presenting results)

    Your skills and experience:

    • Higher education (in economics, a technical field, or mathematics)
    • At least 1 year of experience in BI / reporting
    • Confident knowledge of SQL (writing complex queries, optimization, procedures)
    • Experience with Power BI / Report Builder
    • Knowledge of Python (pandas, PySpark) and Airflow will be an advantage (see the sketch after this list)
    • Understanding of data warehouse architecture
    • Analytical thinking, ability to work with large amounts of data
    • Ability to transform business requirements into technical tasks
    • Willingness to work in a team, openness to new technologies
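    Several of the points above (BI reporting, automation of reporting processes, and the Python/Airflow advantage) come together in pipeline-style report automation. Here is a minimal sketch assuming a recent Airflow 2.x installation; the DAG id, schedule, and the inline sample data are hypothetical placeholders for a real warehouse query.

    from datetime import datetime

    import pandas as pd
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def build_daily_report():
        # Placeholder for a real SQL extract from the warehouse (AWS).
        df = pd.DataFrame({"client_segment": ["retail", "sme"], "active_clients": [120, 45]})
        df.to_csv("/tmp/daily_report.csv", index=False)  # hypothetical report target

    with DAG(
        dag_id="daily_bi_report",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",         # Airflow 2.4+; older 2.x uses schedule_interval
        catchup=False,
    ) as dag:
        PythonOperator(task_id="build_daily_report", python_callable=build_daily_report)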

    We offer what matters most to you:

    • Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
    • Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
    • Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
    • Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
    • Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile. Corporate library and English lessons
    • Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
    • Career opportunities: we encourage advancement within the bank across functions
    • Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Sql-Oracle, PgSql, MsSql, Sybase. Data management: Kafka, Airflow, Spark, Flink
    • Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bank’s veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes

    Why Raiffeisen Bank?

    • Our main value is people, and we support and recognize them, educate them and involve them in changes. Join Raif’s team because for us YOU matter!
    • One of the largest lenders to the economy and agricultural business among private banks
    • Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
    • The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, Π‘ΠœΠ†Π›Π˜Π’Π†)
    • One of the largest IT product teams among the country’s banks
    • One of the largest taxpayers in Ukraine: 6.6 billion UAH were paid in taxes in 2023

    Opportunities for Everyone:

    • Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
    • We support the principles of diversity, equality and inclusiveness
    • We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
    • We cooperate with students and older people, creating conditions for growth at any career stage

    Want to learn more? β€” Follow us on social media:

    Facebook, Instagram, LinkedIn


    More