Jobs
· 25 views · 0 applications · 23d
Senior Data Engineer
Full Remote · Romania · 4 years of experience · C1 - Advanced
Project Description:
We are looking for a Senior Data Engineer.
This role focuses on enabling RM practice for mapping business applications and services delivered within this domain. The position offers an opportunity to take ownership of data product pipelines, ensuring they are robust, maintainable, and aligned with business needs.
Responsibilities:
- Apply data engineering practices and standards to develop robust and maintainable data pipelines
- Analyze and organize raw data ingestion pipelines
- Evaluate business needs and objectives
- Support senior business stakeholders in defining new data product use cases and their value
- Take ownership of data product pipelines and their maintenance
- Explore ways to enhance data quality and reliability, be the "Quality Gatekeeper" for developed Data Products
- Adapt and apply best practices from the Data One community
- Be constantly on the lookout for ways to improve best practices and efficiencies and make concrete proposals.
- Take leadership and collaborate with other teams proactively to keep things moving
- Be flexible and take on other responsibilities within the scope of the Agile Team
Requirements:
Must have:
- Hands-on experience with Snowflake
- Proven experience as a Data Engineer
- Solid knowledge of data modeling techniques (e.g., Data Vault)
- Advanced expertise with ETL tools (Talend, Alteryx, etc.)
- Strong SQL programming skills; Working knowledge of Python is an advantage.
- Experience with data transformation tools (DBT)
- 2–3 years of experience in DB/ETL development (Talend and DBT preferred)
- Hold a B.Sc., B.Eng., or higher, or equivalent in Computer Science, Data Engineering, or related fields
- Be able to communicate in English at the level of C1+
Nice to have:
- Snowflake certification is a plus
- Experience with Agile methodologies in software development
- Familiarity with DevOps/DataOps practices (CI/CD, GitLab, DataOps.live)
- Experience with the full lifecycle management of data products
- Knowledge of Data Mesh and FAIR principles
We offer:
- Long-term B2B contract
- Friendly atmosphere and Trust-based managerial culture
- 100% remote work
- Innovative Environment: Work on cutting-edge AI technologies in a highly impactful program
- Growth Opportunities: Opportunities for professional development and learning in the rapidly evolving field of AI
- Collaborative Culture: Be a part of a diverse and inclusive team that values collaboration and innovation
- Participate only in international projects
- Referral bonuses for recommending your friends to the Unitask Group
- Paid Time Off (Vacation, Sick & Public Holidays in your country)
· 36 views · 0 applications · 23d
Data Engineer
Full Remote · Romania · 4 years of experience · C1 - Advanced
Description:
We are looking for a Mid-Senior Data Engineer to join our team and contribute to the development of robust, scalable, and high-quality data solutions.
This role blends hands-on data engineering with analytical expertise, focusing on building efficient pipelines, ensuring data reliability, and enabling advanced analytics to support business insights.
As part of our team, you will work with modern technologies such as Snowflake, DBT, and Python, and play a key role in enhancing data quality, implementing business logic, and applying statistical methods to real-world challenges.
This position offers the opportunity to work in an innovative environment, contribute to impactful AI-driven projects, and grow professionally within a collaborative and supportive culture.
Responsibilities:
- Build and maintain data pipelines on Snowflake (pipes and streams)
- Implement business logic to ensure scalable and reliable data workflows
- Perform data quality assurance and checks using DBT
- Conduct exploratory data analysis (EDA) to support business insights
- Apply statistical methods and decision tree techniques to data challenges
- Ensure model reliability through cross-validation
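The last two points pair classical modeling with validation. As a rough, minimal illustration (not project code), the sketch below fits a decision tree and checks its stability with k-fold cross-validation using the Pandas/Scikit-learn stack listed in the requirements; the synthetic data and column names are invented for the example.

```python
# Minimal illustration only: synthetic data and hypothetical column names.
# Shows a decision tree validated with 5-fold cross-validation.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "sessions_last_30d": rng.poisson(5, size=500),    # hypothetical feature
    "avg_order_value": rng.normal(40, 10, size=500),  # hypothetical feature
    "churned": rng.integers(0, 2, size=500),          # hypothetical target
})

X, y = df[["sessions_last_30d", "avg_order_value"]], df["churned"]
model = DecisionTreeClassifier(max_depth=4, random_state=0)

# Cross-validation gives a quick read on model reliability across folds.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy per fold: {scores.round(3)}, mean: {scores.mean():.3f}")
```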
Requirements:
- Proven experience with Snowflake β hands-on expertise in building and optimizing data pipelines.
- Strong knowledge of DBT β capable of implementing robust data quality checks and transformations.
- Proficiency in Python, with experience using at least one of the following libraries: Pandas, Matplotlib, or Scikit-learn.
- Familiarity with Jupyter for data exploration, analysis, and prototyping.
- Hold a B.Sc., B.Eng., or higher, or equivalent in Computer Science, Data Engineering, or related fields
- Be able to communicate in English at the level of C1+
We offer:
- Long-term B2B contract
- Friendly atmosphere and Trust-based managerial culture
- 100% remote work
- Innovative Environment: Work on cutting-edge AI technologies in a highly impactful program
- Growth Opportunities: Opportunities for professional development and learning in the rapidly evolving field of AI
- Collaborative Culture: Be a part of a diverse and inclusive team that values collaboration and innovation
- Participate only in international projects
- Referral bonuses for recommending your friends to the Unitask Group
- Paid Time Off (Vacation, Sick & Public Holidays in your country)
· 44 views · 8 applications · 23d
ETL Architect (Informatica / Talend / SSIS)
Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced
We are seeking an experienced ETL Architect with strong expertise in data integration, ETL pipelines, and data warehousing. The ideal candidate will have hands-on experience with tools such as Informatica PowerCenter/Cloud, Talend, and Microsoft SSIS, and will be responsible for architecting scalable, secure, and high-performing ETL solutions. This role involves collaborating with business stakeholders, data engineers, and BI teams to deliver clean, consistent, and reliable data for analytics, reporting, and enterprise systems.
Key Responsibilities
- Design, architect, and implement ETL pipelines to extract, transform, and load data across multiple sources and targets.
- Define ETL architecture standards, frameworks, and best practices for performance and scalability.
- Lead the development of data integration solutions using Informatica, Talend, SSIS, or equivalent ETL tools.
- Collaborate with business analysts, data engineers, and BI developers to translate business requirements into data models and ETL workflows.
- Ensure data quality, security, and compliance across all ETL processes.
- Troubleshoot and optimize ETL jobs for performance, scalability, and reliability.
- Support data warehouse / data lake design and integration.
- Manage ETL environments, upgrades, and migration to cloud platforms (AWS, Azure, GCP).
- Provide mentoring, code reviews, and technical leadership to junior ETL developers.
Requirements
- 7+ years of experience in ETL development, with at least 3+ years in a lead/architect role.
- Strong expertise in one or more major ETL tools: Informatica (PowerCenter/Cloud), Talend, SSIS.
- Experience with relational databases (Oracle, SQL Server, PostgreSQL) and data warehousing concepts (Kimball, Inmon).
- Strong knowledge of SQL, PL/SQL, stored procedures, performance tuning.
- Familiarity with cloud data integration (AWS Glue, Azure Data Factory, GCP Dataflow/Dataproc).
- Experience in handling large-scale data migrations, batch and real-time ETL processing.
- Strong problem-solving, analytical, and architectural design skills.
- Excellent communication skills, with the ability to engage technical and non-technical stakeholders.
Nice to Have
- Hands-on experience with big data platforms (Hadoop, Spark, Kafka, Databricks).
- Knowledge of data governance, MDM, and metadata management.
- Familiarity with API-based integrations and microservices architectures.
- Prior experience in industries such as banking, insurance, healthcare, or telecom.
- Certification in Informatica, Talend, or cloud ETL platforms.
· 27 views · 3 applications · 20d
Oracle Cloud Architect
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
Description
You will be joining GlobalLogic's Media and Entertainment (M&E) practice, a specialized team within a leading digital engineering company. Our practice is at the forefront of the media industry's technological evolution, partnering with the world's largest broadcasters, content creators, and distributors. We have a proven track record of engineering complex solutions, including cloud-based OTT platforms (like VOS360), Media/Production Asset Management (MAM/PAM) systems, software-defined broadcast infrastructure, and innovative contribution/distribution workflows.
This engagement is for a landmark cloud transformation project for a major client in the media sector. The objective is to architect the strategic migration of a large-scale linear broadcasting platform from its current foundation on AWS to Oracle Cloud Infrastructure (OCI). You will be a key advisor on a project aimed at modernizing critical broadcast operations, enhancing efficiency, and building a future-proof cloud architecture.
Requirements
We are seeking a seasoned cloud professional with a deep understanding of both cloud infrastructure and the unique demands of the media industry.
- Expert-Level OCI Experience: Proven hands-on experience designing, building, and managing complex enterprise workloads on Oracle Cloud Infrastructure (OCI).
- Cloud Migration Expertise: Demonstrable experience architecting and leading at least one significant cloud-to-cloud migration project, preferably from AWS to OCI.
- Strong Architectural Acumen: Deep understanding of cloud architecture principles across compute, storage, networking, security, and identity/access management.
- Client-Facing & Consulting Skills: Exceptional communication and presentation skills, with the ability to act as a credible and trusted advisor to senior-level clients.
- Media & Entertainment Domain Knowledge (Highly Preferred): Experience with broadcast and media workflows is a significant advantage. Familiarity with concepts like linear channel playout, live video streaming, media asset management (MAM), and IP video standards (e.g., SMPTE 2110) is highly desirable.
- Infrastructure as Code (IaC): Proficiency with IaC tools, particularly Terraform, for automating OCI environment provisioning.
- Professional Certifications: An OCI Architect Professional certification is strongly preferred. Equivalent certifications in AWS are also valued.
Job responsibilities
As the OCI Architect, you will be the primary technical authority and trusted advisor for this cloud migration initiative. Your responsibilities will include:
- Migration Strategy & Planning: Assess the client's existing AWS-based media workflows and architect a comprehensive, phased migration strategy to OCI.
- Architecture Design: Design a secure, scalable, resilient, and cost-efficient OCI architecture tailored for demanding, 24/7 linear broadcast operations. This includes defining compute, storage, networking (including IP video transport), and security models.
- Technical Leadership: Serve as the subject matter expert on OCI for both the client and GlobalLogic engineering teams, providing hands-on guidance, best practices, and technical oversight.
- Stakeholder Engagement: Effectively communicate complex architectural concepts and migration plans to senior client stakeholders, technical teams, and project managers.
- Proof of Concept (PoC) Execution: Lead and participate in PoCs to validate architectural designs and de-risk critical components of the migration.
- Cost Optimization: Develop cost models and identify opportunities for optimizing operational expenses on OCI, ensuring the solution is commercially viable.
- Documentation: Create and maintain high-quality documentation, including architectural diagrams, design specifications, and operational runbooks.
· 122 views · 15 applications · 20d
Data Engineer
Full Remote · Ukraine · Product · 1 year of experience · B1 - Intermediate
Ready to design scalable data solutions and influence product growth?
Softsich is a young and ambitious international product tech company that develops scalable B2B digital platforms. We're looking for a Data Engineer eager to grow with us and bring modern data engineering practices into high-load solutions.
Your key responsibilities will include:
- Extending the existing data warehouse (AWS: Redshift, S3, EMR) with dbt.
- Developing and maintaining data pipelines (Kafka, MongoDB, PostgreSQL, messaging systems) using AWS Glue.
- Building and optimizing data models for analytics and reporting (dbt, SQL).
- Creating data verification scripts in Python (pandas, numpy, marimo / Jupyter Notebook) – a minimal example follows after this list.
- Maintaining infrastructure for efficient and secure data access.
- Collaborating with product owners and analysts to provide insights.
- Ensuring data quality, integrity, and security across the lifecycle.
- Keeping up with emerging data engineering technologies and trends.
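As a rough illustration of the data verification bullet above, here is a hedged, minimal pandas sketch; the column names and the 1% threshold are assumptions made for the example, not actual Softsich checks.

```python
# Illustrative only: a tiny pandas-based verification script with made-up
# column names and thresholds, in the spirit of the bullet above.
import pandas as pd

def verify_events(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    problems = []
    if df["event_id"].duplicated().any():
        problems.append("duplicate event_id values found")
    if df["amount"].lt(0).any():
        problems.append("negative amounts found")
    missing = df["user_id"].isna().mean()
    if missing > 0.01:  # tolerate at most 1% missing user_id (assumed threshold)
        problems.append(f"user_id missing in {missing:.1%} of rows")
    return problems

if __name__ == "__main__":
    sample = pd.DataFrame({
        "event_id": [1, 2, 2],
        "user_id": ["a", None, "c"],
        "amount": [10.0, -5.0, 3.5],
    })
    for issue in verify_events(sample):
        print("DATA QUALITY:", issue)
```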
It's a match if you have:
- 1+ year of experience as a Data Engineer.
- Strong understanding of data warehousing concepts and practices.
- Hands-on experience with AWS (EC2, S3, IAM, VPC, CloudWatch).
- Experience with dbt.
- Proficiency in SQL, PostgreSQL, and MongoDB.
- Experience with AWS Glue.
- Knowledge of Kafka, SQS, SNS.
- Strong Python skills for automation and data processing.
- Ukrainian – C1 level or native.
- English – Intermediate (written and spoken).
- You are proactive, communicative, and ready to ask questions and offer solutions instead of waiting for answers.
Nice to have:
- Knowledge of other cloud platforms (Azure, GCP).
- Experience with Kubernetes, Docker.
- Java/Scala as additional tools.
- Exposure to ML/AI technologies.
- Experience with data security tools and practices.
What we offer:
- Flexible schedule and remote format or offices in Warsaw/Kyiv – you choose.
- 24 paid vacation days, sick leaves, and health insurance (UA-based, other locations in progress).
- A supportive, friendly team where knowledge-sharing is part of the culture.
- Coverage for professional events and learning.
- Birthday greetings, team buildings, and warm human connection beyond work.
- Zero joules of energy to the aggressor state, its affiliated businesses, or partners.
If you're ready to build scalable and impactful data solutions – send us your CV now, we'd love to get to know you better!
· 51 views · 13 applications · 20d
Data Engineer (with Azure)
Full Remote · Countries of Europe or Ukraine · 2 years of experience · B1 - Intermediate
Main Responsibilities:
The Data Engineer is responsible for helping select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.
You will work with cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, on implementation projects for corporate clients across the EU, CIS, the United Kingdom, and the Middle East.
Our ideal candidate is a professional who is passionate about technologies, curious, and self-motivated.
Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and optimizing models.
Mandatory Requirements:
- 2+ years of experience, ideally within a Data Engineer role.
- Understanding of data modeling, data warehousing concepts, and ETL processes
- Experience with Azure Cloud technologies
- Experience in distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)
- Understanding of landing, staging area, data cleansing, data profiling, data security and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)
- SQL skills
- Communication and interpersonal skills
- English – B2
- Ukrainian language
It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, advanced analytics projects, and/or professional certification in data & analytics.
We offer:
- Professional growth and international certification
- Free-of-charge technical and business trainings and the best bootcamps (worldwide, including courses at Microsoft HQ in Redmond)
- Innovative data & analytics projects, practical experience with cutting-edge Azure data & analytics technologies on various customers' projects
- Great compensation and individual bonus remuneration
- Medical insurance
- Long-term employment
- Individual development plan
· 62 views · 7 applications · 20d
Big Data Engineer
Full Remote · Ukraine · Product · 3 years of experience · B2 - Upper Intermediate
We are looking for a Data Engineer to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar's NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.
What you will do
- Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
- Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
- Implement NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering toxic content, de-duplication, de-noising, and detection and removal of personal data (a minimal sketch follows after this list).
- Form specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as teacher.
- Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
- Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
- Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
- Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
- Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
- Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
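For the text-processing bullet above, here is a hedged, minimal Python sketch of cleaning, exact de-duplication, and masking of obvious personal data; the regexes, placeholders, and hashing approach are illustrative assumptions, not the project's actual pipeline.

```python
# Illustrative sketch only (not the actual pipeline): normalize text, mask
# obvious personal data with naive regexes, and drop exact duplicates by hash.
import hashlib
import re
import unicodedata

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{8,}\d")  # deliberately naive pattern

def clean(text: str) -> str:
    text = unicodedata.normalize("NFC", text)
    text = EMAIL_RE.sub("<EMAIL>", text)
    text = PHONE_RE.sub("<PHONE>", text)
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(docs: list[str]) -> list[str]:
    seen, unique = set(), []
    for doc in docs:
        cleaned = clean(doc)
        digest = hashlib.sha1(cleaned.lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(cleaned)
    return unique

corpus = [
    "Напишіть нам на test@example.com  ",
    "Напишіть нам на test@example.com",
    "Тел.: +380 44 123 45 67, чекаємо!",
]
print(deduplicate(corpus))  # two unique documents, personal data masked
```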
Qualifications and experience needed
- Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
- NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project's focus. Understanding of FineWeb2 or a similar processing pipeline approach.
- Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
- Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
- Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
- Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
- Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
- Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.
A plus would be
- Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
- Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
- CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
- Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
- Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimising existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
What we offer
- Office or remote – it's up to you. You can work from anywhere, and we will arrange your workplace.
- Remote onboarding.
- Performance bonuses.
- We train employees with the opportunity to learn through the company's library, internal resources, and programs from partners.
- Health and life insurance.
- Wellbeing program and corporate psychologist.
- Reimbursement of expenses for Kyivstar mobile communication.
· 49 views · 0 applications · 19d
Data Engineer to $7500
Full Remote · Poland · 5 years of experience · B2 - Upper Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Gett is a Ground Transportation Solution with the mission to organize all the best mobility providers in one global platform with great UX - optimizing the entire experience from booking and riding to invoicing and analytics, to save businesses time and money.
About the Role:
We are looking for a talented Data Engineer to join us. As a Data Engineer at Gett, you will be a key member of the data team, at the core of a data-driven company, developing scalable, robust data platforms and data models and providing business intelligence. You will work in an evolving, challenging environment with a variety of data sources, technologies, and stakeholders to deliver the best solutions to support the business and provide operational excellence.
Key Responsibilities:
- Design, Develop & Deploy Data Pipelines and Data Models on various Data Lake / DWH layers;
- Ingest data from and export data to multiple third-party systems and platforms (e.g., Salesforce, Braze, SurveyMonkey);
- Architect and implement data-related microservices and products;
- Ensure the implementation of best practices in data management, including data lineage, observability, and data contracts;
- Maintain, support, and refactor legacy models and layers within the DWH;
- Planning and owning complex projects that involve business logic and technical implementation.
Required Competence and Skills:
- 5+ years of experience in data engineering;
- Proficiency in Python and SQL;
- Strong background in data modeling, ETL development, and data warehousing;
- Experience with data technologies such as Airflow, Iceberg, Hive, Spark, Airbyte, Kafka, Postgres;
- Experience with cloud environments like AWS, GCP, or Azure.
Nice to have:
- Experience with Terraform, Kubernetes (K8S), or ArgoCD;
- Experience in production-level Software development;
- A bachelor's degree in Computer Science, Engineering, or a related field.
Why Us?
We provide 20 days of vacation leave per calendar year (plus official national holidays of a country you are based in).
We provide full accounting and legal support in all countries we operate.
We utilize a fully remote work model with a powerful workstation and co-working space in case you need it.
We offer a highly competitive package with yearly performance and compensation reviews.
· 50 views · 9 applications · 19d
Data Engineer (Google Cloud Platform)
Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · B2 - Upper Intermediate
Cloudfresh is a Global Google Cloud Premier Partner, Zendesk Premier Partner, Asana Solutions Partner, GitLab Select Partner, Hubspot Platinum Partner, Okta Activate Partner, and Microsoft Partner.
Since 2017, we've been specializing in the implementation, migration, integration, audit, administration, support, and training for top-tier cloud solutions. Our products focus on cutting-edge cloud computing, advanced location and mapping, seamless collaboration from anywhere, unparalleled customer service, and innovative DevSecOps.
We are looking for a Data Engineer with solid experience in Google Cloud Platform to strengthen our technical team and support projects for international clients.
Requirements:
- 3+ years of professional experience in Data Engineering.
- Hands-on expertise with Google Cloud services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Composer (Airflow), Cloud Storage, Cloud Functions, IAM.
- Strong SQL knowledge, including optimization of complex queries.
- Proficiency in Python for data processing and pipeline development.
- Solid understanding of data warehouse, data lake, and data mesh architectures.
- Experience building ETL/ELT pipelines and automating workflows in GCP.
- CI/CD (GitLab CI or Cloud Build) and basic Terraform (GCP).
- Familiarity with integrating data from APIs, relational databases, NoSQL systems, and SaaS platforms.
- English proficiency at Upper-Intermediate (B2) or higher.
Responsibilities:
- Design and build modern data platforms for enterprise clients.
- Develop, optimize, and maintain ETL/ELT processes in GCP (a minimal sketch follows after this list).
- Migrate data from on-premises or other cloud platforms into GCP.
- Work with both structured and unstructured datasets at scale.
- Ensure performance optimization and cost-efficiency of pipelines.
- Engage with clients to gather requirements, run workshops, and deliver technical presentations.
- Collaborate with architects, DevOps, and ML engineers to deliver end-to-end cloud solutions.
- Document solutions and produce technical documentation for internal and client use.
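As a small illustration of the ETL/ELT bullet above, the sketch below loads a CSV from Cloud Storage into BigQuery and then materializes a cleaned table with SQL via the google-cloud-bigquery client; the bucket, dataset, table, and column names are placeholders, and real client pipelines (Dataflow, Composer, etc.) would be more involved.

```python
# Hedged illustration only: a tiny ELT step with placeholder names.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Extract/Load: ingest a raw CSV from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders.csv",   # placeholder bucket/object
    "example_dataset.raw_orders",           # placeholder table
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()  # wait for the load to finish

# Transform: materialize a cleaned table directly in BigQuery.
client.query(
    """
    CREATE OR REPLACE TABLE example_dataset.clean_orders AS
    SELECT order_id, customer_id, CAST(amount AS NUMERIC) AS amount
    FROM example_dataset.raw_orders
    WHERE order_id IS NOT NULL
    """
).result()
```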
Would be a plus:
- Experience with Vertex AI or Apigee.
- Datastream (CDC), BigQuery DTS, Looker / Looker Studio.
- Dataplex, Data Catalog, policy tags, basic DLP concepts.
- Google Cloud certifications (Data Engineer, Architect, Digital Leader).
- Background in building high-load solutions and optimizing GCP costs.
Work conditions:
- Competitive Salary: Receive a competitive base salary with employment or contractor arrangement depending on location.
- Flexible Work Format: Work remotely with flexible hours with core hours aligned to EET, allowing you to balance your professional and personal life efficiently.
- Training with Leading Cloud Products: Access in-depth training on cutting-edge cloud solutions, enhancing your expertise and equipping you with the tools to succeed in an ever-evolving industry.
- International Collaboration: Work alongside A-players and seasoned professionals in the cloud industry. Expand your expertise by engaging with international markets across the EMEA and CEE regions.
- When applying to this position, you consent to the processing of your personal data by CLOUDFRESH for the purposes necessary to conduct the recruitment process, in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 (GDPR).
- Additionally, you agree that CLOUDFRESH may process your personal data for future recruitment processes.
· 51 views · 3 applications · 19d
Data Engineer
Ukraine · 5 years of experience · B2 - Upper Intermediate
On behalf of our Client from France, Mobilunity is looking for a Senior Data Engineer for a 2-month engagement.
Our client is a table management software and CRM that enables restaurant owners to welcome their customers easily. The app helps manage booking requests and register new bookings. You can view all your bookings, day after day, wherever you are, and optimize your restaurant's occupation rate. Our client offers a commission-free booking solution that guarantees freedom above all. New technologies thus become the restaurateurs' best allies for saving time and gaining customers while ensuring a direct relationship with them.
Their goal is to become the #1 growth platform for restaurants. They believe that restaurants have become lifestyle brands, and with forward-thinking digital products, restaurateurs will create the same perfect experience online as they already do offline, resulting in a more valuable, loyalty-led business.
Our client is looking for a Senior Engineer to align key customer data across Salesforce, Chargebee, Zendesk, other tools, and their Back-office. The goal is a dedicated, historized "Customer 360" table at restaurant and contact levels that exposes discrepancies and gaps, supports updates/cleaning across systems where appropriate, and includes monitoring and Slack alerts.
Tech Stack: Databricks (Delta/Unity Catalog), Python, SQL, Slack.
Responsibilities:
- Design and build a consolidated Customer 360 table in Databricks that links entities across Salesforce, Chargebee, Zendesk, and Back-office (entity resolution, deduplication, survivorship rules); a simplified sketch follows after this list
- Implement data cleaning and standardization rules; where safe and approved, update upstream systems via Python/API
- Historize customer attributes to track changes over time
- Create robust data quality checks (completeness, consistency across systems, referential integrity, unexpected changes) and surface issues via Slack alerts
- Establish operational monitoring: freshness SLAs, job success/failure notifications
- Document schemas, matching logic, cleaning rules, and alert thresholds; define ownership and escalation paths
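To give a feel for the matching, survivorship, and alerting work described above, here is a deliberately simplified PySpark sketch; the table names, columns, matching key, and Slack webhook URL are placeholders, and the production logic (Zendesk, Back-office, historization, richer survivorship rules) would go well beyond this.

```python
# Hedged sketch only: naive email-based matching, a simple survivorship rule,
# and a Slack webhook alert. All names below are placeholders.
import requests
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

sf = spark.table("bronze.salesforce_accounts").select(
    F.lower(F.trim("email")).alias("email"), F.col("name").alias("sf_name")
)
cb = spark.table("bronze.chargebee_customers").select(
    F.lower(F.trim("email")).alias("email"), F.col("company").alias("cb_name")
)

# Entity resolution on a normalized key; survivorship: prefer Salesforce values.
c360 = (
    sf.join(cb, "email", "full_outer")
      .withColumn("customer_name", F.coalesce("sf_name", "cb_name"))
      .withColumn("name_mismatch", F.col("sf_name") != F.col("cb_name"))
)
c360.write.format("delta").mode("overwrite").saveAsTable("gold.customer_360")

mismatches = c360.filter(F.col("name_mismatch")).count()
if mismatches:
    requests.post(
        "https://hooks.slack.com/services/XXX/YYY/ZZZ",  # placeholder webhook
        json={"text": f"Customer 360: {mismatches} name mismatches detected"},
        timeout=10,
    )
```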
Requirements:
- 5+ years in data engineering/analytics engineering with strong Python/SQL skills
- Hands-on experience with Databricks (Delta, SQL, PySpark optional) and building production data models
- Experience integrating with external SaaS APIs (e.g., Salesforce REST/Bulk, Zendesk, Chargebee) including auth, rate limiting, retries, and idempotency
- Solid grasp of entity resolution, deduplication, and survivorship strategies; strong SQL
- Experience implementing data quality checks and alerting (Slack/webhooks or similar)
- Security-minded when handling PII (access control, minimization, logging)
- Proficient with Git and PR-based workflows (Databricks Repos, code review, versioning)
- Upper-Intermediate English, close to Advanced
Nice to have:
- Experience with Databricks (Delta/Unity Catalog)
- Background in MDM/Golden Record/Customer 360 initiatives
- Experience with CI/CD for data (tests, code review, environments) and Databricks Jobs for scheduling
Success Criteria (by end of engagement):
- Production Customer 360 table with documented matching logic and survivorship rules
- Data is cleaned and consistent across systems where business rules permit; change history persisted
- Automated data quality checks and Slack alerts in place; clear runbooks for triage
- Documentation and ownership model delivered; stakeholders can self-serve the aligned view
In return we offer:
- The friendliest community of like-minded IT-people
- Open knowledge-sharing environment – exclusive access to a rich pool of colleagues willing to share their endless insights into the broadest variety of modern technologies
- Perfect office location in the city-center (900m from Lukyanivska metro station with a green and spacious neighborhood) or remote mode engagement: you can choose a convenient one for you, with a possibility to fit together both
- No open-spaces setup – separate rooms for every team's comfort and multiple lounge and gaming zones
- English classes in 1-to-1 & group modes with elements of gamification
- Neverending fun: sports events, tournaments, music band, multiple affinity groups
Come on board, and let's grow together!
· 71 views · 9 applications · 18d
Data Engineer
Full Remote · Poland, Ukraine, Romania, Bulgaria, Lithuania · Product · 5 years of experience · B2 - Upper Intermediate
Data Engineer (100% remote) in either Poland, Ukraine, Romania, Bulgaria, Lithuania, Latvia, or Estonia
Point Wild helps customers monitor, manage, and protect against the risks associated with their identities and personal information in a digital world. Backed by WndrCo, Warburg Pincus and General Catalyst, Point Wild is dedicated to creating the world's most comprehensive portfolio of industry-leading cybersecurity solutions. Our vision is to become THE go-to resource for every cyber protection need individuals may face - today and in the future.
Join us for the ride!
About the Role:
We are seeking a highly skilled Data Engineer with deep experience in Databricks and modern lakehouse architectures to join the Lat61 platform team. This role is critical in designing, building, and optimizing the pipelines, data structures, and integrations that power Lat61.
You will collaborate closely with data architects, AI engineers, and product leaders to deliver a scalable, resilient, and secure foundation for advanced analytics, machine learning, and cryptographic risk management use cases.
Your Day to Day:
- Build and optimize data ingestion pipelines on Databricks (batch and streaming) to process structured, semi-structured, and unstructured data (a minimal streaming sketch follows after this list).
- Implement scalable data models and transformations leveraging Delta Lake and open data formats (Parquet, Delta).
- Design and manage workflows with Databricks Workflows, Airflow, or equivalent orchestration tools.
- Implement automated testing, lineage, and monitoring frameworks using tools like Great Expectations and Unity Catalog.
- Build integrations with enterprise and third-party systems via cloud APIs, Kafka/Kinesis, and connectors into Databricks.
- Partner with AI/ML teams to provision feature stores, integrate vector databases (Pinecone, Milvus, Weaviate), and support RAG-style architectures.
- Optimize Spark and SQL workloads for speed and cost efficiency across multi-cloud environments (AWS, Azure, GCP).
- Apply secure-by-design data engineering practices aligned with Point Wild's cybersecurity standards and evolving post-quantum cryptographic frameworks.
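As a rough sketch of the streaming-ingestion bullet above, the snippet below lands Kafka events into a bronze Delta table with Structured Streaming; the broker, topic, checkpoint path, and table name are placeholders rather than Lat61 specifics, and a real job would add schema handling, quality checks, and governance.

```python
# Hedged sketch only: Kafka -> bronze Delta table via Structured Streaming.
# Broker, topic, checkpoint location, and table name are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "security-telemetry")         # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/bronze_events")
    .outputMode("append")
    .trigger(availableNow=True)  # process available data, then stop
    .toTable("bronze.events")
)
```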
What you bring to the table:
- At least 5 years in Data Engineering with strong experience building production data systems on Databricks.
- Expertise in PySpark, SQL, and Python.
- Strong expertise with various AWS services.
- Strong knowledge of Delta Lake, Parquet, and lakehouse architectures.
- Experience with streaming frameworks (Structured Streaming, Kafka, Kinesis, or Pub/Sub).
- Familiarity with DBT for transformation and analytics workflows.
- Strong understanding of data governance and security controls (Unity Catalog, IAM).
- Exposure to AI/ML data workflows (feature stores, embeddings, vector databases).
- Detail-oriented, collaborative, and comfortable working in a fast-paced innovation-driven environment.
Bonus Points:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- Data Engineering experience in a B2B SaaS organization.
Lat61 Mission
The Lat61 platform will power the next generation of cybersecurity and AI-enabled decision-making. As a Data Engineer on this team, you will help deliver:
- Multi-Modal Data Ingestion: Bringing together logs, telemetry, threat intel, identity data, cryptographic assets, and third-party feeds into a unified lakehouse.
- AI Agent Enablement: Supporting Retrieval-Augmented Generation (RAG) workflows, embeddings, and feature stores to fuel advanced AI use cases across Point Wild products.
- Analytics & Decision Systems: Providing real-time insights into risk posture, compliance, and security events through scalable pipelines and APIs.
- Future-Proofing for Quantum: Laying the groundwork for automated remediation and transition to post-quantum cryptographic standards.
Your work won't just be about pipelines and data models - it will directly shape how enterprises anticipate, prevent, and respond to cybersecurity risks in an era of quantum disruption.
· 68 views · 12 applications · 18d
Middle Data Engineer
Full Remote · EU · Product · 3 years of experience · B1 - Intermediate
FAVBET Tech develops software that is used by millions of players around the world for the international company FAVBET Entertainment.
We develop innovations in the field of gambling and betting through a complex multi-component platform which is capable to withstand enormous loads and provide a unique experience for players.
FAVBET Tech does not organize or conduct gambling on its platform. Its main focus is software development.
We are looking for a Middle/Senior Data Engineer to join our Data Integration Team.
Main areas of work:
- Betting/Gambling Platform Software Development – software development that is easy to use and personalized for each customer.
- Highload Development – development of highly loaded services and systems.
- CRM System Development – development of a number of services to ensure a high level of customer service, effective engagement of new customers and retention of existing ones.
- Big Data – development of complex systems for processing and analysis of big data.
- Cloud Services – we use cloud technologies for scaling and business efficiency.
Responsibilities:
- Design, build, install, test, and maintain highly scalable data management systems.
- Develop ETL/ELT processes and frameworks for efficient data transformation and loading.
- Implement, optimize, and support reporting solutions for the Sportsbook domain.
- Ensure effective storage, retrieval, and management of large-scale data.
- Improve data query performance and overall system efficiency.
- Collaborate closely with data scientists and analysts to deliver data solutions and actionable insights.
Requirements:
- At least 2 years of experience in designing and implementing modern data integration solutions.
- Master's degree in Computer Science or a related field.
- Proficiency in Python and SQL, particularly for data engineering tasks.
- Hands-on experience with data processing, ETL (Extract, Transform, Load), ELT (Extract, Load, Transform) processes, and data pipeline development.
- Experience with DBT framework and Airflow orchestration.
- Practical experience with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Experience with Snowflake.
- Working knowledge of cloud services, particularly AWS (S3, Glue, Redshift, Lambda, RDS, Athena).
- Experience in managing data warehouses and data lakes.
- Familiarity with star and snowflake schema design.
- Understanding of the difference between OLAP and OLTP.
Would be a plus:
- Experience with other cloud data services (e.g., AWS Redshift, Google BigQuery).
- Experience with version control tools (e.g., GitHub, GitLab, Bitbucket).
- Experience with real-time data processing (e.g., Kafka, Flink).
- Familiarity with orchestration tools (e.g., Airflow, Luigi).
- Experience with monitoring and logging tools (e.g., ELK Stack, Prometheus, CloudWatch).
- Knowledge of data security and privacy practices.
We offer:
- 30 days off – we value rest and recreation;
- Medical insurance for employees and the possibility of training employees at the expense of the company and gym membership;
- Remote work or the opportunity to work from our own modern loft-style office with a spacious workplace and brand-new work equipment (near Pochaina metro station);
- Flexible work schedule – we expect a full-time commitment but do not track your working hours;
- Flat hierarchy without micromanagement – our doors are open, and all teammates are approachable.
During the war, the company actively supports the Ministry of Digital Transformation of Ukraine in the initiative to deploy an IT army and has already organized its own cyber warfare unit, which makes a crushing blow to the enemy's IT infrastructure 24/7, coordinates with other cyber volunteers and plans offensive actions on its IT front line.
· 26 views · 0 applications · 17d
Lead Data Engineer IRC277440
Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate
The GlobalLogic technology team is focused on next-generation health capabilities that align with the client's mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.
As part of the GlobalLogic team, you will grow, be challenged, and expand your skill set working alongside highly experienced and talented people.
If this sounds like an exciting opportunity for you, send over your CV!
Requirements
MUST HAVE
- AWS Platform: Working experience with AWS data technologies, including S3 and AWS SageMaker (SageMaker Unified is a plus)
- Programming Languages: Strong programming skills in Python
- Data Formats: Experience with JSON, XML and other relevant data formats
- CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
- Scripting and Automation: Experience in scripting languages such as Python, PowerShell, etc.
- Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.
- Source Code Management: Expertise with git commands and associated VCS (Gitlab, Github, Gitea or similar)
- Documentation: Experience with markdown and, in particular, Antora for creating technical documentation
NICE TO HAVE
- Previous Healthcare or Medical Device experience
- Healthcare Interoperability Tools: Previous experience with integration engines such as InterSystems, Lyniate, Redox, Mirth Connect, etc.
- Other data technologies, such as Snowflake, Trino/Starburst
- Experience working with Healthcare Data, including HL7v2, FHIR and DICOM
- FHIR and/or HL7 Certifications
- Building software classified as Software as a Medical Device (SaMD)
- Understanding of EHR technologies such as EPIC, Cerner, etc.
- Experience implementing enterprise-grade cyber security & privacy by design into software products
- Experience working in Digital Health software
- Experience developing global applications
- Strong understanding of SDLC – Waterfall & Agile methodologies
- Experience leading software development teams onshore and offshore
Job responsibilities
- Develops, documents, and configures systems specifications that conform to defined architecture standards, address business requirements, and processes in the cloud development & engineering.
- Involved in planning of system and development deployment as well as responsible for meeting compliance and security standards.
- API development using AWS services in a scalable, microservices-based architecture
- Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
- May document testing and maintenance of system updates, modifications, and configurations.
- May act as a liaison with key technology vendor technologists or other business functions.
- Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
- Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or customisation solution would be required.
- Test the quality of a product and its ability to perform a task or solve a problem.
- Perform basic maintenance and performance optimisation procedures in each of the primary operating systems.
- Ability to document detailed technical system specifications based on business system requirements
- Ensures system implementation compliance with global & local regulatory and security standards (i.e. HIPAA, SOCII, ISO27001, etc.)
· 31 views · 2 applications · 13d
Senior Data Engineer
Full Remote · Spain, Poland, Portugal, Romania · 5 years of experience · B2 - Upper Intermediate
Project tech stack: Snowflake, AWS, Python/dbt, DWH design & implementation of medallion architecture, strong integration experience, data modelling for analytical solutions, CI/CD
We are looking for a Senior Data Engineer to build and scale a Snowflake-based data platform supporting Credit Asset Management and Wealth Solutions. The role involves ingesting data from SaaS investment platforms via data shares and custom ETL, establishing a medallion architecture, and modeling data into appropriate data marts to expose it for analytical consumption.
About the project
Our client is a global real estate services company specializing in the management and development of commercial properties. Over the past several years, the organization has made significant strides in systematizing and standardizing its reporting infrastructure and capabilities. Due to the increased demand for reporting, the organization is seeking a dedicated team to expand capacity and free up existing resources.
Skills & Experience
- Bachelor's degree in Computer Science, Engineering, or related field;
- 5+ years of experience in data engineering roles;
- Strong knowledge of SQL, data modeling, database management systems, and optimization;
- Strong experience in Snowflake, proven experience building scalable data pipelines into Snowflake, data shares and custom connectors.
- Hands-on ETL/ELT experience; Workato experience strongly preferred.
- Solid Python and/or dbt experience for transformations and testing.
- Proficiency with AWS platforms for scalable solutions, Azure is a plus;
- Understanding of data governance, data modeling & analysis and data quality concepts and requirements;
- Experience in implementing medallion architecture and data quality frameworks.
- Understanding of data lifecycle, DataOps concepts, and basic design patterns;
- Experience setting up IAM, access controls, catalog/lineage, and CI/CD for data.
- Excellent communication and ability to work with business stakeholders to shape requirements.
Nice to Have
- Domain exposure to credit/investments and insurance data
- Familiarity with schemas and data models from: BlackRock Aladdin, Clearwater, WSO, SSNC PLM
- Experience with Databricks, Airflow, or similar orchestration tools
- Prior vendor/staff augmentation experience in fast-moving environments
Responsibilities
- Build and maintain scalable data pipelines into Snowflake using Workato and native Snowflake capabilities
- Integrate heterogeneous vendor data via data shares and custom ETL
- Implement and enforce medallion architecture (bronze/silver/gold) and data quality checks (a minimal sketch follows after this list).
- Collaborate with tech lead and business partners to define logical data marts for analytics and reporting.
- Contribute to non-functional setup: IAM/role-based access, data cataloging, lineage, access provisioning, monitoring, and cost optimization.
- Document data models, schemas, pipelines, and operational runbooks.
- Operate effectively in a less-structured environment; proactively clarify priorities and drive outcomes.
- Collaborate closely with the team members and other stakeholders;
- Provide technical support and mentoring to junior data engineers;
- Participate in data governance and compliance efforts;
- Document data pipelines, processes, and best practices;
- Evaluate and recommend new data technologies.
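As a minimal illustration of the medallion and data-quality responsibilities above, the sketch below promotes deduplicated bronze rows into a silver table and applies a simple row-count gate through the Snowflake Python connector; the credentials, database, and table names are placeholders, and in practice this logic would more likely live in dbt models and tests.

```python
# Hedged illustration only: bronze -> silver promotion plus a basic quality
# gate. Credentials, warehouse, database, and table names are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
)
cur = conn.cursor()

# Promote deduplicated bronze rows into the silver layer.
cur.execute("""
    CREATE OR REPLACE TABLE SILVER.POSITIONS AS
    SELECT *
    FROM BRONZE.POSITIONS_RAW
    QUALIFY ROW_NUMBER() OVER (PARTITION BY position_id ORDER BY loaded_at DESC) = 1
""")

# Minimal quality gate: the silver table must not be empty and keys must be unique.
cur.execute("SELECT COUNT(*), COUNT(DISTINCT position_id) FROM SILVER.POSITIONS")
total, distinct = cur.fetchone()
if total == 0 or total != distinct:
    raise RuntimeError(f"quality check failed: {total} rows, {distinct} distinct keys")
conn.close()
```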
· 101 views · 8 applications · 12d
Data Analyst
Ukraine · Product · 1 year of experience · A2 - Elementary
Raiffeisen Bank is the largest Ukrainian bank with foreign capital. For over 30 years, we have been shaping and developing the banking system of our country.
At Raiffeisen, more than 5,500 employees work together, including one of the largest product IT teams, consisting of over 800 professionals. Every day, we collaborate to ensure that more than 2.7 million of our clients receive quality service, use the bank's products and services, and develop their businesses because we are #Together_with_Ukraine.
Your future responsibilities:
- Preparing samples and datasets from the data warehouse (AWS, SQL)
- Developing and supporting BI reports for business units
- Optimizing existing reports, automating reporting processes
- Participating in all stages of the development life cycle: from requirements gathering and analytics to testing and delivery
- Testing results and organizing the data validation process
- Working with ad-hoc queries (analytical queries based on business needs)
- Participating in script migration processes from the old data warehouse to the new one (AWS)
- Documenting solutions and development processes
- Communicating with business customers and other units (gathering and agreeing on requirements, presenting results)
Your skills and experience:
- Higher education (economics, technical, or mathematics)
- At least 1 year of experience in BI/reporting
- Confident knowledge of SQL (writing complex queries, optimization, procedures)
- Experience with Power BI / Report Builder
- Knowledge of Python (pandas, PySpark) and Airflow is an advantage
- Understanding of data warehouse architecture
- Analytical thinking, ability to work with large amounts of data
- Ability to transform business requirements into technical tasks
- Willingness to work in a team, openness to new technologies
We offer what matters most to you:
- Competitive salary: we guarantee a stable income and annual bonuses for your personal contribution. Additionally, we have a referral program with rewards for bringing in new colleagues to Raiffeisen Bank
- Social package: official employment, 28 days of paid leave, additional paternity leave, and financial assistance for parents with newborns
- Comfortable working conditions: possibility of a hybrid work format, offices equipped with shelters and generators, modern equipment
- Wellbeing program: all employees have access to medical insurance from the first working day; consultations with a psychologist, nutritionist, or lawyer; discount programs for sports and purchases; family days for children and adults; in-office massages
- Training and development: access to over 130 online training resources; corporate training programs in CX, Data, IT Security, Leadership, Agile; corporate library and English lessons
- Great team: our colleagues form a community where curiosity, talent, and innovation are welcome. We support each other, learn together, and grow. You can find like-minded individuals in over 15 professional communities, reading clubs, or sports clubs
- Career opportunities: we encourage advancement within the bank across functions
- Innovations and technologies: Infrastructure: AWS, Kubernetes, Docker, GitHub, GitHub Actions, ArgoCD, Prometheus, Victoria, Vault, OpenTelemetry, ElasticSearch, Crossplane, Grafana. Languages: Java (main), Python (data), Go (infra, security), Swift (iOS), Kotlin (Android). Data stores: Oracle, PostgreSQL, MS SQL, Sybase. Data management: Kafka, Airflow, Spark, Flink
- Support program for defenders: we maintain jobs and pay average wages to mobilized individuals. For veterans, we have a support program and develop the Bankβs veterans community. We work on increasing awareness among leaders and teams about the return of veterans to civilian life. Raiffeisen Bank has been recognized as one of the best employers for veterans by Forbes
Why Raiffeisen Bank?
- Our main value is people: we support and recognize them, educate them, and involve them in change. Join Raif's team, because for us YOU matter!
- One of the largest lenders to the economy and agricultural business among private banks
- Recognized as the best employer by EY, Forbes, Randstad, Franklin Covey, and Delo.UA
- The largest humanitarian aid donor among banks (Ukrainian Red Cross, UNITED24, Superhumans, Π‘ΠΠΠΠΠΠ)
- One of the largest IT product teams among the country's banks
- One of the largest taxpayers in Ukraine: 6.6 billion UAH paid in taxes in 2023
Opportunities for Everyone:
- Raif is guided by principles that focus on people and their development, with 5,500 employees and more than 2.7 million customers at the center of attention
- We support the principles of diversity, equality and inclusiveness
- We are open to hiring veterans and people with disabilities and are ready to adapt the work environment to your special needs
- We cooperate with students and older people, creating conditions for growth at any career stage
Want to learn more? Follow us on social media:
Facebook, Instagram, LinkedIn