Jobs (120)

  • 18 views · 0 applications · 5d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    Job Description

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

     

    Job Responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data (an illustrative sketch follows after this list)
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
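
    As a rough illustration of the config-driven batch pipelines described in the responsibilities above, here is a minimal sketch assuming PySpark with Delta Lake on Databricks; the configuration values, paths, and table names are hypothetical.

        # Minimal config-driven batch step: read raw files, deduplicate, append to a Delta table.
        # All paths and table names below are hypothetical placeholders.
        from pyspark.sql import SparkSession, functions as F

        pipeline_config = {
            "source_path": "/mnt/raw/orders/",           # hypothetical ADLS mount
            "source_format": "json",
            "target_table": "lakehouse.silver.orders",   # hypothetical Unity Catalog table
            "dedupe_keys": ["order_id"],
        }

        def run_batch_load(spark, cfg):
            # Read the raw files, stamp ingestion time, drop duplicates on the configured keys.
            df = (spark.read.format(cfg["source_format"]).load(cfg["source_path"])
                  .withColumn("ingested_at", F.current_timestamp())
                  .dropDuplicates(cfg["dedupe_keys"]))
            df.write.format("delta").mode("append").saveAsTable(cfg["target_table"])

        if __name__ == "__main__":
            spark = SparkSession.builder.appName("config-driven-etl").getOrCreate()
            run_batch_load(spark, pipeline_config)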

     

    Department/Project Description

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.

  • 33 views · 12 applications · 5d

    Senior Python / Data Developer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced

    Project Description

    A next-generation analytics platform for the media industry, designed to empower sales teams with actionable insights – even those without analytics expertise. The platform consolidates large datasets and exposes insights through an AI-driven, intuitive frontend interface, bridging the gap between complex data and everyday users.

    The Client is looking for experienced Senior Python Developers to join our growing engineering team and drive backend development. You will work closely with a small, highly skilled group of engineers, contributing to the architecture, data processing workflows, and backend infrastructure that powers the platform.

    This role requires a hands-on developer who thrives in a fast-paced environment, can work independently, and enjoys solving complex technical challenges.

     

    Requirements

    • 5+ years of experience in backend development with Python.
    • Strong experience with data processing, ETL workflows, and APIs.
    • Proficiency with PostgreSQL and working knowledge of stored procedures.
    • Experience with AWS (EC2, S3, Lambda, etc.) for scalable and cost-efficient architecture.
    • Familiarity with Databricks, Alteryx, or similar data processing tools (and a willingness to replace/optimize them).
    • Experience with Docker and containerized environments.
    • Ability to work independently, think critically, and propose practical solutions under tight deadlines.
    • Excellent communication and teamwork skills – able to collaborate across time zones.
    • Fluent in English.

    Nice to have

    • Experience in media analytics or related data-heavy industries.
    • Knowledge of React/Fastify APIs or general frontend integration concepts.
    • Background in AI/ML model deployment or working with AI-driven applications.

    Duties and responsibilities

    • Design, build and optimize backend systems using Python for data processing, integration, and orchestration.
    • Refactor and modernize existing legacy data models for scalability and maintainability.
    • Develop and automate ETL pipelines to replace manual workflows, improving efficiency and data quality.
    • Collaborate with frontend and AI/ML teams to ensure seamless data delivery to user-facing applications.
    • Contribute to architectural decisions and propose innovative solutions balancing speed, cost, and quality.
    • Leverage AWS infrastructure for cost-effective computation (e.g., spot instances) and scalability.
    • Participate in code reviews, design discussions, and continuous process improvements.
    • Ensure secure and maintainable code aligned with project timelines and quality standards.

    Working conditions

    • Mon–Fri, 9–5 (US EST), with at least 4 hours of overlap with the team.
    • Duration: 6 months with possible extension.
  • 21 views · 0 applications · 5d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    We are seeking a proactive Senior Data Engineer to join our vibrant team. As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.

     

    Technical stack: Palantir Foundry, Python, PySpark, SQL, TypeScript.

     

    Responsibilities:

    • Collaborate with cross-functional teams to understand data requirements, and design, implement, and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
    • Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide the development process.
    • Develop, implement, optimize, and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
    • Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency. 
    • Assist in optimizing data pipelines to improve machine learning workflows.
    • Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
    • Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.

       

    Requirements:

    • 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
    • Strong proficiency in Python and PySpark;
    • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
    • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
    • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
    • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
    • Hands-on experience in containerization technologies (e.g., Docker, Kubernetes);
    • Experience working with feature engineering and data preparation for machine learning models.
    • Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
    • Strong communication and teamwork abilities;
    • Understanding of data security and privacy best practices;
    • Strong mathematical, statistical, and algorithmic skills;

     

    Nice to have:

    • Familiarity with ML Ops concepts, including model deployment and monitoring.
    • Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
    • Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
    • Certification in Cloud platforms, or related areas;
    • Experience with search engine Apache Lucene, Webservice Rest API;
    • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
    • Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
    • Previous experience working with JavaScript and TypeScript.

     

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 22 views · 0 applications · 5d

    Senior/Principal Data Engineer

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    Drivers of change, it's your time to pave new ways. Intellias, a leading software provider in the automotive industry, invites you to develop the future of driving. Join the team and create products used by 2 billion people in the world.

     

    What project we have for you

    Our client is a leading European B2B platform for on-the-road payments and solutions.

    Our dynamic cooperation is aimed at reaching long-term success and technology excellence. That's why we're hiring top-tier engineers who will contribute towards an efficient and sustainable future of mobility. Developing a routing service using road data and EV station maps to optimize journeys across Europe – that's what you're going to deal with in our ambitious and passionate tech team.

     

    What you will do

    • Working in an innovative and fast-growing environment as a strong business communicator
    • Developing and maintaining a scalable, cloud-native data landscape by laying a new foundation for gaining insights and business value
    • Working together with the team and business partners to develop data products using an agile approach
    • Creating products that allow us to address mission-critical business challenges (development of data pipelines, topics in the area of product, reporting & analytics)
    • Breaking new ground: You regularly optimize solutions to improve performance, quality, and costs.

     

    What you need for this

    • 4+ years of experience in Data Modelling / Data Analytics, with a focus on developing cloud-based architectures and products (preferably using AWS).
    • Experience working with DWH data modelling, Snowflake, and DBT.
    • Proven knowledge of Python and SQL across multiple projects.
    • Excellent communication skills and a proactive mindset.
    • Hands-on experience with Kafka and Databricks.
    • Background in working within a scaling environment (comparable or larger company size).
    • Proficiency in Infrastructure as Code (IaC) using Terraform to describe and maintain infrastructure.
    • Strong focus on IT security in design and implementation decisions.
    • Advanced analytical and project management skills.
    • Ability to translate technical results into clear, self-explanatory presentations for business stakeholders.

     

    What it's like to work at Intellias

    At Intellias, where technology takes center stage, people always come before processes. By creating a comfortable atmosphere in our team, we empower individuals to unlock their true potential and achieve extraordinary results. That's why we offer a range of benefits that support your well-being and charge your professional growth.

    We are committed to fostering equity, diversity, and inclusion as an equal opportunity employer. All applicants will be considered for employment without discrimination based on race, color, religion, age, gender, nationality, disability, sexual orientation, gender identity or expression, veteran status, or any other characteristic protected by applicable law.

    We welcome and celebrate the uniqueness of every individual. Join Intellias for a career where your perspectives and contributions are vital to our shared success.

  • 49 views · 1 application · 5d

    Senior Software/Data Engineer

    Full Remote · Ukraine · Product · 4 years of experience · B2 - Upper Intermediate

    The company is a global marketing tech company, recognized as a Leader by Forrester and a Challenger by Gartner. We work with some of the world's most exciting brands, such as Sephora, Staples, and Entain, who love our thought-provoking combination of art and science. With a strong product, a proven business, and the DNA of a vibrant, fast-growing startup, we're on the cusp of our next growth spurt. It's the perfect time to join our team of ~450 thinkers and doers across NYC, LDN, TLV, and other locations, where 2 of every 3 managers were promoted from within. Growing your career with the company is basically guaranteed.

     

    Requirements

    • At least 5 years of experience with .NET with some experience in Python, or, alternatively, at least 5 years of experience in Python with some experience with .NET.
    • At least 3 years of experience in processing structured terabyte-scale data.
    • Solid experience in SQL (advanced skills in DML).
    • Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc.).
    • Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark, etc.).
    • Experience in designing distributed cloud-native systems.
    • Experience in automated test creation (TDD).
    • Experience in working with AI tools.

    Advantages

    • Being fearless of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although knowledge of ML is not required for the current position, it would be awesome if a person felt some passion for algorithms).
    • Experience in DevOps.
    • Familiarity with Docker and Kubernetes.
    • Experience with GCP services would be a plus.
    • Experience with IaC would be a plus.

    Language requirements

    English

    B2 – Upper Intermediate

     

    BigQuery, ClickHouse, GCP Dataflow, Apache Hadoop, Apache Spark, Docker, Python, .NET, SQL

    About Gemicle

    Gemicle is an innovative, highly technological company with broad expertise in app development, complex e-commerce projects, and B2B solutions. Qualified teams of developers, designers, engineers, QA specialists, and animators deliver excellent products and solutions to well-known, branded companies. The knowledge and experience of its specialists across different technologies keep the company at the top level of the IT industry. Gemicle is a fusion of team spirit, professionalism, and dedication. Gemicle is not just a company, it's a lifestyle.


     

  • 24 views · 1 application · 4d

    Senior Data Engineer

    Full Remote · Ukraine, Poland, Romania, Croatia · 5 years of experience · B2 - Upper Intermediate

    Description

    Our customer (originally the Minnesota Mining and Manufacturing Company) is an American multinational conglomerate operating in the fields of industry, worker safety, and consumer goods. Based in the Saint Paul suburb of Maplewood, the company produces over 60,000 products, including adhesives, abrasives, laminates, passive fire protection, personal protective equipment, window films, paint protection film, electrical, electronic connecting, insulating materials, car-care products, electronic circuits, and optical films.

     

    Requirements

    We are looking for a highly skilled and experienced Senior Data Engineer to join our team. In this role, you will be a key player in designing, building, and optimizing our data architecture and pipelines. You will be working on a complex data project, transforming raw data into reliable, high-quality assets ready for analytics, data science, and business intelligence. As a senior member of the team, you will also be expected to help junior/middle engineers, drive technical best practices, and contribute to the strategic direction of our data platform.

     

    Required Qualifications & Skills

    • 5+ years of professional experience in data engineering or a related role.
    • A minimum of 3 years of deep, hands-on experience using Python for data processing, automation, and building data pipelines.
    • A minimum of 3 years of strong, hands-on experience with advanced SQL for complex querying, data manipulation, and performance tuning.
    • Proven experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
    • Hands-on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
    • Solid experience working with large, complex data environments, including data processing, data integration, and data warehousing.
    • Proficiency in data quality assessment and improvement techniques.
    • Experience working with and cleansing a variety of data formats, including unstructured and semi-structured data (e.g., CSV, JSON, Parquet, XML).
    • Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
    • Excellent problem-solving skills and the ability to communicate complex technical concepts effectively to both technical and non-technical audiences.

    Preferred Qualifications & Skills

    • Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
    • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
    • Experience with consuming data from REST APIs.
    • Experience with database design, optimization, and performance tuning for software application backends.
    • Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
    • Familiarity with modern data architecture concepts such as Data Mesh.
    • Real-world experience supporting and troubleshooting critical, end-to-end production data pipelines.

     

    Job responsibilities

    Key Responsibilities

    • Architect & Build Data Pipelines: Design, develop, and maintain robust, scalable, and reliable data pipelines using Python, SQL, and Spark on the Azure cloud platform.
    • End-to-End Data Solutions: Architect and implement end-to-end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
    • Cloud Data Services Management: Utilize Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build, orchestrate, and manage complex data workflows.
    • Data Quality & Governance: Implement and enforce comprehensive data quality frameworks, including data profiling, cleansing, and validation routines to ensure the highest levels of data integrity and trust.
    • Performance Optimization: Analyze and optimize data pipelines for performance, scalability, and cost-efficiency, ensuring our systems can handle growing data volumes.
    • Mentorship & Best Practices: Mentor and provide technical guidance to junior and mid-level data engineers. Lead code reviews and champion best practices in data engineering, coding standards, and data modeling.
    • Stakeholder Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements, provide technical solutions, and deliver actionable data products.
    • System Maintenance: Support and troubleshoot production data pipelines, identify root causes of issues, and implement effective, long-term solutions.
  • 42 views · 10 applications · 4d

    Senior Data Engineer

    Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate

    We're currently looking for a Senior Data Engineer for a long-term project, with immediate start.

     

    The role requires:

    - Databricks certification (mandatory)

    - Solid hands-on experience with Spark

    - Strong SQL (Microsoft SQL Server) knowledge

     

    The project involves the migration from Microsoft SQL Server to Databricks, along with data-structure optimization and enhancements.
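
    As a rough illustration of an early step in such a migration, here is a minimal sketch that copies one SQL Server table into a Delta table, assuming PySpark with a SQL Server JDBC driver available on the cluster; the connection details and table names are placeholders.

        # Illustrative only: land one SQL Server table as a Delta table on Databricks.
        # The JDBC URL, credentials, and table names are placeholders.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("mssql-to-databricks").getOrCreate()

        source_df = (spark.read.format("jdbc")
                     .option("url", "jdbc:sqlserver://<host>:1433;databaseName=<db>")
                     .option("dbtable", "dbo.Sales")
                     .option("user", "<user>")
                     .option("password", "<password>")
                     .load())

        # Data-structure optimization (partitioning, data types, layout) would follow as separate steps.
        source_df.write.format("delta").mode("overwrite").saveAsTable("migration.bronze.sales")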

  • 36 views · 0 applications · 4d

    Data Engineer (GCP, BigQuery, DBT, Python, Data Modeling, ML) to $6500

    Full Remote · Argentina, Bulgaria, Spain, Poland, Portugal · Product · 5 years of experience · B2 - Upper Intermediate

    We are looking for a Data Engineer with BigQuery and GCP experience for a very large and stable product company. The company builds software for the world's largest insurance companies. If you have these skills, you can stop reading here – just send us your resume, please.

     

    But if you are curious:

     

    The project is the build-out of a new Data Platform (Data Lake, Lakehouse, Data Warehouse) on BigQuery using DBT, Python, and AI/ML, plus a whole range of data analysis ideas (RAG, LLM, etc.).
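
    As a small illustration of the BigQuery-plus-Python work implied here, the sketch below creates a reporting view with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

        # Illustrative only: create a reporting view in BigQuery from Python.
        # Project, dataset, and table names are hypothetical.
        from google.cloud import bigquery

        client = bigquery.Client(project="my-analytics-project")

        view_sql = """
        CREATE OR REPLACE VIEW `my-analytics-project.reporting.daily_claims` AS
        SELECT claim_date, policy_type, COUNT(*) AS claims, SUM(amount) AS total_amount
        FROM `my-analytics-project.warehouse.claims`
        GROUP BY claim_date, policy_type
        """

        client.query(view_sql).result()  # wait for the DDL job to finish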

     

    We are considering only Ukrainian developers located abroad. Unfortunately, candidates based in Ukraine cannot be considered this time due to restrictions on access to critical data. We offer very flexible conditions: remote work, interesting tasks, and good, calm management.

     

    Job description:

     

    What You'll Do:

    • Design & run pipelines – create, deploy, and monitor robust data flows on GCP. 
    • Write BigQuery SQL – build procedures, views, and functions.
    • Build ML pipelines – automate training, validation, deployment, and model monitoring.
    • Solve business problems with AI/ML – frame questions, choose methods, deliver insights. 
    • Optimize ETL – speed up workflows and cut costs. 
    • Use the GCP stack – BigQuery, Dataflow, Dataproc, Airflow/Composer, DBT, Celigo, Python, Java. 
    • Model data – design star/snowflake schemas for analytics and reporting. 
    • Guard quality & uptime – add tests, validation, and alerting; fix issues fast. 
    • Document everything – pipelines, models, and processes. 
    • Keep learning – track new tools and best practices. 

       

    What You'll Need:

    • 5+ yrs building data/ETL solutions; 2+ yrs heavy GCP work.
    • 2+ yrs hands-on AI/ML pipeline experience.
    • Proven BigQuery warehouse design and scaling.
    • Deep SQL, Python, DBT, Git; Talend, Fivetran, or similar ETL tools.
    • Strong data-modeling skills (star, snowflake, normalization).
    • Solid grasp of Data Lake vs. Data Warehouse concepts.
    • Problem-solver who works well solo or with a team.
    • Clear communicator with non-technical partners.
    • Bachelor's in CS, MIS, CIS, or equivalent experience.

     

  • 15 views · 0 applications · 3d

    Palantir Foundry Engineer

    Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
    • Project Description:

      We are seeking a Palantir Foundry & AIP Engineer with hands-on experience across the full Foundry ecosystem and Palantir's Artificial Intelligence Platform (AIP). This role goes beyond data engineering: you will design, build, and operationalize AI-powered workflows, agents, and applications that drive tangible business outcomes. 

      The ideal candidate is a self-starter, able to translate complex business needs into scalable technical solutions, and confident working directly with stakeholders to maximize the value of Foundry and AIP.

       

    • Responsibilities:

      • Data & Workflow Engineering: Design, develop, and maintain scalable pipelines, transformations, and applications within Palantir Foundry.
      • AIP & AI Enablement:
      o Support the design and deployment of AIP use cases such as copilots, retrieval workflows, and decision-support agents.
      o Ground agents and logic flows using RAG (retrieval-augmented generation) by connecting to relevant data sources, embedding/vector search, and ontology content.
      o Use Ontology-Augmented Generation (OAG) when needed: operational decision making where logic, data, actions, and relationships are embedded in the Ontology.
      o Collaborate with senior engineers on agent design, instructions, and evaluation using AIP's native features.
      • End-to-End Delivery: Work with stakeholders to capture requirements, design solutions, and deliver working applications.
      • User Engagement: Provide training and support for business teams adopting Foundry and AIP.
      • Governance & Trust: Ensure solutions meet standards for data quality, governance, and responsible use of AI.
      • Continuous Improvement: Identify opportunities to expand AIP adoption and improve workflow automation.

       

    • Mandatory Skills Description:

      Required Qualifications:
      • 10+ years of overall experience as a Data and AI Engineer;
      • 2+ years of professional experience with the Palantir Foundry ecosystem (data integration, ontology, pipelines, applications).
      • Strong technical skills in Python, PySpark, SQL, and data modelling.
      • Practical experience using or supporting AIP features such as RAG workflows, copilots, or agent-based applications.
      • Ability to work independently and engage directly with non-technical business users.
      • Strong problem-solving mindset and ownership of delivery.

      Preferred Qualifications:
      • Familiarity with AIP Agent Studio concepts (agents, instructions, tools, testing).
      • Exposure to AIP Evals and evaluation/test-driven approaches.
      • Experience with integration patterns (APIs, MCP, cloud services).
      • Consulting or applied AI/ML background.
      • Experience in Abu Dhabi or the broader MENA region.

  • 13 views · 0 applications · 3d

    Senior Data Platform Architect

    Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
    • Project Description:

      We are seeking an expert with deep proficiency as a Platform Engineer, possessing experience in data engineering. This individual should have a comprehensive understanding of both data platforms and software engineering, enabling them to integrate the platform effectively within an IT ecosystem.

       

    • Responsibilities:

      • Manage and optimize data platforms (Databricks, Palantir).
      • Ensure high availability, security, and performance of data systems.
      • Provide valuable insights about data platform usage.
      • Optimize computing and storage for large-scale data processing.
      • Design and maintain system libraries (Python) used in ETL pipelines and platform governance.
      • Optimize ETL Processes – Enhance and tune existing ETL processes for better performance, scalability, and reliability.

       

    • Mandatory Skills Description:

      • Minimum 10 years of experience in IT/Data.
      • Minimum 5 years of experience as a Data Platform Engineer/Data Engineer.
      • Bachelor's in IT or related field.
      • Infrastructure & Cloud: Azure, AWS (expertise in storage, networking, compute).
      • Data Platform Tools: Any of Palantir, Databricks, Snowflake.
      • Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
      • SQL: Expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
      • Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks.
      • ETL Tools: Familiarity with ETL tools & processes.
      • Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
      • Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
      • Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
      • Data Quality Tools: Experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.

  • 35 views · 2 applications · 3d

    Senior Data Platform Engineer to $7500

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · B2 - Upper Intermediate

    Who we are:

    Adaptiq is a technology hub specialising in building, scaling and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.

     

    About the Product: 

    Finaloop is building the data backbone of modern finance – a real-time platform that turns billions of eCommerce transactions into live, trustworthy financial intelligence. We deal with high-volume, low-latency data at scale, designing systems that off-the-shelf tech simply can't handle. Every line of code you write keeps thousands of businesses financially aware – instantly.

     

    About the Role:

    We're hiring a Senior Data Platform Engineer to build the core systems that move, transform, and power financial data in real time. You'll be part of the core engineering group building the foundational infrastructure that powers our entire company.
    You'll work closely with senior engineers and the VP of Engineering on high-scale architecture, distributed pipelines, and orchestration frameworks that define how our platform runs.
    It's pure deep engineering – complex, impactful, and built to last.

     

    Key Responsibilities:

    • Designing, building, and maintaining scalable data pipelines and ETL processes for our financial data platform
    • Developing and optimizing data infrastructure to support real-time analytics and reporting
    • Implementing data governance, security, and privacy controls to ensure data quality and compliance
    • Creating and maintaining documentation for data platforms and processes
    • Collaborating with data scientists and analysts to deliver actionable insights to our customers
    • Troubleshooting and resolving data infrastructure issues efficiently
    • Monitoring system performance and implementing optimizations
    • Staying current with emerging technologies and implementing innovative solutions

    Required Competence and Skills:

    • 7+ years experience in Data Engineering or Platform Engineering roles
    • Strong programming skills in Python and SQL
    • Experience with orchestration platforms and tools (Airflow, Dagster, Temporal or similar)
    • Experience with MPP platforms (e.g., Snowflake, Redshift, Databricks)
    • Hands-on experience with cloud platforms (AWS) and their data services
    • Understanding of data modeling, data warehousing, and data lake concepts
    • Ability to optimize data infrastructure for performance and reliability
    • Ability to design, build, and optimize Docker images to support scalable data pipelines
    • Familiarity with CI/CD concepts and principles 
    • Fluent English (written and spoken)

    Nice to have skills:

    • Experience with big data processing frameworks (Apache Spark, Hadoop)
    • Experience with stream processing technologies (Flink, Kafka, Kinesis)
    • Knowledge of infrastructure as code (Terraform)
    • Experience deploying, managing, and maintaining services on Kubernetes clusters
    • Experience building analytics platforms or clickstream pipelines
    • Familiarity with ML workflows and MLOps
    • Experience working in a startup environment or fintech industry

    The main components of our current technology stack:

    • AWS Serverless, Python, Airflow, Airbyte, Temporal, PostgreSQL, Snowflake, Kubernetes, Terraform, Docker.
  • 6 views · 1 application · 3d

    Salesforce Consumer Goods Cloud (CGC)

    Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate

    1. Job Description: Salesforce Consumer Goods Cloud (CGC) Subject Matter Expert (SME)

    About the Role

    We are seeking a highly skilled Salesforce Consumer Goods Cloud (CGC) Subject Matter Expert (SME) to serve as the key consultant for our FMCG/CPG clients. Your mission is to ensure that CGC implementations perfectly align with the client's best business practices in retail and distribution. You will act as the bridge between complex business processes (e.g., Retail Execution, Trade Promotion Management) and standard (out-of-the-box) Salesforce functionality.

    Key Responsibilities

    • Conduct a Business Process Audit to identify misalignments between the client's current crippled processes and native CGC capabilities.
    • Consult clients on CGC best practices for Retail Execution, Trade Promotion Management (TPM), Order Management, and Direct Store Delivery (DSD).
    • Develop "De-Customization" strategies to replace complex, inefficient custom logic with standard Salesforce features.
    • Collaborate with Solution Architects and Developers to ensure the technical design aligns with business requirements and the CGC data model.
    • Participate in the Discovery and Gap Analysis phases, providing clear, prioritized recommendations to restore value to the implementation.
    • Support sales efforts and develop Statements of Work (SOW) for Phase 2 (Remediation Project).

    Requirements

    • Minimum 3+ years of experience working with Salesforce Consumer Goods Cloud (or deep experience in the FMCG/CPG segment with Salesforce).
    • Profound understanding of core CGC features: Visit Management, Retail Execution, Pricing & Promotions, Store/Route Planning.
    • Possession of Salesforce certifications, specifically Salesforce Certified Consumer Goods Cloud Accredited Professional or Salesforce Certified Sales Cloud Consultant (preferred).
    • Excellent communication and presentation skills for effective engagement with client executives.
    • Ability to translate complex business problems into clear, actionable CGC-based solutions.

     

     

    What We Offer (Benefits)

    1. Competitive Salary: Attractive, competitive salary and bonus structure commensurate with your experience and contribution.
    2. Professional and Supportive Team: Join a team of highly skilled Salesforce experts focused on shared success and continuous improvement.
    3. Flexibility and Remote Work: Opportunity to work fully remotely or with a flexible hybrid schedule, allowing you to balance work and personal life effectively.
  • 121 views · 20 applications · 3d

    Data Engineer (Junior/Middle)

    Full Remote · Worldwide · Product · 1 year of experience

    We operate an integrated sushi-restaurant business and require a Data Engineer to design and implement a centralised, well-governed data warehouse; develop and automate data pipelines that support critical reporting, including multi-platform customer-order analytics, marketing performance metrics, executive dashboards, and other business-essential analyses; and collaborate on internal machine-learning projects by providing reliable, production-ready data assets.
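
    As a rough illustration of the pipeline automation described above, here is a minimal Airflow DAG sketch (Airflow 2.x is assumed, since Airflow appears later in the requirements); the DAG, task names, and task bodies are hypothetical placeholders.

        # Illustrative only: a minimal daily order-analytics DAG.
        # Task names and bodies are hypothetical placeholders.
        from datetime import datetime
        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def extract_orders(**context):
            ...  # e.g., pull yesterday's orders from the ordering platforms

        def load_to_warehouse(**context):
            ...  # e.g., upsert cleaned orders into the central warehouse

        with DAG(
            dag_id="daily_order_analytics",
            start_date=datetime(2024, 1, 1),
            schedule_interval="@daily",
            catchup=False,
        ) as dag:
            extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
            load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
            extract >> load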

     

     

    Our requirements:

     

    1. Professional experience (1–3 years) in data engineering, with demonstrable ownership of end-to-end ETL/ELT pipelines in production.
    2. Strong SQL and Python proficiency, including performance tuning, modular code design, and automated testing of data transformations.
    3. Hands-on expertise with modern data-stack components (e.g., Airflow, dbt, Spark, or comparable orchestration and processing frameworks).
    4. Cloud-native skills on AWS or Azure, covering at least two services from Glue, Athena, Lambda, Databricks, Data Factory, or Snowflake, plus cost- and performance-optimization best practices.
    5. Solid understanding of dimensional modelling, data-quality governance, and documentation standards, ensuring reliable, audited data assets for analytics and machine-learning use cases.

     

     

    Your responsibilities:

     

    • Designing, developing, and maintaining scalable data pipelines and ETL.
    • Optimizing data processing workflows for performance, reliability, and cost-efficiency.
    • Ensuring compliance with data quality standards and implementing governance best practices.
    • Driving and supporting the migration of on-premises data products to the warehouse.
  • 619 views · 34 applications · 7d

    Strong Junior Data Engineer

    Worldwide · 1 year of experience · B1 - Intermediate

    Dataforest is looking for a growth-minded Data Engineer to become part of our friendly team. As a Data Engineer, you will solve interesting problems using cutting-edge technologies for data collection, processing, analysis, and monitoring.

     

    If you are not afraid of challenges, this position is exactly for you!

     

    What matters to us:

    • 1+ year of experience as a Data Engineer;
    • Experience with Python;
    • Experience with Databricks and Data Factory;
    • Experience with AWS/Azure;
    • Experience with ETL/ELT pipelines;
    • Experience with SQL.

     

    Responsibilities:

    • Building ETL/ELT pipelines and data management solutions;
    • Applying data processing algorithms;
    • Working with SQL queries for data extraction and analysis;
    • Analyzing data and using data processing algorithms to solve business problems.

     

    We offer:

    • Work with a highly skilled engineering team on interesting and complex projects;
    • Learning the latest technologies;
    • Communication with international clients and challenging tasks;
    • Opportunities for personal and professional growth;
    • Competitive salary, fixed in USD;
    • Paid vacation and sick leave;
    • Flexible working hours;
    • A friendly work atmosphere without bureaucracy;
    • Many traditions – corporate parties, team building, themed events, and much more!

     

    If this position appeals to you, send us your resume and become part of our team.

  • 151 views · 9 applications · 13d

    System engineer Big Data

    Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary

    ABOUT US

    UKRSIB Tech is an ambitious IT team of around 400 specialists driving the technology behind UKRSIBBANK.

    We build top-tier banking for more than 2,000,000 clients and aim to take the financial sector in Ukraine to a new level. Our products are used by everyday banking customers, leaders of the Ukrainian economy, and large international corporations.

    We are grateful to our defenders, who devotedly protect the freedom and independence of Ukraine, and we create a supportive working environment at the bank.

    Your future tasks:

    • updating and patching software in line with vendor releases
    • project work on optimizing subsystems
    • working with system vendors to resolve issues; integration with IBM DataStage, Teradata, Oracle, JupyterHub, Docker, Python
    • user support (incident resolution and consulting) and access administration
    • testing new functionality
    • collaborating with other IT units to define optimal continuous integration/continuous delivery processes
    • participating in building a stable and reliable IT infrastructure to support the DataHub and DataStage systems
    • configuring availability monitoring for DataHub and DataStage according to the given requirements, and proposing optimizations of technological procedures and business processes
    • participating in resolving technical and system issues affecting DataHub and DataStage, investigating non-standard situations, and preparing conclusions and proposals for remediation

    We are looking for a specialist who has:

    • a completed higher technical/engineering education;
    • 2+ years of experience in this area;
    • knowledge of the hardware, software, data transfer tools, and user applications used in Hadoop systems;
    • basic knowledge of the bank's IT infrastructure (applications, servers, networks);
    • knowledge of and hands-on skills with DBMSs;
    • deep knowledge of administering ELT/ETL tools (IBM DataStage);
    • knowledge of and skills with the Hive relational DBMS, which is a component of DataHub;
    • knowledge of internal IT processes and standards;
    • skills in writing SQL/HQL scripts.

    In addition to a team of like-minded people and interesting work, you will get:

    Stability:

    • official employment
    • medical and life insurance, fully paid by the Bank
    • a salary at the level of leading top employers
    • 25 days of annual vacation, additional days off for special occasions, and social leave in accordance with Ukrainian law
    • annual salary reviews based on your performance and the Bank's financial results

     
