Jobs
· 18 views · 0 applications · 5d
Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate
Job Description
- Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
- Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
- Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
- Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
- Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
- Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
- Strong understanding of data modeling, schema design, and database performance optimization
- Practical experience working with various file formats, including JSON, Parquet, and ORC
- Familiarity with machine learning and AI integration within the data platform context
- Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
- Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
- Strong analytical and problem-solving skills with attention to detail
- Excellent teamwork and communication skills
- Upper-Intermediate English (spoken and written)
Job Responsibilities
- Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
- Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data (see the sketch after this list)
- Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
- Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
- Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
- Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
- Design and maintain data models and schemas optimized for analytical and operational workloads
- Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
- Participate in architecture discussions, backlog refinement, estimation, and sprint planning
- Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
- Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
- Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
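For illustration only, here is a minimal sketch of the kind of config-driven CDC upsert into a Delta table this role describes, written with PySpark and the delta-spark API. The paths, table names, and config keys are invented placeholders, not the project's actual code.

```python
# Config-driven CDC merge into a Delta table (illustrative sketch only).
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

pipeline_config = {
    "source_path": "abfss://raw@exampleaccount.dfs.core.windows.net/orders_cdc/",  # hypothetical
    "target_table": "lakehouse.silver.orders",                                      # hypothetical
    "merge_keys": ["order_id"],
    "cdc_op_column": "_op",  # 'I' / 'U' / 'D' flags coming from the CDC feed
}

spark = SparkSession.builder.getOrCreate()

def run_cdc_merge(cfg: dict) -> None:
    """Read one CDC batch and merge it into the target Delta table."""
    updates = spark.read.format("parquet").load(cfg["source_path"])
    target = DeltaTable.forName(spark, cfg["target_table"])
    condition = " AND ".join(f"t.{k} = s.{k}" for k in cfg["merge_keys"])

    (target.alias("t")
        .merge(updates.alias("s"), condition)
        .whenMatchedDelete(condition=f"s.{cfg['cdc_op_column']} = 'D'")  # drop deleted records
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

run_cdc_merge(pipeline_config)
```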
Department/Project Description
GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.
· 33 views · 12 applications · 5d
Senior Python / Data Developer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · C1 - Advanced
Project Description
A next-generation analytics platform for the media industry, designed to empower sales teams with actionable insights – even those without analytics expertise. The platform consolidates large datasets and exposes insights through an AI-driven, intuitive frontend interface, bridging the gap between complex data and everyday users.
The Client is looking for experienced Senior Python Developers to join our growing engineering team and drive backend development. You will work closely with a small, highly skilled group of engineers, contributing to the architecture, data processing workflows, and backend infrastructure that powers the platform.
This role requires a hands-on developer who thrives in a fast-paced environment, can work independently, and enjoys solving complex technical challenges.
Requirements
- 5+ years of experience in backend development with Python.
- Strong experience with data processing, ETL workflows, and APIs.
- Proficiency with PostgreSQL and working knowledge of stored procedures.
- Experience with AWS (EC2, S3, Lambda, etc.) for scalable and cost-efficient architecture.
- Familiarity with Databricks, Alteryx, or similar data processing tools (and a willingness to replace/optimize them).
- Experience with Docker and containerized environments.
- Ability to work independently, think critically, and propose practical solutions under tight deadlines.
- Excellent communication and teamwork skills – able to collaborate across time zones.
- Fluent in English.
Nice to have
- Experience in media analytics or related data-heavy industries.
- Knowledge of React/Fastify APIs or general frontend integration concepts.
- Background in AI/ML model deployment or working with AI-driven applications.
Duties and responsibilities
- Design, build and optimize backend systems using Python for data processing, integration, and orchestration.
- Refactor and modernize existing legacy data models for scalability and maintainability.
- Develop and automate ETL pipelines to replace manual workflows, improving efficiency and data quality (a minimal sketch follows this list).
- Collaborate with frontend and AI/ML teams to ensure seamless data delivery to user-facing applications.
- Contribute to architectural decisions and propose innovative solutions balancing speed, cost, and quality.
- Leverage AWS infrastructure for cost-effective computation (e.g., spot instances) and scalability.
- Participate in code reviews, design discussions, and continuous process improvements.
- Ensure secure and maintainable code aligned with project timelines and quality standards.
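As a rough illustration of the kind of ETL automation described above, below is a small, self-contained Python sketch that pulls a raw CSV from S3, applies a simple transformation, and loads the result into PostgreSQL. The bucket, key, table, and connection string are made-up placeholders, not project specifics.

```python
# Minimal batch ETL sketch: S3 (CSV) -> transform -> PostgreSQL. Illustrative only.
import csv
import io

import boto3
import psycopg2

def extract(bucket: str, key: str) -> list[dict]:
    # Download the raw CSV object and parse it into dict rows.
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(body)))

def transform(rows: list[dict]) -> list[tuple]:
    # Keep only rows with a spend value and normalise types before loading.
    return [
        (r["campaign_id"], r["channel"].lower(), float(r["spend"]))
        for r in rows
        if r.get("spend")
    ]

def load(records: list[tuple], dsn: str) -> None:
    # Insert the cleaned records; the connection context manager commits on success.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO marketing_spend (campaign_id, channel, spend) "
            "VALUES (%s, %s, %s)",
            records,
        )

if __name__ == "__main__":
    rows = extract("example-raw-bucket", "spend/2024-01-01.csv")  # placeholder bucket/key
    load(transform(rows), "postgresql://user:password@localhost:5432/analytics")  # placeholder DSN
```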
Working conditions
- Mon–Fri, 9-5 (US EST), with at least 4 hours of overlap with the team.
- Duration: 6 months with possible extension.
· 21 views · 0 applications · 5d
Senior Data Engineer
Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate
We are seeking a proactive Senior Data Engineer to join our vibrant team. As a Senior Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will possess a robust background in cloud technologies, data architecture, and a passion for solving complex data challenges.
Technical stack: Palantir Foundry, Python, PySpark, SQL, TypeScript.
Responsibilities:
- Collaborate with cross-functional teams to understand data requirements, and design, implement, and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimizing workflows.
- Gather and translate data requirements into robust and efficient solutions, leveraging your expertise in cloud-based data engineering. Create data models, schemas, and flow diagrams to guide the development process.
- Develop, implement, optimize, and maintain efficient and reliable data pipelines and ETL/ELT processes to collect, process, and integrate data to ensure timely and accurate data delivery to various business applications, while implementing data governance and security best practices to safeguard sensitive information.
- Monitor data pipeline performance, identify bottlenecks, and implement improvements to optimize data processing speed and reduce latency.
- Assist in optimizing data pipelines to improve machine learning workflows.
- Troubleshoot and resolve issues related to data pipelines, ensuring continuous data availability and reliability to support data-driven decision-making processes.
- Stay current with emerging technologies and industry trends, incorporating innovative solutions into data engineering practices, and effectively document and communicate technical solutions and processes.
Requirements:
- 5+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
- Strong proficiency in Python and PySpark;
- Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
- Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
- Expertise in data modeling, data warehousing, and ETL/ELT concepts;
- Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
- Hands-on experience in containerization technologies (e.g., Docker, Kubernetes);
- Experience working with feature engineering and data preparation for machine learning models.
- Effective problem-solving and analytical skills, coupled with excellent communication and collaboration abilities;
- Strong communication and teamwork abilities;
- Understanding of data security and privacy best practices;
- Strong mathematical, statistical, and algorithmic skills;
Nice to have:
- Familiarity with ML Ops concepts, including model deployment and monitoring.
- Basic understanding of machine learning frameworks such as TensorFlow or PyTorch.
- Exposure to cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
- Certification in Cloud platforms, or related areas;
- Experience with search engine Apache Lucene, Webservice Rest API;
- Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
- Knowledge of pharmaceutical industry regulations, such as data privacy laws, is advantageous;
- Previous experience working with JavaScript and TypeScript.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
· 22 views · 0 applications · 5d
Senior/Principal Data Engineer
Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate
Drivers of change, it's your time to pave new ways. Intellias, a leading software provider in the automotive industry, invites you to develop the future of driving. Join the team and create products used by 2 billion people in the world.
What project we have for you
Our client is a leading European B2B platform for on-the-road payments and solutions.
Our dynamic cooperation is aimed at reaching long-term success and technology excellence. That's why we're hiring top-tier engineers who will contribute towards an efficient and sustainable future of mobility. Developing a routing service using road data and EV station maps to optimize journeys across Europe – that's what you're going to deal with in our ambitious and passionate tech team.
What you will do
- Working in an innovative and fast-growing environment as a strong business communicator
- Developing and maintaining a scalable, cloud-native data landscape by laying a new foundation for gaining insights and business value
- Working together with the team and business partners to develop data products using an agile approach
- Creating products that allow us to address mission-critical business challenges (development of data pipelines, topics in the area of product, reporting & analytics)
- Breaking new ground: You regularly optimize solutions to improve performance, quality, and costs.
What you need for this
- 4+ years of experience in Data Modelling / Data Analytics, with a focus on developing cloud-based architectures and products (preferably using AWS).
- Experience working with DWH data modelling, Snowflake, and DBT.
- Proven knowledge of Python and SQL across multiple projects.
- Excellent communication skills and a proactive mindset.
- Hands-on experience with Kafka and Databricks.
- Background in working within a scaling environment (comparable or larger company size).
- Proficiency in Infrastructure as Code (IaC) using Terraform to describe and maintain infrastructure.
- Strong focus on IT security in design and implementation decisions.
- Advanced analytical and project management skills.
- Ability to translate technical results into clear, self-explanatory presentations for business stakeholders.
What itβs like to work at Intellias
At Intellias, where technology takes center stage, people always come before processes. By creating a comfortable atmosphere in our team, we empower individuals to unlock their true potential and achieve extraordinary results. That's why we offer a range of benefits that support your well-being and charge your professional growth.
We are committed to fostering equity, diversity, and inclusion as an equal opportunity employer. All applicants will be considered for employment without discrimination based on race, color, religion, age, gender, nationality, disability, sexual orientation, gender identity or expression, veteran status, or any other characteristic protected by applicable law.
We welcome and celebrate the uniqueness of every individual. Join Intellias for a career where your perspectives and contributions are vital to our shared success.
· 49 views · 1 application · 5d
Senior Software/Data Engineer
Full Remote · Ukraine · Product · 4 years of experience · B2 - Upper Intermediate
The company is a global marketing tech company, recognized as a Leader by Forrester and a Challenger by Gartner. We work with some of the world's most exciting brands, such as Sephora, Staples, and Entain, who love our thought-provoking combination of art and science. With a strong product, a proven business, and the DNA of a vibrant, fast-growing startup, we're on the cusp of our next growth spurt. It's the perfect time to join our team of ~450 thinkers and doers across NYC, LDN, TLV, and other locations, where 2 of every 3 managers were promoted from within. Growing your career with the company is basically guaranteed.
Requirements
- At least 5 years of experience with .NET with some experience in Python, or, alternatively, at least 5 years of experience in Python with some experience with .NET.
- At least 3 years of experience in processing structured terabyte-scale data.
- Solid experience in SQL (advanced skills in DML).
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc.).
- Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark, etc.).
- Experience in designing distributed cloud-native systems.
- Experience in automated test creation (TDD).
- Experience in working with AI tools.
Advantages
- Being unafraid of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although ML knowledge is not required for this position, it would be awesome if you felt some passion for algorithms).
- Experience in DevOps.
- Familiarity with Docker and Kubernetes.
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
Language requirements
English
B2 – Upper Intermediate
BigQuery, ClickHouse, GCP Dataflow, Apache Hadoop, Apache Spark, Docker, Python, .NET, SQL
About Gemicle
Gemicle is an innovative, highly technological company with broad expertise in app development, complex e-commerce projects, and B2B solutions. Qualified teams of developers, designers, engineers, QAs, and animators deliver excellent products and solutions to branded, well-known companies. The knowledge and experience of its specialists across different technologies keep the company at the top level of the IT industry. Gemicle is a fusion of team spirit, professionalism, and dedication. Gemicle is not just a company, it's a lifestyle.
· 24 views · 1 application · 4d
Senior Data Engineer
Full Remote · Ukraine, Poland, Romania, Croatia · 5 years of experience · B2 - Upper Intermediate
Description
Our customer (originally the Minnesota Mining and Manufacturing Company) is an American multinational conglomerate operating in the fields of industry, worker safety, and consumer goods. Based in the Saint Paul suburb of Maplewood, the company produces over 60,000 products, including adhesives, abrasives, laminates, passive fire protection, personal protective equipment, window films, paint protection film, electrical, electronic connecting, insulating materials, car-care products, electronic circuits, and optical films.
Requirements
We are looking for a highly skilled and experienced Senior Data Engineer to join our team. In this role, you will be a key player in designing, building, and optimizing our data architecture and pipelines. You will be working on a complex data project, transforming raw data into reliable, high-quality assets ready for analytics, data science, and business intelligence. As a senior member of the team, you will also be expected to help junior/middle engineers, drive technical best practices, and contribute to the strategic direction of our data platform.
Required Qualifications & Skills
- 5+ years of professional experience in data engineering or a related role.
- A minimum of 3 years of deep, hands-on experience using Python for data processing, automation, and building data pipelines.
- A minimum of 3 years of strong, hands-on experience with advanced SQL for complex querying, data manipulation, and performance tuning.
- Proven experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
- Hands-on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
- Solid experience working with large, complex data environments, including data processing, data integration, and data warehousing.
- Proficiency in data quality assessment and improvement techniques.
- Experience working with and cleansing a variety of data formats, including unstructured and semi-structured data (e.g., CSV, JSON, Parquet, XML).
- Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
- Excellent problem-solving skills and the ability to communicate complex technical concepts effectively to both technical and non-technical audiences.
Preferred Qualifications & Skills
- Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
- Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
- Experience with consuming data from REST APIs.
- Experience with database design, optimization, and performance tuning for software application backends.
- Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
- Familiarity with modern data architecture concepts such as Data Mesh.
- Real-world experience supporting and troubleshooting critical, end-to-end production data pipelines.
Job responsibilities
Key Responsibilities
- Architect & Build Data Pipelines: Design, develop, and maintain robust, scalable, and reliable data pipelines using Python, SQL, and Spark on the Azure cloud platform.
- End-to-End Data Solutions: Architect and implement end-to-end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
- Cloud Data Services Management: Utilize Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build, orchestrate, and manage complex data workflows.
- Data Quality & Governance: Implement and enforce comprehensive data quality frameworks, including data profiling, cleansing, and validation routines to ensure the highest levels of data integrity and trust (see the sketch after this list).
- Performance Optimization: Analyze and optimize data pipelines for performance, scalability, and cost-efficiency, ensuring our systems can handle growing data volumes.
- Mentorship & Best Practices: Mentor and provide technical guidance to junior and mid-level data engineers. Lead code reviews and champion best practices in data engineering, coding standards, and data modeling.
- Stakeholder Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements, provide technical solutions, and deliver actionable data products.
- System Maintenance: Support and troubleshoot production data pipelines, identify root causes of issues, and implement effective, long-term solutions.
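For illustration, a minimal PySpark sketch of the kind of validation routine mentioned under Data Quality & Governance; the table path, column names, and checks are invented examples rather than the customer's actual framework.

```python
# Simple data-quality checks over a Delta table (illustrative sketch only).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("delta").load("/mnt/silver/customers")  # hypothetical path

checks = {
    "customer_id_not_null": df.filter(F.col("customer_id").isNull()).count() == 0,
    "customer_id_unique": df.count() == df.select("customer_id").distinct().count(),
    "has_recent_load": df.agg(F.max("ingested_at")).first()[0] is not None,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail the pipeline run so downstream consumers never see bad data.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```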
· 42 views · 10 applications · 4d
Senior Data Engineer
Full Remote · Worldwide · 4 years of experience · B2 - Upper Intermediate
We're currently looking for a Senior Data Engineer for a long-term project, with immediate start.
The role requires:
- Databricks certification (mandatory)
- Solid hands-on experience with Spark
- Strong SQL (Microsoft SQL Server) knowledge
The project involves the migration from Microsoft SQL Server to Databricks, along with data-structure optimization and enhancements.
· 36 views · 0 applications · 4d
Data Engineer (GCP, BigQuery, DBT, Python, Data Modeling, ML) to $6500
Full Remote · Argentina, Bulgaria, Spain, Poland, Portugal · Product · 5 years of experience · B2 - Upper Intermediate
We are looking for a Data Engineer with BigQuery and GCP experience for a very large and stable product company. The company builds software for the world's biggest insurance companies. If you have these skills, you can stop reading here and just send us your resume.
And if you are curious:
The project is building a new Data Platform (Data Lake, Lakehouse, Data Warehouse) on top of BigQuery using DBT, Python, and AI/ML, plus developing a whole range of data analysis ideas: RAG, LLM, etc.
We are looking exclusively for Ukrainian developers located abroad. Unfortunately, we cannot consider candidates based in Ukraine this time due to restrictions on access to critical data. We offer very flexible terms: remote work, interesting tasks, and good, calm management.
Job description:
What You'll Do:
- Design & run pipelines – create, deploy, and monitor robust data flows on GCP.
- Write BigQuery SQL – build procedures, views, and functions (see the sketch after this list).
- Build ML pipelines – automate training, validation, deployment, and model monitoring.
- Solve business problems with AI/ML – frame questions, choose methods, deliver insights.
- Optimize ETL – speed up workflows and cut costs.
- Use the GCP stack – BigQuery, Dataflow, Dataproc, Airflow/Composer, DBT, Celigo, Python, Java.
- Model data – design star/snowflake schemas for analytics and reporting.
- Guard quality & uptime – add tests, validation, and alerting; fix issues fast.
- Document everything – pipelines, models, and processes.
- Keep learning – track new tools and best practices.
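By way of example only, the sketch below shows the flavour of the BigQuery work listed above, using the google-cloud-bigquery Python client to create a reporting view and query it. The project, dataset, and table names are invented placeholders.

```python
# Create a reporting view in BigQuery and read it back (illustrative sketch only).
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Materialise a simple reporting view over a hypothetical fact table.
client.query(
    """
    CREATE OR REPLACE VIEW `my_project.analytics.daily_revenue` AS
    SELECT order_date, SUM(amount) AS revenue
    FROM `my_project.analytics.fact_orders`
    GROUP BY order_date
    """
).result()  # wait for the DDL job to finish

# Query the view and iterate over the rows.
rows = client.query(
    "SELECT order_date, revenue FROM `my_project.analytics.daily_revenue` "
    "ORDER BY order_date DESC LIMIT 7"
).result()
for row in rows:
    print(row.order_date, row.revenue)
```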
What You'll Need:
- 5+ yrs building data/ETL solutions; 2+ yrs heavy GCP work.
- 2+ yrs hands-on AI/ML pipeline experience.
- Proven BigQuery warehouse design and scaling.
- Deep SQL, Python, DBT, Git; Talend, Fivetran, or similar ETL tools.
- Strong data-modeling skills (star, snowflake, normalization).
- Solid grasp of Data Lake vs. Data Warehouse concepts.
- Problem-solver who works well solo or with a team.
- Clear communicator with non-technical partners.
- Bachelor's in CS, MIS, CIS, or equivalent experience.
· 15 views · 0 applications · 3d
Palantir Foundry Engineer
Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
Project Description:
We are seeking a Palantir Foundry & AIP Engineer with hands-on experience across the full Foundry ecosystem and Palantir's Artificial Intelligence Platform (AIP). This role goes beyond data engineering: you will design, build, and operationalize AI-powered workflows, agents, and applications that drive tangible business outcomes.
The ideal candidate is a self-starter, able to translate complex business needs into scalable technical solutions, and confident working directly with stakeholders to maximize the value of Foundry and AIP.
Responsibilities:
• Data & Workflow Engineering: Design, develop, and maintain scalable pipelines, transformations, and applications within Palantir Foundry.
• AIP & AI Enablement:
o Support the design and deployment of AIP use cases such as copilots, retrieval workflows, and decision-support agents.
o Ground agents and logic flows using RAG (retrieval-augmented generation) by connecting them to relevant data sources, embedding/vector search, and ontology content.
o Use Ontology-Augmented Generation (OAG) when needed: operational decision-making where logic, data, actions, and relationships are embedded in the Ontology.
o Collaborate with senior engineers on agent design, instructions, and evaluation using AIP's native features.
• End-to-End Delivery: Work with stakeholders to capture requirements, design solutions, and deliver working applications.
• User Engagement: Provide training and support for business teams adopting Foundry and AIP.
• Governance & Trust: Ensure solutions meet standards for data quality, governance, and responsible use of AI.
• Continuous Improvement: Identify opportunities to expand AIP adoption and improve workflow automation.
Mandatory Skills Description:
Required Qualifications:
• 10+ years of overall experience as a Data and AI Engineer.
• 2+ years of professional experience with the Palantir Foundry ecosystem (data integration, ontology, pipelines, applications).
• Strong technical skills in Python, PySpark, SQL, and data modelling.
• Practical experience using or supporting AIP features such as RAG workflows, copilots, or agent-based applications.
• Ability to work independently and engage directly with non-technical business users.
• Strong problem-solving mindset and ownership of delivery.
Preferred Qualifications:
• Familiarity with AIP Agent Studio concepts (agents, instructions, tools, testing).
• Exposure to AIP Evals and evaluation/test-driven approaches.
• Experience with integration patterns (APIs, MCP, cloud services).
• Consulting or applied AI/ML background.
• Experience in Abu Dhabi or the broader MENA region.
· 13 views · 0 applications · 3d
Senior Data Platform Architect
Full Remote · Ukraine · 10 years of experience · B2 - Upper Intermediate
Project Description:
We are seeking an expert with deep proficiency as a Platform Engineer, possessing experience in data engineering. This individual should have a comprehensive understanding of both data platforms and software engineering, enabling them to integrate the platform effectively within an IT ecosystem.
Responsibilities:
• Manage and optimize data platforms (Databricks, Palantir).
• Ensure high availability, security, and performance of data systems.
• Provide valuable insights about data platform usage.
• Optimize computing and storage for large-scale data processing.
• Design and maintain system libraries (Python) used in ETL pipelines and platform governance.
• Optimize ETL Processes – enhance and tune existing ETL processes for better performance, scalability, and reliability.
Mandatory Skills Description:
• Minimum 10 years of experience in IT/Data.
• Minimum 5 years of experience as a Data Platform Engineer/Data Engineer.
• Bachelor's in IT or related field.
• Infrastructure & Cloud: Azure, AWS (expertise in storage, networking, compute).
• Data Platform Tools: Any of Palantir, Databricks, Snowflake.
• Programming: Proficiency in PySpark for distributed computing and Python for ETL development.
• SQL: Expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
• Data Warehousing: Experience working with data warehousing concepts and platforms, ideally Databricks.
• ETL Tools: Familiarity with ETL tools & processes.
• Data Modelling: Experience with dimensional modelling, normalization/denormalization, and schema design.
• Version Control: Proficiency with version control tools like Git to manage codebases and collaborate on development.
• Data Pipeline Monitoring: Familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
• Data Quality Tools: Experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.
· 35 views · 2 applications · 3d
Senior Data Platform Engineer to $7500
Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · B2 - Upper Intermediate
Who we are:
Adaptiq is a technology hub specialising in building, scaling and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Finaloop is building the data backbone of modern finance – a real-time platform that turns billions of eCommerce transactions into live, trustworthy financial intelligence. We deal with high-volume, low-latency data at scale, designing systems that off-the-shelf tech simply can't handle. Every line of code you write keeps thousands of businesses financially aware – instantly.
About the Role:
We're hiring a Senior Data Platform Engineer to build the core systems that move, transform, and power financial data in real time. You'll be part of the core engineering group building the foundational infrastructure that powers our entire company.
You'll work closely with senior engineers and the VP of Engineering on high-scale architecture, distributed pipelines, and orchestration frameworks that define how our platform runs.
It's pure deep engineering – complex, impactful, and built to last.
Key Responsibilities:
- Designing, building, and maintaining scalable data pipelines and ETL processes for our financial data platform (a minimal sketch follows this list)
- Developing and optimizing data infrastructure to support real-time analytics and reporting
- Implementing data governance, security, and privacy controls to ensure data quality and compliance
- Creating and maintaining documentation for data platforms and processes
- Collaborating with data scientists and analysts to deliver actionable insights to our customers
- Troubleshooting and resolving data infrastructure issues efficiently
- Monitoring system performance and implementing optimizations
- Staying current with emerging technologies and implementing innovative solutions
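As a hedged illustration of the pipeline and orchestration work described above, here is a minimal Airflow 2.x (2.4+) TaskFlow sketch of a daily extract-transform-load flow. The DAG id, task bodies, and data are placeholders and do not reflect Finaloop's actual codebase.

```python
# Minimal daily ETL DAG using the Airflow TaskFlow API (illustrative sketch only).
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_financial_pipeline():
    @task
    def extract() -> list[dict]:
        # Pull raw transactions from the source system (stubbed here).
        return [{"tx_id": 1, "amount": 125.0}, {"tx_id": 2, "amount": -40.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Tag debits/credits; a real pipeline would also validate and deduplicate.
        return [{**r, "direction": "credit" if r["amount"] >= 0 else "debit"} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Write to the warehouse (e.g., Snowflake) in a real pipeline.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))

example_financial_pipeline()
```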
Required Competence and Skills:
- 7+ years experience in Data Engineering or Platform Engineering roles
- Strong programming skills in Python and SQL
- Experience with orchestration platforms and tools (Airflow, Dagster, Temporal or similar)
- Experience with MPP platforms (e.g., Snowflake, Redshift, Databricks)
- Hands-on experience with cloud platforms (AWS) and their data services
- Understanding of data modeling, data warehousing, and data lake concepts
- Ability to optimize data infrastructure for performance and reliability
- Ability to design, build, and optimize Docker images to support scalable data pipelines
- Familiarity with CI/CD concepts and principles
- Fluent English (written and spoken)
Nice to have skills:
- Experience with big data processing frameworks (Apache Spark, Hadoop)
- Experience with stream processing technologies (Flink, Kafka, Kinesis)
- Knowledge of infrastructure as code (Terraform)
- Experience deploying, managing, and maintaining services on Kubernetes clusters
- Experience building analytics platforms or clickstream pipelines
- Familiarity with ML workflows and MLOps
- Experience working in a startup environment or fintech industry
The main components of our current technology stack:
- AWS Serverless, Python, Airflow, Airbyte, Temporal, PostgreSQL, Snowflake, Kubernetes, Terraform, Docker.
· 6 views · 1 application · 3d
Salesforce Consumer Goods Cloud (CGC)
Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate
1. Job Description: Salesforce Consumer Goods Cloud (CGC) Subject Matter Expert (SME)
About the Role
We are seeking a highly skilled Salesforce Consumer Goods Cloud (CGC) Subject Matter Expert (SME) to serve as the key consultant for our FMCG/CPG clients. Your mission is to ensure that CGC implementations perfectly align with the client's best business practices in retail and distribution. You will act as the bridge between complex business processes (e.g., Retail Execution, Trade Promotion Management) and standard (out-of-the-box) Salesforce functionality.
Key Responsibilities
- Conduct a Business Process Audit to identify misalignments between the client's current crippled processes and native CGC capabilities.
- Consult clients on CGC best practices for Retail Execution, Trade Promotion Management (TPM), Order Management, and Direct Store Delivery (DSD).
- Develop "De-Customization" strategies to replace complex, inefficient custom logic with standard Salesforce features.
- Collaborate with Solution Architects and Developers to ensure the technical design aligns with business requirements and the CGC data model.
- Participate in the Discovery and Gap Analysis phases, providing clear, prioritized recommendations to restore value to the implementation.
- Support sales efforts and develop Statements of Work (SOW) for Phase 2 (Remediation Project).
Requirements
- Minimum 3+ years of experience working with Salesforce Consumer Goods Cloud (or deep experience in the FMCG/CPG segment with Salesforce).
- Profound understanding of core CGC features: Visit Management, Retail Execution, Pricing & Promotions, Store/Route Planning.
- Possession of Salesforce certifications, specifically Salesforce Certified Consumer Goods Cloud Accredited Professional or Salesforce Certified Sales Cloud Consultant (preferred).
- Excellent communication and presentation skills for effective engagement with client executives.
- Ability to translate complex business problems into clear, actionable CGC-based solutions.
What We Offer (Benefits)
- Competitive Salary: Attractive, competitive salary and bonus structure commensurate with your experience and contribution.
- Professional and Supportive Team: Join a team of highly skilled Salesforce experts focused on shared success and continuous improvement.
- Flexibility and Remote Work: Opportunity to work fully remotely or with a flexible hybrid schedule, allowing you to balance work and personal life effectively.
· 121 views · 20 applications · 3d
Data Engineer (Junior/Middle)
Full Remote · Worldwide · Product · 1 year of experience
We operate an integrated sushi-restaurant business and require a Data Engineer to design and implement a centralised, well-governed data warehouse; develop and automate data pipelines that support critical reporting, including multi-platform customer-order analytics, marketing performance metrics, executive dashboards, and other business-essential analyses; and collaborate on internal machine-learning projects by providing reliable, production-ready data assets.
Our requirements:
- Professional experience (1–3 years) in data engineering, with demonstrable ownership of end-to-end ETL/ELT pipelines in production.
- Strong SQL and Python proficiency, including performance tuning, modular code design, and automated testing of data transformations.
- Hands-on expertise with modern data-stack components (e.g., Airflow, dbt, Spark, or comparable orchestration and processing frameworks).
- Cloud-native skills on AWS or Azure, covering at least two services from Glue, Athena, Lambda, Databricks, Data Factory, or Snowflake, plus cost- and performance-optimization best practices.
- Solid understanding of dimensional modelling, data-quality governance, and documentation standards, ensuring reliable, audited data assets for analytics and machine-learning use cases.
Your responsibilities:
- Designing, developing, and maintaining scalable data pipelines and ETL.
- Optimizing data processing workflows for performance, reliability, and cost-efficiency.
- Ensuring compliance with data quality standards and implementing governance best practices.
- Driving and supporting the migration of on-premise data products to the warehouse.
· 619 views · 34 applications · 7d
Strong Junior Data Engineer
Worldwide · 1 year of experience · B1 - Intermediate
Dataforest is looking for a growth-minded Data Engineer who will become part of our friendly team. As a Data Engineer, you will solve interesting problems using advanced technologies for collecting, processing, analyzing, and monitoring data.
If you are not afraid of challenges, this vacancy is for you!
What matters to us:
• 1+ year of experience as a Data Engineer;
• Experience with Python;
• Experience with Databricks and Data Factory;
• Experience with AWS/Azure;
• Experience with ETL/ELT pipelines;
• Experience with SQL.
Responsibilities:
• Building ETL/ELT pipelines and data management solutions;
• Applying data processing algorithms;
• Working with SQL queries to extract and analyze data;
• Analyzing data and applying data processing algorithms to solve business problems.
We offer:
• Work with a highly skilled engineering team on interesting and complex projects;
• Learning the latest technologies;
• Communication with international clients and challenging tasks;
• Opportunities for personal and professional growth;
• A competitive salary fixed in USD;
• Paid vacation and sick leave;
• A flexible work schedule;
• A friendly working atmosphere without bureaucracy;
• Many traditions: corporate parties, team-building events, themed gatherings, and much more!
If our vacancy appeals to you, send us your resume and become part of our team.
· 151 views · 9 applications · 13d
System engineer Big Data
Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary
ABOUT US
UKRSIB Tech is an ambitious IT team of around 400 specialists driving technology at UKRSIBBANK.
We build top-tier banking for more than 2,000,000 clients and aim to take the financial sector in Ukraine to a new level. Our products are used by everyday banking customers, leaders of the Ukrainian economy, and large international corporations.
We are grateful to our defenders who are devotedly protecting the freedom and independence of Ukraine, and we create a supportive environment for working at the bank.
Your future tasks:
- updating and patching software in line with vendor releases
- project work to optimize subsystems
- working with system vendors to resolve integration issues involving IBM DataStage, Teradata, Oracle, JupyterHub, Docker, and Python
- user support (incident resolution and consulting) and access administration
- testing new functionality
- collaborating with other IT units to define optimal continuous integration / continuous delivery processes
- helping build a stable and reliable IT infrastructure to support the DataHub and DataStage systems
- setting up availability monitoring for DataHub and DataStage according to the given requirements, and proposing optimizations to technological procedures and business processes
- helping resolve technical and system problems in DataHub and DataStage, investigating non-standard situations, and drawing up conclusions and remediation proposals
We are looking for a specialist who has:
- a completed higher technical/engineering education
- 2+ years of experience in the field
- knowledge of the hardware, software, data transfer tools, and user applications used in Hadoop systems
- basic knowledge of the bank's IT infrastructure (applications, servers, networks)
- knowledge of and hands-on skills with DBMSs
- deep knowledge of administering ELT/ETL tools (IBM DataStage)
- knowledge of and skills in working with the Hive relational DBMS, which is a component of DataHub
- knowledge of internal IT processes and standards
- SQL/HQL scripting knowledge and skills.
In addition to a team of like-minded people and interesting work, you will get:
Stability:
- official employment
- medical and life insurance fully paid by the Bank
- a salary on par with leading top employers
- 25 days of annual vacation, extra days off for special occasions, and social leave in accordance with Ukrainian law
- annual salary reviews based on your own performance and the Bank's financial results