Jobs
· 12 views · 0 applications · 6h
Data Engineer
Hybrid Remote · Slovakia · 4 years of experience · Upper-Intermediate
Now is an amazing time to join our company as we continue to empower innovators to change the world. We provide top-tier technology consulting, R&D, design, and software development services across the USA, UK, and EU markets. And this is where you come in!
We are looking for a skilled Data Engineer to join our team.
About the Project
We're launching a Snowflake Proof of Concept (PoC) for a leading football organization in Germany. The project aims to demonstrate how structured and well-managed data can support strategic decision-making in the sports domain.
Key Responsibilities
- Define data scope and identify data sources
- Design and build the data architecture
- Implement ETL pipelines into a data lake
- Ensure data quality and consistency
- Collaborate with stakeholders to define analytics needs
- Deliver data visualizations using Power BI
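For a sense of the hands-on work these responsibilities imply, here is a minimal, hypothetical sketch of one ETL step: loading raw data-lake files into a Snowflake table with the Python connector. The account, stage, and table names are invented for illustration and are not part of the actual project.
```python
# Hypothetical sketch of one ETL step: loading raw Parquet files from a
# data-lake stage into a Snowflake table. All names and credentials are
# placeholders for illustration only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="POC_WH", database="SPORTS_POC", schema="RAW",
)
try:
    cur = conn.cursor()
    # DATA_LAKE_STAGE is assumed to be an external stage over the data lake.
    cur.execute("""
        COPY INTO RAW.MATCH_EVENTS
        FROM @DATA_LAKE_STAGE/match_events/
        FILE_FORMAT = (TYPE = 'PARQUET')
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # one row of load results per file
finally:
    conn.close()
```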
Required Skills
- Strong experience with Snowflake, ETL pipelines, and data lakes
- Power BI proficiency
- Knowledge of data architecture and modeling
- Data quality assurance expertise
- Solid communication in English (B2+)
Nice to Have
- Familiarity with GDPR
- Experience in sports or media-related data projects
- Experience with short-term PoCs and agile delivery
What We Offer
- Contract for the PoC phase with potential long-term involvement
- All cloud resources and licenses provided by the client
- Hybrid/onsite work in Bratislava
- Opportunity to join a meaningful data-driven sports project with European visibility
Interested? Send us your CV and hourly rate (EUR).
We're prioritizing candidates based in Bratislava or elsewhere in Europe.
Interview Process:
1️⃣ Internal technical interview
2️⃣ Interview with the client
· 7 views · 0 applications · 7h
Senior Data Engineer
Full Remote · Ukraine · 4 years of experience · Upper-Intermediate
Project Description
GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance.
Job Description
- Strong experience designing, building, and maintaining data pipelines using Azure Data Factory (ADF) for data ingestion and processing, and leveraging Databricks for data transformation and analytical workloads
- Strong experience with Databricks Auto Loader ingesting from Cosmos DB/Blob Storage (see the sketch after this list)
- Design, create and maintain data pipelines that leverage Delta tables for efficient data storage and processing within a Databricks environment
- Experience in provisioning Databricks/Synapse/ADF workspaces
- Experience with RDBMS, such as PostgreSQL or MySQL, as well as NoSQL
- Strong experience with Azure Data Factory (ADF)
- Preferred experience in analytical tools (Splunk, Kibana) and ability to pick up, work with and explore new analytical tools
- Data modeling and schema design
- Proven understanding and demonstrable implementation experience in Azure
- Excellent interpersonal and teamwork skills
- Strong problem solving, troubleshooting and analysis skills
- Good knowledge of Agile Scrum
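As a rough illustration of the Auto Loader plus Delta pattern referenced above, the sketch below shows a streaming read of new files from Blob storage into a Delta table on Databricks; the storage paths, checkpoint locations, and table name are assumptions, not the project's actual setup.
```python
# Minimal Auto Loader sketch (Databricks): incrementally pick up new JSON files
# from Blob storage and append them to a Delta table. All paths/names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided on a Databricks cluster

(
    spark.readStream.format("cloudFiles")                      # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/claims/_schema")
    .load("abfss://raw@storageacct.dfs.core.windows.net/claims/")
    .writeStream
    .option("checkpointLocation", "/mnt/checkpoints/claims")
    .trigger(availableNow=True)                                 # process available files, then stop
    .toTable("bronze.claims")                                   # target Delta table
)
```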
Job Responsibilities
- Responsible for the design and implementation of key components in the system.
- Takes ownership of features, leads design decisions
- Peer-reviews code and provides constructive feedback
- Takes part in defining technical strategies and best practices for the team
- Assists with backlog refinement and estimation at story level
- Identifies and resolves bottlenecks in the development process (such as performance bottlenecks)
- Solves complex tasks without supervision.
· 83 views · 14 applications · 8h
Data Support / Junior Data Engineer to $600
Full Remote · Ukraine · Product · 0.5 years of experience
Hi! We currently have an open Data Support / Junior Data Engineer vacancy on a pro-Ukrainian project.
Company: a product IT company, under NDA. The company was founded in 2022 and has investment.
The project is already live; only support is needed now. So this can be a good start for a beginner to gain commercial experience for their CV and settle into the role.
What we expect from the specialist:
- 6+ months of commercial experience with Python
- Knowledge of Linux, Pandas, and ETL processes
- Experience working in VS Code
- Experience with DuckDB
- Elasticsearch is a plus
- English is not required.
Task plan (a short list from the client):
1️⃣ Processing structured and unstructured data.
2️⃣ Aggregating and transforming data.
3️⃣ Optimizing ETL processes.
4️⃣ Extending functionality and ongoing maintenance.
5️⃣ Managing the Elasticsearch ecosystem (optional).
Task description from the team (a quote from the developer):
"This is a developer-support position. No programming is required. You will need to load databases and configure search over them through a web interface. Overall, the tool already works; it only needs to be scaled."
Terms:
- Salary ~$600
- Flexible schedule, no time trackers
- Small team, reasonable tasks (which makes it easier to settle into the work, with no crunches, rush, or stress)
- The opportunity to contribute to our country's development and to strengthen and defend it with your own mind.
Selection stages:
1. Screening call, 20-30 minutes.
2. A small test task (optional).
3. Interview with the developer (the only one on the project) and the PM, 30-60 minutes.
4. Final / Offer: a short conversation with the project lead, 15-20 minutes.
· 28 views · 2 applications · 12h
Senior Data Engineer
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Upper-Intermediate
We are building a greenfield MVP for a healthcare analytics platform focused on patient-level insights from large-scale pharmacy claims data. The platform is being developed by a newly formed client that already has customer interest and will be used for real-time patient analytics at the point of service.
All data is ingested via Snowflake Share from external vendors (no ingestion layer needed) and processed through a typical ETL pipeline to create a final patient-level dataset (~300M rows). This normalized output will be loaded into a PostgreSQL database (or comparable RDBMS; final tooling to be confirmed) and served via a low-latency REST API.
Key pipeline stages include:
- Standardization (cleansing, mapping, enrichment using BIN/PCN lookups)
- Projection and extrapolation using a simple classification model or proximity search
- Summarization to per-patient records
The data is updated weekly (batch-based system). We are not building the ML model but must integrate with it and support its output. The system will initially serve two core API endpoints:
- Given patient info, return plan/coverage info with confidence score
- Given patient info, return medical history
You will be part of a lean, senior-level engineering team and expected to own key parts of the ETL and data modeling effort.
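One plausible shape for the weekly batch load into PostgreSQL (a sketch only; the table names, columns, and the staging-swap approach are assumptions rather than the project's confirmed design) is to build the new snapshot in a staging table and swap it in atomically, so the API never reads a half-loaded table:
```python
# Sketch: weekly refresh of the normalized patient dataset from Snowflake into
# PostgreSQL behind a staging-table swap. All names are hypothetical.
import psycopg2
from psycopg2.extras import execute_values
import snowflake.connector

def weekly_refresh(sf_params: dict, pg_dsn: str, batch_size: int = 50_000) -> None:
    sf = snowflake.connector.connect(**sf_params)
    pg = psycopg2.connect(pg_dsn)
    try:
        sf_cur = sf.cursor()
        sf_cur.execute("SELECT patient_id, plan_id, confidence FROM ANALYTICS.PATIENT_SUMMARY")
        with pg, pg.cursor() as cur:
            # Build the new snapshot next to the live table.
            cur.execute("DROP TABLE IF EXISTS patient_summary_staging")
            cur.execute("CREATE TABLE patient_summary_staging (LIKE patient_summary INCLUDING ALL)")
            while True:
                rows = sf_cur.fetchmany(batch_size)
                if not rows:
                    break
                execute_values(
                    cur,
                    "INSERT INTO patient_summary_staging (patient_id, plan_id, confidence) VALUES %s",
                    rows,
                )
            # Swap the snapshot in atomically; the REST API keeps reading patient_summary.
            cur.execute("ALTER TABLE patient_summary RENAME TO patient_summary_old")
            cur.execute("ALTER TABLE patient_summary_staging RENAME TO patient_summary")
            cur.execute("DROP TABLE patient_summary_old")
    finally:
        sf.close()
        pg.close()
```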
Key Responsibilities
- Build performant and scalable ETL pipelines in Snowflake, transforming wide raw claims datasets into normalized outputs
- Apply cleansing, mapping, enrichment logic (e.g., payer enrichment via BIN/PCN lookups)
- Collaborate on projection/extrapolation workflows, integrating with classification models or rules-based engines
- Load processed outputs into PostgreSQL to power real-time REST API endpoints
- Tune Snowflake queries for cost-efficiency and speed, optimize workloads for batch processing (~weekly cadence)
- Work closely with the API engineer to ensure alignment between data schema and API needs
- Ensure data privacy, compliance, and PHI de-identification coordination (with Datavant)
- Contribute to architectural decisions and the implementation roadmap in a fast-moving MVP cycle
Requirements
- 5+ years in data engineering or data platform development roles
- Advanced SQL skills and experience working with wide, high-volume datasets (e.g., 100M+ rows)
- Experience with Snowflake or readiness to quickly ramp up on it, including performance tuning and familiarity with native features (streams, tasks, stages)
- Proficiency in Python for scripting, orchestration, and integration
- Experience working with batch pipelines and familiar with best practices for data warehousing
- Solid understanding of ETL design patterns and ability to work independently in a small, fast-paced team
- Awareness of data compliance standards (HIPAA, PHI de-identification workflows)
Preferred Qualifications
- Experience with Snowpark (Python) or other in-Snowflake processing tools
- Familiarity with payer enrichment workflows or healthcare claims data
- Previous use of classification models, vector similarity, or proximity-based data inference
- Hands-on experience with AWS EC2, S3 and integrating cloud resources with Snowflake
- Exposure to PostgreSQL and API integration for analytic workloads
· 3 views · 0 applications · 20h
On-Site Data Center Engineer (Hyper-V and Infrastructure Upgrades)
Full Remote · Countries of Europe or Ukraine · 5 years of experience · Intermediate
Requirements:
- 4+ years of hands-on experience managing on-premise data center infrastructure, including server hardware setup, troubleshooting, and virtualization (Hyper-V preferred)
- Install, configure, and maintain physical servers for Hyper-V virtualization environments.
- Experience with server hardware setup, including RAID and remote management tools configurations, BIOS/UEFI settings, and hardware diagnostics.
- Troubleshoot and resolve hardware and network issues
- Knowledge of critical data center infrastructure (Power configurations, HVAC, Cabling)
- Demonstrated proficiency in software applications such as the Microsoft Office 365 Suite and G Suite
- Strong troubleshooting methodology and attention to detail
- Working knowledge of HPE/Dell server platforms and Juniper or Arista networking equipment.
- Ability to work on-site in London to support ongoing infrastructure upgrades.
- Strong troubleshooting skills and ability to assist with real-time problem resolution.
Nice-to-Have:
- Experience with large-scale data center setups or expansions.
- Strong understanding of performing maintenance on mission-critical power infrastructure in a live environment
- Familiarity with networking and server provisioning.
- Previous experience coordinating with remote teams to ensure smooth project execution.
- Experience using Data Center Infrastructure Management (DCIM) tools to manage data center infrastructure
- Experience managing vendors while working on data center build projects
Key Responsibilities:
- Assist with Hyper-V installations and configuration within the data center.
- Setting up and configuring on-premise networks for server infrastructures.
- Work closely with the remote engineering team to facilitate a smooth and efficient upgrade process.
- Document infrastructure setups and procedures.
- Provide on-site support to minimize travel requirements for the core team.
- Identify and resolve any issues that arise during installations and upgrades.
About the Project:
This project focuses on optimizing the power infrastructure within a London-based data center while deploying Hyper-V installations. The goal is to leverage all remaining power resources efficiently, ensuring a seamless and accelerated implementation. Having an on-site contractor will reduce the need for frequent travel, speeding up the project timeline and ensuring smooth execution.
· 21 views · 1 application · 1d
Senior Data Engineer
Full Remote · Poland · Product · 5 years of experience · Upper-Intermediate
Project
Toshiba is the global market share leader in retail store technology. As retail's first choice for integrated in-store solutions, Toshiba's innovative technology enhances customer engagement, transforms the in-store experience, and accelerates the digital transformation of the retail industry. Today, Toshiba is positioned to define the dominant practices of retail automation and advance the future of retail.
The product is aimed at comprehensive retail chain automation and covers all work processes of large retail chain operators. The product covers retail store management, warehouse management, payment systems integration, logistics management, hardware/software store automation, etc.
The product is already adopted by the market, and the biggest US and global retail operators are among the clients.
Technology Stack
Azure Databricks, Apache Spark (PySpark), Delta Lake, ADF, Synapse, Python, SQL, Power BI, MongoDB/CosmosDB, PostgreSQL, Terraform, Jenkins
What you will do
We are looking for an experienced Azure Databricks Engineer to join our team and contribute to building and optimizing large-scale data solutions. You will be responsible for working with Azure Databricks and Power BI, writing efficient Python and SQL scripts, optimizing data workflows to ensure performance and scalability, and building meaningful reports.
Must-have skills
- Bachelor's or Master's degree in Data Science, Computer Science, or a related field.
- 3+ years of experience as a Data Engineer or in a similar role.
- Proven experience in data analysis, data warehousing, and data reporting.
- Proven experience with Azure Databricks (Python, PyTorch) and Azure infrastructure
- Experience with Business Intelligence tools like Power BI.
- Proficiency in querying languages like SQL.
- Strong problem-solving skills and attention to detail.
- Proven ability to translate business requirements into technical solutions.
Nice-to-have skills
- Knowledge and experience in e-commerce/retail
· 23 views · 9 applications · 1d
Principal Data Engineer
Full Remote · Europe except Ukraine · 5 years of experience · Upper-Intermediate
Tech stack: Python, Java, MS SQL, Oracle, PostgreSQL, MySQL, Scala, Kafka, Data
Domain expertise:
Location: Europe
Expected date: ASAP
Duration: Long-term
Expiration: ASAP
Description:
Required Skills:
- 5+ years in data engineering (Python, Scala, or Java).
- Strong expertise in Apache Kafka for stream processing.
- Experience with databases like PostgreSQL, MySQL, MSSQL, Oracle.
- Familiarity with cloud platforms (AWS, Azure, GCP) and on-prem solutions.
- Leadership skills to guide and mentor a team.
Must-have: Expertise in ClickHouse & iGaming industry experience
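As a rough sketch of the Kafka-plus-ClickHouse combination this listing emphasizes (the topic, host, and table names are invented for the example), a consumer might batch messages into ClickHouse like this:
```python
# Hypothetical sketch: consume a Kafka topic and batch rows into ClickHouse.
import json

from clickhouse_driver import Client
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "bets_placed",                              # hypothetical topic
    bootstrap_servers="kafka:9092",
    group_id="bets-to-clickhouse",
    enable_auto_commit=False,
    value_deserializer=lambda v: json.loads(v),
)
clickhouse = Client(host="clickhouse")          # hypothetical host

batch = []
for message in consumer:
    batch.append((message.value["user_id"], message.value["amount"]))
    if len(batch) >= 1000:
        # Insert the batch, then commit Kafka offsets only after the write succeeds.
        clickhouse.execute("INSERT INTO bets (user_id, amount) VALUES", batch)
        consumer.commit()
        batch.clear()
```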
· 19 views · 0 applications · 1d
Senior Python Data Engineer (only Ukraine)
Ukraine · Product · 6 years of experience · Upper-Intermediate
The company is the first Customer-Led Marketing Platform. Its solutions ensure that marketing always starts with the customer instead of a campaign or product. It is powered by the combination of 1) rich historical, real-time, and predictive customer data, 2) AI-led multichannel journey orchestration, and 3) statistically credible multitouch attribution of every marketing action.
Requirements:
- At least 5 years of experience with Python
- At least 3 years of experience in processing structured terabyte-scale data (processing structured datasets of several hundred gigabytes).
- Solid experience in SQL and NoSQL (ideally GCP storage services: Firestore, BigQuery, Bigtable and/or Redis, Kafka), with advanced skills in DML.
- Hands-on experience with OLAP storage (at least one of Snowflake, BigQuery, ClickHouse, etc).
- Deep understanding of data processing services (at least one of Apache Airflow, GCP Dataflow, Apache Hadoop, Apache Spark).
- Experience in automated test creation (TDD).
- Fluent spoken English.
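To make the TDD expectation concrete, here is a small, hypothetical example of the style this can take: a pure transformation function plus a pytest case. The column names and sessionization rule are illustrative only.
```python
# Hypothetical TDD-style example: a pure Pandas transformation plus a pytest case.
import pandas as pd

def sessionize(events: pd.DataFrame, gap_minutes: int = 30) -> pd.DataFrame:
    """Assign a session id per user: a new session starts after a gap of inactivity."""
    events = events.sort_values(["user_id", "ts"])
    gap = events.groupby("user_id")["ts"].diff() > pd.Timedelta(minutes=gap_minutes)
    events["session_id"] = gap.groupby(events["user_id"]).cumsum()
    return events

def test_sessionize_splits_on_gap():
    df = pd.DataFrame({
        "user_id": [1, 1, 1],
        "ts": pd.to_datetime(["2024-01-01 10:00", "2024-01-01 10:10", "2024-01-01 12:00"]),
    })
    out = sessionize(df)
    # The third event is more than 30 minutes after the second, so it opens session 1.
    assert list(out["session_id"]) == [0, 0, 1]
```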
Advantages:
- Not being afraid of mathematical algorithms (part of our team's responsibility is developing ML models for data analysis; although ML knowledge is not required for the current position, it would be awesome if you felt some passion for algorithms).
- Experience in any OOP language.
- Experience in DevOps (Familiarity with Docker and Kubernetes).
- Experience with GCP services would be a plus.
- Experience with IaC would be a plus.
- Experience in Scala.
What we offer:
- 20 working days' vacation;
- 10 paid sick leaves;
- public holidays;
- equipment;
- accountant helps with documents;
- many cool team activities.
Apply now and start a new page of your fast career growth with us!
· 25 views · 2 applications · 1d
Data engineer (relocation to Berlin)
Office Work · Germany · 5 years of experience · Upper-Intermediate
At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
About the Data Solution Team
As a Data Engineer, you will join our Data Solution Team, which drives our data-driven innovation. The team is pivotal to powering our business processes and enhancing customer experiences through effective data utilization. Our focus areas include:
- Developing integrations between systems.
- Analyzing customer data to derive actionable insights.
- Improving customer experience by leveraging statistical and machine learning models.
Our tech stack includes:
- Cloud & Infrastructure: AWS (S3, EKS, QuickSight, and monitoring tools).
- Data Engineering & Analytics: Apache Spark (Scala and PySpark on Databricks), Apache Kafka (Confluent Cloud).
- Infrastructure as Code: Terraform.
- Development & Collaboration: Bitbucket, Jira.
- Integration Tools & APIs: Segment.io, Blueshift, Zendesk, Google Maps API, and other external systems
Job requirements
As a Data Engineer, you will:
- Design, build, and own near-real-time and batch data processing workflows.
- Develop efficient, low-latency data pipelines and systems.
- Maintain high data quality while ensuring GDPR compliance.
- Analyze customer data and extract insights to drive business decisions.
- Collaborate with Product, Backend, Marketing, and other teams to deliver impactful features.
- Help data scientists deliver ML/AI solutions.
Requirements:
- 5+ years of experience as a Data Engineer, with expertise in Apache Spark using Python and Scala.
- 3+ years of experience with Apache Kafka.
- Management experience or Tech Lead experience.
- Strong proficiency in SQL.
- Experience with CI/CD processes and platforms.
- Hands-on experience with cloud technologies such as AWS, GCP, or Azure.
- Familiarity with Terraform.
- Comfortable working in an agile environment.
- Excellent problem-solving and self-learning skills, with the ability to operate both independently and as part of a team.
Nice to have:
- Hands-on experience with Databricks.
- Experience with document databases, particularly Amazon DocumentDB.
- Familiarity with handling high-risk data.
- Exposure to BI tools such as AWS QuickSight or Redash.
- Work experience in a B2C software company, especially in the FinTech industry.
What we offer:
Our goal is to set up a great working environment. Become part of the process and:
- Shape the future of our organization as part of the international founding team.
- Take on responsibility from day one.
- Benefit from various coaching and training opportunities, including a sports subscription, German classes, and a €1000 yearly self-development budget.
- Work in a hybrid working model from the comfortable Berlin office.
- Enjoy a modern workplace in the heart of Berlin with drinks, fresh fruit, a kicker table, and ping pong.
· 16 views · 0 applications · 1d
DataOps Team Lead to $6500
Poland · Product · 3 years of experience · Upper-Intermediate
Only Krakow!!!
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Bigabid is on a mission to fuel mobile app growth through AI-powered prediction and precision targeting. With over 50TB of data processed daily, 4M+ requests per second, and 1B+ unique users reached weekly, they're scaling like the biggest names in tech, but with startup agility.
At Bigabid, they're building intelligent systems that predict which mobile apps users will love and connect them at the perfect moment. Their real-time user modeling engine continuously ingests and analyzes massive streams of behavioral data to power highly targeted ad campaigns with exceptional performance. As a result, they are seeing remarkable growth and close to zero customer churn. To support their hyper-growth and continue propelling the growth of some of the biggest names in the mobile industry, they offer a wide range of opportunities for different skill levels and experiences.
About the Role:
We're now looking for a DataOps Team Lead to own the stability, observability, and quality of the data pipelines and processes. This is a 50/50 hands-on and leadership role, perfect for someone who thrives at the intersection of data engineering, operational excellence, and cross-functional collaboration.
You'll be part of the mission-critical DataOps layer, ensuring data health and reliability across a product that directly influences business outcomes. You'll support the engine that empowers some of the world's top app developers to grow smarter and faster.
Key Responsibilities:
- Lead a DataOps team of 2 (and growing), with ownership over Bigabid's core data quality and observability processes
- Build, maintain, and monitor robust data pipelines and workflows (Python + Airflow)
- Act as the go-to person for identifying and resolving data issues affecting production systems
- Coordinate with multiple teams: Data Engineering, Product, BI, Operations, Backend, and occasionally Data Science
- Own projects such as metadata store development, anomaly detection systems, and scalable data quality frameworks
- Balance strategic project leadership with hands-on scripting, debugging, and optimizations
- Promote a culture of quality, reliability, and clear communication in a fast-moving, high-volume environment.
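As a minimal sketch of the "pipeline plus data-quality gate" pattern these responsibilities describe (the DAG id, schedule, and threshold are hypothetical, not Bigabid's actual setup):
```python
# Hypothetical Airflow DAG: a load task followed by a simple data-quality gate.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    # Placeholder for the real ingestion/transformation step; the returned row
    # count is stored as an XCom for the downstream quality gate.
    return 42_000

def check_row_count(ti, **context):
    loaded = ti.xcom_pull(task_ids="extract_and_load")
    if loaded is None or loaded < 1_000:      # hypothetical threshold
        raise ValueError(f"Suspiciously low row count: {loaded}")

with DAG(
    dag_id="events_quality_checked_load",     # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                        # Airflow 2.4+ scheduling argument
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    quality_gate = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
    load >> quality_gate
```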
Required Competence and Skills:
- 3+ years of experience in data engineering/data operations, with at least 1 year of team or project leadership
- Proficient in Python for scripting and automation (clean, logical code, not full-stack development)
- Strong experience with Airflow (hands-on, not through abstraction layers)
- Solid understanding of SQL and NoSQL querying, schema design, and cost-efficient querying (e.g., Presto, document DBs)
- Exposure to tools like Spark, AWS, or similar is a big plus
- Comfortable managing incident escalation, prioritizing urgent fixes, and guiding teams toward solutions
- Analytical, communicative, and excited to work with smart, mission-driven people
Nice-to-Have Skills:
- Previous experience as a NOC or DevOps engineer
- Familiarity with PySpark.
· 31 views · 0 applications · 1d
Middle Data Support Engineer (Python, SQL)
Ukraine · 3 years of experience · Upper-Intermediate
N-iX is looking for a Middle Data Support Engineer to join our team. Our customer is the leading school transportation solutions provider in North America. Every day, the company completes 5 million student journeys, moving more passengers than all U.S. airlines combined, and delivers reliable, quality services, including full-service transportation and management, special-needs transportation, route optimization and scheduling, maintenance, and charter services for 1,100 school district contracts.
Responsibilities:
- Provide support in production and non-production environments (Azure cloud)
- Install, configure and provide day-to-day support after implementation, including off hours as needed;
- Troubleshoot defects and errors and resolve arising problems;
- Plan, test, and implement server upgrades, maintenance fixes, and vendor-supplied patches;
- Help in resolving incidents;
- Monitor ETL jobs;
- Perform small enhancements (Azure/SQL).
Requirements:
- Proven knowledge and 3+ years experience in Python
- Proficiency in RDBMS systems (MS SQL experience is a plus);
- Experience with Azure cloud provider service;
- Understanding of Azure Data Lake / Storage Accounts;
- Experience in creation and managing data pipelines in Azure Data Factory;
- Upper-Intermediate/Advanced English level.
Nice to have:
- Experience with administration of Windows Server 2012 and higher;
- Experience with AWS, Snowflake, Power BI;
- Experience with technical support;
- Experience in .NET.
We offer*:
- Flexible working format - remote, office-based or flexible
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team buildings
- Other location-specific benefits
*not applicable for freelancers
· 22 views · 1 application · 1d
Senior Data Engineer
Full Remote · Poland · 5 years of experience · Upper-Intermediate
As a Senior/Tech Lead Data Engineer, you will play a pivotal role in designing, implementing, and optimizing data platforms for our clients. Your primary responsibilities will revolve around data modeling, ETL development, and platform optimization, leveraging technologies such as EMR/Glue, Airflow, and Spark, using Python and various cloud-based solutions.
Key Responsibilities:
- Design, develop, and maintain ETL pipelines for ingesting and transforming data from diverse sources.
- Collaborate with cross-functional teams to ensure seamless deployment and integration of data solutions.
- Lead efforts in performance tuning and query optimization to enhance data processing efficiency.
- Provide expertise in data modeling and database design to ensure scalability and reliability of data platforms.
- Contribute to the development of best practices and standards for data engineering processes.
- Stay updated on emerging technologies and trends in the data engineering landscape.
Required Skills and Qualifications:
- Bachelor's Degree in Computer Science or related field.
- Minimum of 5 years of experience in tech lead data engineering or architecture roles.
- Proficiency in Python and PySpark for ETL development and data processing.
- At least 2 years of experience with the AWS cloud
- Extensive experience with cloud-based data platforms, particularly EMR.
- Must have knowledge of Spark.
- Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
- Leadership experience, with a proven track record of leading data engineering teams.
Benefits
- 20 days of paid vacation, 5 sick days
- National holidays observed
- Company-provided laptop
· 72 views · 3 applications · 1d
Middle Data Engineer (Prom.ua)
Full Remote · Ukraine · Product · 2 years of experience · Ukrainian Product 🇺🇦
Prom.ua is the largest marketplace in Ukraine, where more than 200 million products from tens of thousands of entrepreneurs across the country are sold.
On Prom.ua:
- every buyer can find everything they need at the best price: from a toothbrush to a cultivator for the garden.
- every entrepreneur can sell goods in the marketplace catalog, on a site built on the Prom platform, and in the "Prom покупки" mobile app.
Prom.ua in numbers:
- 4.8 million people visit the marketplace every day
- more than 60 thousand companies work on the marketplace
- the catalog holds 200 million products
About the team:
Data Analytics, Data Engineers, Product Analytics
What we use in our work:
- A Data Lakehouse with 200+ TB of data; data is stored in HDFS and S3, and we use the Apache Iceberg format for tabular data.
- 30-40 people interact with the storage directly, and hundreds consume its data (counting only internal consumers).
- We process data with Spark and Trino; orchestration runs in Airflow.
- Tools and services are mostly deployed to Kubernetes, sometimes to OpenStack.
- We use GitLab as the code repository and for CI/CD.
- For the data catalog, documentation, and data monitoring we use OpenMetadata; for tool/service documentation we use Material for MkDocs.
What matters for this role:
- A high level of proficiency in SQL and Python.
- Deep experience with code-based orchestration tools, preferably Airflow, Prefect, or Dagster. For example, with Airflow, understanding what XCom, Pool, Hook, Sensor, Operator, TaskGroup, etc. are.
- Experience writing complex, idempotent pipelines for processing large volumes of data.
- Deep experience with at least two of the following data processing tools: (Spark/Databricks/Snowpark), (Trino/Presto/Athena), (Kafka/Kinesis/Flink), (Snowflake/BigQuery/Redshift).
- Understanding the differences between a Data Warehouse, a Data Lake, and a Data Lakehouse.
- Understanding how table formats work; ideally, hands-on experience with Iceberg, Delta Lake, or Hudi.
- Knowledge of data organization and modeling approaches: Medallion, Kimball, Inmon.
- Experience with Docker, Kubernetes, and GitLab CI/CD.
- Thinking systemically and broadly, with future prospects in mind; focusing on the goal rather than the task.
- Aiming for high-quality, long-term results; in most cases quality matters more than speed.
- Being open to change; the product changes very actively.
- Understanding that communication with technical and non-technical specialists is part of the job.
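As one example of what an idempotent pipeline step can look like with Spark and an Iceberg table (the catalog, table, and path names are hypothetical), a daily load can overwrite the affected partition instead of appending, so reruns do not duplicate data:
```python
# Hypothetical sketch: an idempotent daily load into an Iceberg table with Spark.
from pyspark.sql import SparkSession, functions as F

def load_orders_for_date(ds: str) -> None:
    """Rerunnable daily load: replaces the dt=ds partition of an Iceberg table."""
    spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()
    df = (
        spark.read.parquet(f"s3a://raw-bucket/orders/dt={ds}/")   # hypothetical source path
        .withColumn("dt", F.lit(ds))
    )
    # overwritePartitions() replaces only the partitions present in `df`, so a rerun
    # for the same date does not produce duplicates (the target Iceberg table is
    # assumed to exist and to be partitioned by `dt`).
    df.writeTo("lakehouse.analytics.orders").overwritePartitions()
```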
Possible tasks:
- Supporting and developing the Data Lakehouse for Prom+ projects, which includes:
- Designing the storage and populating it with the data the project teams need.
- Writing pipelines for processing/updating data in the storage.
- Refactoring and modifying existing pipelines.
- Writing tests and monitoring data quality.
- Maintaining the storage documentation.
- Supporting part of the data infrastructure: Airflow, Trino, OpenMetadata (configuration, updates, and deployment).
- Helping and advising the Data Analytics and Data Science teams on obtaining and processing data.
- Code review of pipelines from the Data Analytics and Data Engineering teams.
Selection stages:
- Introductory call
- Technical interview
- Final meeting (optional)
About working at EVO:
- Social package: official employment, 24 days of paid vacation per year, and unlimited sick leave, so you can rest and take care of your health.
- Health care: we cover medical insurance and offer support from a corporate psychologist, because we believe that caring for mental health is just as important as caring for physical health.
- Flexible work format: remote or in the office. You can work remotely or visit our cozy office in Kyiv, which is fully energy-independent and equipped with everything you need.
- Volunteer community: we regularly hold charity auctions, raise money for reconnaissance drones, and support employees' volunteer initiatives.
- We provide equal opportunities for everyone, so we do not tolerate discrimination on any grounds. We are also open to working with veterans and ready to support them on their path to new professional achievements.
- Opportunities for learning and professional growth. Honesty and openness in all communications. Constructive feedback on your work. Support from the leader and the team.
· 21 views · 1 application · 1d
Senior Data Engineer/Lead Data Engineer (Healthcare domain)
Full Remote · Europe except Ukraine · 5 years of experience · Upper-Intermediate
We are looking for a Senior Data Engineer with extensive experience in data engineering who is passionate about making an impact. Join our team, where you will have the opportunity to drive innovation, improve solutions, and help us reach new heights!
If you're ready to take your expertise to the next level and contribute significantly to the success of our projects, submit your resume now.
Our client is a leading medical technology company. The portfolio of products, services, and solutions is central to clinical decision-making and treatment pathways. Patient-centered innovation has always been at the core of the company, which is committed to improving patient outcomes and experiences, no matter where they live or what challenges they face. The company is innovating sustainably to provide healthcare for everyone, everywhere.
The Projectβs mission is to enable healthcare providers to increase their value by equipping them with innovative technologies and services in diagnostic and therapeutic imaging, laboratory diagnostics, molecular medicine, and digital health and enterprise services.
Responsibilities:
- Work closely with the client (PO) as well as other team members to clarify tech requirements and expectations
- Contribute to the design, development, and optimization of squad-specific data architecture and pipelines adhering to defined ETL and Data Lake principles
- Implement architectures using Azure Cloud platforms (Data Factory, Databricks, Event Hub)
- Discover, understand, and organize disparate data sources, structuring them into clean data models with clear, understandable schemas
- Evaluate new tools for analytical data engineering or data science and suggest improvements
- Contribute to training plans to improve analytical data engineering skills, standards, and processes
Requirements:
- Solid experience in data engineering and cloud computing services, specifically in the areas of data and analytics (Azure preferred)
- Strong conceptual knowledge of data analytics fundamentals, including dimensional modeling, ETL, reporting tools, data governance, data warehousing, and handling both structured and unstructured data
- Expertise in SQL and at least one programming language (Python/Scala)
- Excellent communication skills and fluency in business English
- Familiarity with Big Data DB technologies such as Snowflake, BigQuery, etc. (Snowflake preferred)
- Experience with database development and data modeling, ideally with Databricks/Spark
· 101 views · 25 applications · 2d
Middle Python / Data Engineer
Part-time · Full Remote · Worldwide · 2 years of experience · Upper-Intermediate
Involvement: ~15-20 hours/week
Start Date: ASAP
Location: Remote
Client: USA-based
Project: Legal IT - AI-powered legal advisory platform
About the Project
Join a growing team behind Legal IT, an intelligent legal advisory platform that simplifies legal support for businesses. The platform features:
- A robust contract library
- AI-assisted document generation & guidance
- Interactive legal questionnaires
- A dynamic legal blog with curated insights
We're building out advanced AI-driven proofs of concept (PoCs) and are looking for a strong Python/Data Engineer to support the backend logic and data pipelines powering these tools.
Core Responsibility
- Collaborate directly with the AI Architect to develop and iterate on proof-of-concept features with ongoing development
Being a part of 3asoft means having:
- High level of flexibility and freedom
- p2p relationship with worldwide customers
- Competitive compensation paid in USD
- Fully remote working