Jobs Data Engineer

  • · 54 views · 2 applications · 10d

    Junior Snowflake Data Engineer

    Full Remote · Ukraine · 2 years of experience · English - B2

    The project is for one of the world's best-known science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

    We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools such as DBT, Python, visualization tools such as Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.
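
    For orientation, here is a hedged sketch of the kind of Snowflake work described above (loading a staged Parquet file and applying an RBAC grant) using the snowflake-connector-python client. The account, role, stage, and table names are placeholders, not details from this posting.

    import snowflake.connector  # pip install snowflake-connector-python

    # Placeholder credentials and identifiers; in practice these come from a secrets manager.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",
        warehouse="TRANSFORM_WH",
        database="ANALYTICS",
        schema="STAGING",
        role="ETL_ROLE",
    )
    try:
        cur = conn.cursor()
        # Load a staged Parquet file into a raw table (stage and table are hypothetical).
        cur.execute("""
            COPY INTO STAGING.RAW_ORDERS
            FROM @RAW_STAGE/orders/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
        # A simple transformation into a reporting table.
        cur.execute("""
            CREATE OR REPLACE TABLE ANALYTICS.MART.ORDERS_DAILY AS
            SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
            FROM STAGING.RAW_ORDERS
            GROUP BY order_date
        """)
        # Role-based access control: grant read access to an analyst role.
        cur.execute("GRANT SELECT ON TABLE ANALYTICS.MART.ORDERS_DAILY TO ROLE ANALYST_ROLE")
    finally:
        conn.close()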

    • Responsibilities:

      • Apply in-depth knowledge of Snowflake's data warehousing capabilities.
      • Understand Snowflake's virtual warehouse architecture and how to optimize performance and cost.
      • Use Snowflake's data sharing and integration features for seamless collaboration.
      • Develop and optimize complex SQL scripts, stored procedures, and data transformations.
      • Work closely with data analysts, architects, and business teams to understand requirements and deliver reliable data solutions.
      • Implement and maintain data models, dimensional modeling for data warehousing, data marts, and star/snowflake schemas to support reporting and analytics.
      • Integrate data from various sources, including APIs, flat files, relational databases, and cloud services.
      • Ensure data quality, data governance, and compliance standards are met.
      • Monitor and troubleshoot performance issues, errors, and pipeline failures in Snowflake and associated tools.
      • Participate in code reviews, testing, and deployment of data solutions in development and production environments.

    • Mandatory Skills Description:

      • 2+ years of experience.
      • Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
      • Ability to write complex SQL queries, stored procedures, and user-defined functions.
      • Skills in optimizing SQL queries for performance and efficiency.
      • Experience with ETL/ELT tools and techniques, including Snowpipe, AWS Glue, Openflow, Fivetran, or similar tools for real-time and periodic data processing.
      • Proficiency in transforming data within Snowflake using SQL, with Python being a plus.
      • Strong understanding of data security, compliance, and governance.
      • Experience with DBT for database object modeling and provisioning.
      • Experience with version control tools, particularly Azure DevOps.
      • Good documentation and coaching practices.

  • · 69 views · 4 applications · 10d

    Data Engineer

    Office Work · Ukraine (Kyiv) · Product · 5 years of experience · English - B2

    About Us:

    Atto Trading, a dynamic quantitative trading firm founded in 2010 and a leader in global high-frequency strategies, is looking for a Data Engineer to join our team.

    We are expanding an international, diverse team with experts in trading, statistics, engineering, and technology. Our disciplined approach, combined with rapid market feedback, allows us to quickly turn ideas into profit. Our environment of learning and collaboration allows us to solve some of the world's hardest problems, together. As a small firm, we remain nimble and hold ourselves to the highest standards of integrity, ingenuity, and effort.

    Role Highlights:
    We are seeking an experienced Senior Data Engineer to design, build, and maintain our comprehensive Data Lake for a fast-growing number of research and production datasets. This role combines hardware and platform infrastructure expertise with data engineering excellence to support our rapidly growing data assets (~200TB current, scaling ~100TB/year). 
     

    Responsibilities:

    • Architect and manage high-performance, scalable on-premise data storage systems optimized for large-scale data access and analytics workloads
    • Configure and maintain compute clusters for distributed data processing
    • Plan capacity and scalability roadmaps to accommodate 100TB+ annual data growth
    • Design and implement efficient monitoring and alerting systems to forecast growth trends and proactively react to critical states
    • Design, create, automate, and maintain various data pipelines
    • Enhance existing and set up new "data checks" and alerts to determine when the data is "bad"
    • Design and implement a comprehensive on-premise Data Lake system connected to a VAST storage solution for normalized market data across:
      • US Equities, US Futures, and SIP feeds
      • Other market data sources that will be further added
      • Security Definition data for various markets
      • Various private column data
    • Build and operate end‑to‑end data pipelines and SLA/SLO monitoring to ensure data quality, completeness, and governance
    • Analyze existing data models, usage patterns, and access frequencies to identify bottlenecks and optimization opportunities
    • Develop metadata and catalog layers for efficient data discovery and self‑service access
    • Design and deploy event‑driven architectures for near real‑time market data processing and delivery
    • Orchestrate ETL/ELT data pipelines using tools like Prefect (or Airflow), ensuring robustness, observability, and clear operational ownership
    • Ensure fault tolerance, scalability, and high availability across existing systems
    • Partner with traders, quantitative researchers, and other stakeholders to understand use cases and continuously improve the usability, performance, and reliability of the Data Lake  


    Requirements:

    • 5+ years of experience in data engineering or data platform roles
    • Proven experience with large‑scale data infrastructure (hundreds of TBs of data, high‑throughput pipelines)
    • Strong understanding of market data formats and financial data structures (e.g., trades, quotes, order books, corporate actions)
    • Experience designing and modernizing data infrastructure within on-premise solutions
    • Bachelor's degree in Computer Science, Engineering, or a related field required; Master's degree preferred (or equivalent practical experience)


    Tech Skills:

    • Data Engineering - Spark, Iceberg (or similar table formats), Trino/Presto, Parquet optimization
    • ETL pipelines - Prefect/Airflow or similar DAG-oriented tools (a short orchestration sketch follows this list)
    • Infrastructure - High-performance networking and compute
    • Storage Systems - High-performance distributed storage, NAS/SAN, object storage
    • Networking - Low-latency networking (awareness of DPDK and kernel-bypass technologies); data center infrastructure basics
    • Programming - Python (production‑grade), SQL, building APIs (e.g., FastAPI)
    • Data Analysis - Advanced SQL, Tableau (or similar BI tools), data profiling tools
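
    As a minimal illustration of the DAG-oriented orchestration named above, here is a hedged Prefect 2 sketch of a small ingest-and-validate flow. The task names, dataset, and path are hypothetical and not taken from this posting.

    from prefect import flow, task, get_run_logger

    @task(retries=2, retry_delay_seconds=60)
    def ingest_trades(trade_date: str) -> str:
        # Placeholder: pull raw market data for one date and return the landing path.
        return f"/datalake/raw/trades/{trade_date}.parquet"

    @task
    def validate(path: str) -> None:
        # Placeholder "data check": a real task would fail the run on empty or malformed files.
        logger = get_run_logger()
        logger.info("Validating %s", path)

    @flow(name="daily-trades-pipeline")
    def daily_trades(trade_date: str = "2024-01-02") -> None:
        path = ingest_trades(trade_date)
        validate(path)

    if __name__ == "__main__":
        daily_trades()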

    Nice to have:

    • Background in high-frequency trading (HFT), quantitative finance, or financial services

    What we offer:

    • Competitive compensation package
    • Performance-based bonus opportunities
    • Healthcare & Sports/gym budget
    • Mental health support, including access to therapy
    • Paid time off (25 days)
    • Relocation support (where applicable)
    • International team meet-ups
    • Learning and development support, including courses and certifications
    • Access to professional tools, software, and resources
    • Fully equipped workstations with high-quality hardware
    • Modern office with paid lunches
       

    Our motivation:

    We are a company committed to staying at the forefront of technology. Our team is passionate about continual learning and improvement. With no external investors or customers, we are the primary users of the products we create, giving you the opportunity to make a real impact on our company's growth.

     

    Ready to advance your career? Join our innovative team and help shape the future of trading on a global scale. Apply now and let's create the future together!

  • · 36 views · 0 applications · 10d

    Infrastructure Developer (C++), Vinnytsia HUB, Ukraine

    Hybrid Remote · Ukraine · Product · 5 years of experience · English - B2

    An engineering and technology company that creates cutting-edge robotic, autonomous, and mission-critical systems used in real-world conditions around the world. Teams work on complex hardware and software solutions, from system architecture and electronics to high-performance real-time software.
     

    The company's employees work in international engineering hubs, where local talent interacts with teams and partners from different countries, sees the direct impact of their work, and participates in global projects. This opens up opportunities for professional growth, development of expertise in robotics and autonomous systems, and participation in the creation of innovative solutions that shape the future of high-tech industries.

    We are looking for an Infrastructure Developer to take ownership of the core system infrastructure that ensures reliable, low-latency, real-time operation. You will work with Linux, embedded platforms, and video systems, collaborating with backend, frontend, and hardware teams to maintain system stability, performance, and scalability throughout the full software lifecycle. This is a unique opportunity to work on complex, real-world systems at the intersection of robotics, autonomy, and high-performance software engineering.


    KEY RESPONSIBILITIES
    • Develop, maintain, and optimize infrastructure and low-level components for embedded systems.
    • Develop and maintain video pipelines for real-time and low-latency systems.
    • Build, customize, and maintain Linux kernels and BSPs.
    • Develop and maintain Docker-based build and deployment environments for embedded systems.
    • Optimize system performance, latency, reliability, and resource usage.
    • Debug, profile, and maintain complex production and embedded systems.
    • Conduct code reviews and ensure high code quality and adherence to best practices.
    • Collaborate with cross-disciplinary teams to deliver robust system solutions.

    BASIC QUALIFICATIONS

    • At least 5 years of hands-on C++ development experience.
    • Strong experience working in Linux-based environments.
    • Experience with Docker and containerized deployments.
    • Experience with real-time or low-latency systems.
    • Strong debugging, profiling, and performance optimization skills.
    • Experience with Git and modern development tools.
    • Ability to work independently and take ownership of infrastructure components.

    PREFERRED SKILLS AND EXPERIENCE

    • Experience with video streaming protocols (e.g., RTP, RTSP, WebRTC).
    • Experience with GStreamer.
    • Familiarity with GPU / hardware-accelerated video pipelines.
    • Background in robotics or autonomous systems.
    • Experience with mission-critical or safety-critical environments.

    WHAT WE OFFER
    • Experience in a fast-growing, highly innovative global industry.
    • Excellent work conditions and an open-minded team.
    • Corporate events, regular internal activities, and other benefits.
    • Professional development opportunities and training.

  • · 67 views · 0 applications · 10d

    Sales Executive (Google Cloud + Google Workspace)

    Full Remote · Czechia · Product · 2 years of experience · English - B2

    Cloudfresh is a Global Google Cloud Premier Partner, Zendesk Premier Partner, Asana Solutions Partner, GitLab Select Partner, Hubspot Platinum Partner, Okta Activate Partner, and Microsoft Partner.

    Since 2017, we've been specializing in the implementation, migration, integration, audit, administration, support, and training for top-tier cloud solutions. Our products focus on cutting-edge cloud computing, advanced location and mapping, seamless collaboration from anywhere, unparalleled customer service, and innovative DevSecOps.

    We are seeking a dynamic Sales Executive to lead our sales efforts for GCP and GWS solutions across the EMEA and CEE regions. The ideal candidate will be a high-performing A-player with experience in SaaS sales, adept at navigating complex sales environments, and driven to exceed targets through strategic sales initiatives.

    Requirements:

    • Fluency in English and native-level Czech are essential;
    • At least 2 years of proven sales experience in SaaS/IaaS fields, with a documented history of achieving and exceeding sales targets, particularly in enterprise sales;
    • Sales experience with GCP and/or GWS specifically;
    • Sales or technical certifications related to Cloud Solutions are advantageous;
    • Experience in expanding new markets with outbound activities;
    • Excellent communication, negotiation, and strategic planning abilities;
    • Proficient in managing CRM systems and understanding their strategic importance in sales and customer relationship management.

    Responsibilities:

    • Develop and execute sales strategies for GCP and GWS solutions, targeting enterprise clients within the Cloud markets across EMEA and CEE;
    • Identify and penetrate new enterprise market segments, leveraging GCP and GWS to improve client outcomes;
    • Conduct high-level negotiations and presentations with major companies across Europe, focusing on the strategic benefits of adopting GCP and GWS solutions;
    • Work closely with marketing and business development teams to align sales strategies with broader company goals;
    • Continuously assess the competitive landscape and customer needs, adapting sales strategies to meet market demands and drive revenue growth.

    Work conditions:

    • Competitive Salary & Transparent Motivation: Receive a competitive base salary with commission on sales and performance-based bonuses, providing clear financial rewards for your success.
    • Flexible Work Format: Work remotely with flexible hours, allowing you to balance your professional and personal life efficiently.
    • Freedom to Innovate: Utilize multiple channels and approaches for sales, allowing you the freedom to find the best strategies for success.
    • Training with Leading Cloud Products: Access in-depth training on cutting-edge cloud solutions, enhancing your expertise and equipping you with the tools to succeed in an ever-evolving industry.
    • International Collaboration: Work alongside A-players and seasoned professionals in the cloud industry. Expand your expertise by engaging with international markets across the EMEA and CEE regions.
    • Vibrant Team Environment: Be part of an innovative, dynamic team that fosters both personal and professional growth, creating opportunities for you to advance in your career.
    • When applying to this position, you consent to the processing of your personal data by CLOUDFRESH for the purposes necessary to conduct the recruitment process, in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 (GDPR).
    • Additionally, you agree that CLOUDFRESH may process your personal data for future recruitment processes.
  • · 39 views · 5 applications · 11d

    Principal Analytics Developer

    Full Remote · EU · 3 years of experience · English - B2

    The Principal Analytics Developer is a new role that will support the newly created Product Data Domain teams. The role requires strong skills in dimensional modelling and in conforming and integrating data from multiple sources, as well as experience in leading strong analytics engineering teams.
    Responsibilities:
     

    • Planning workloads and delegating tasks in an agile environment
    • Assisting with the daily operation of the organisation, including support and incidents
    • Able to provide feedback to team members, including constructive areas for development
    • Leading on the design, implementation and maintenance of dimensional data models that promote a self-service approach to data consumption. This includes ensuring that data quality within the data warehouse is maintained throughout the data lifecycle.
    • Define best practices in dimensional data modelling and database design and ensure standards are adhered to across the team.
    • Mentoring, coaching and supporting other team members in developing data modelling skills through knowledge transfer.
    • Automating data pipelines using proprietary technology & Airflow.
    • Using your expert knowledge of the company products and their features to inform the design and development of data products and upskilling the team through this knowledge.
    • Developing ways of working between product data domains and other data teams within the product group.
    • The creation of processes for data product development, ensuring these processes are documented and advocating their use throughout the organisation.
    • Supporting analytics, data science and other colleagues outside the digital product area in managing projects and fielding queries.
    • Ability to build and maintain strong working relationships where you might, as a specialist, have to manage the expectations of more senior colleagues.
    • Working across mobile, web, television, and voice platforms, supporting Product Managers and Business Analysts, and working closely with Software & Data Engineers.

       

    Requirements:
     

    • Extensive (5+ years) experience in managing teams building data warehouses / analytics from a diverse set of data sources (including event streams, various forms of batch processing)
    • At least 5 years' experience in a Data Analyst, Data Modelling, Data Engineering or Analytics Engineering role, preferably in digital products, with an interest in data modelling and ETL processes
    • Proven experience in dimensional modelling of complex data at the conceptual, logical and physical layers.
    • Experience of designing star schemas
    • Excellent SQL skills for extracting and manipulating data. Experience of using tools such as DBT, Looker and Airflow would be an advantage.
    • Good knowledge of analytical database systems (Redshift, Snowflake, BigQuery).
    • Comfortable working alongside cross-functional teams interacting with Product Managers, Engineers, Data Scientists, and Analysts.
    • Knowledge of digital products and their components, as well as what metrics affect their performance.
    • An understanding of how digital products use experimentation.
    • Some experience coding in R or Python.
    • A good understanding of on-demand audio and video media products, with a knowledge of key competitors.
       

    Will be a plus:

     

    • Ability to listen to others' ideas and build on them
    • Ability to clearly communicate to both technical and non-technical audiences.
    • Ability to collaborate effectively, working alongside other team members towards the team's goals, and enabling others to succeed, where possible.
    • Ability to prioritise. A structured approach and the ability to bring others on the journey.
    • Strong attention to detail
       
  • · 114 views · 12 applications · 11d

    Senior Solana Engineer (Smart Wallet)

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    Senior Solana Developer - CoFo Neobank

    About the Project

    We're building CoFo Neobank, the first AI-first smart wallet on Solana that brings the banking app experience (like Revolut, Robinhood) into the on-chain environment.

    Our goal is to abstract blockchain complexity. We're building an architecture where every user gets a Smart Account (a programmable account, not a simple EOA) that supports multi-factor authentication (2/3 Multisig), access recovery, spending limits, and native integration of complex financial products (Staking, Yield, Perps, RWA).

    Core Responsibilities

    • Smart Account Architecture Development: Design and write custom Rust programs (Anchor) for managing user accounts. Implement multisig logic (Device Key + 2FA Key), key rotation, and access recovery (Social Recovery).

    • DeFi Composability (Integrations): Write adapters and CPI (Cross-Program Invocations) calls to integrate external protocols directly into the smart account:

    • Swap: Aggregation through Jupiter
    • Yield & Lending: Integration with Kamino, MarginFi, Meteora
    • Perps: Integration with Drift Protocol

    • Security and Access Control: Implement spending limits system, protocol whitelisting, and replay attack protection.

    • Portfolio Logic: Develop on-chain structures for position tracking (storing data about deposits, debts, PnL) for fast frontend/AI reading.

    • Gas Abstraction: Implement mechanisms for paying fees on behalf of users (Fee Bundling / Gas Tank).

    Requirements (Hard Skills)

    • Expert Rust & Anchor: Deep understanding of Solana Sealevel runtime, memory management, PDAs, and Compute Units (CU) limitations.

    • Account Abstraction Experience: Understanding of how to build smart contract wallets that differ from standard system accounts.

    • DeFi Integration Experience: You've already worked with SDKs or IDLs of major Solana protocols (Jupiter, Kamino, Drift, etc.). You understand what CPI is and how to safely call external code.

    • Cryptography: Understanding of signature operations (Ed25519), transaction verification, and building secure multisig schemes.

    • Security Mindset: Experience with audits, knowledge of attack vectors on Solana (re-entrancy, account substitution, ownership checks).

    Nice to Have

    • Experience with Privy (for authentication)
    • Understanding of cross-chain bridges (Wormhole/LayerZero) for implementing deposits from other networks
    • Experience with tokenized assets (RWA) and Token-2022 standard

    Tech Stack

    • Solana (Rust, Anchor Framework)
    • Integrations: Jupiter, Kamino, Drift, MarginFi
    • Infrastructure: Helius, Privy

    We Offer

    • Work on a product that's changing UX in DeFi
    • Complex architectural challenges (not just another token fork, but sophisticated wallet infrastructure)
    • Competitive compensation in stablecoins/fiat + project options/tokens

  • · 36 views · 2 applications · 11d

    Data Streaming Engineer

    Full Remote · Worldwide · 3 years of experience · English - None

    N.B.! Location: remote from Latvia/Lithuania; possible relocation (the company provides support).
    JD:

    βœ”οΈο»ΏClient: Media group Belgium.

    βœ”οΈSkills Required: AWS, Kafka, Spark, Python (FastAPI), SQL, Terraform.

    βœ”οΈYou have:

    ● high standards for the quality of the work you deliver

    ● a degree in computer science, software engineering, a related field, or relevant prior experience

    ● 3+ years of experience across the software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations

    ● affinity with data analysis

    ● a natural interest in digital media products

    ● AWS certified on an Associate level or higher, or willing to get certified

    βœ”οΈyou have experience in:

    ● developing applications in a Kubernetes environment

    ● developing batch jobs in Apache Spark (pyspark or Scala)

    ● developing streaming applications for Apache Kafka in Python

    ● working with CI/CD pipelines

    ● writing Infrastructure as Code with Terraform
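
    As a rough illustration of the Kafka-in-Python item above, here is a hedged consumer sketch using the confluent-kafka client; the broker address, topic, and consumer group are hypothetical placeholders.

    import json
    from confluent_kafka import Consumer  # pip install confluent-kafka

    # Placeholder configuration; real values would come from the environment or Terraform.
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "media-events-consumer",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["page-views"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")
                continue
            event = json.loads(msg.value())
            # Placeholder processing step: a real pipeline would feed Spark or a downstream sink.
            print(event.get("url"))
    finally:
        consumer.close()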

    βœ”οΈResponsibilities

    ● Maintain and extend our back-end.

    ● Support operational excellence through practices like code review and pair programming.

    ● The entire team is responsible for the operations of our services. This includes actively monitoring different applications and their infrastructure as well as intervening to solve operational problems whenever they arise.

  • · 58 views · 27 applications · 11d

    Python Data Engineer

    Full Remote · Worldwide · 5 years of experience · English - B2

    Core Responsibilities

    • Data Pipeline Management: Develop, optimize, and maintain scalable data pipelines to ensure high-quality data flow.

    • API Development: Build and maintain high-performance backend APIs using FastAPI (a minimal sketch follows this list).

    • System Reliability: Proactively identify bottlenecks and improve system stability within existing infrastructures.

    • Collaboration: Work closely with cross-functional teams to integrate AWS services and workflow orchestration tools into the production environment.
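
    The FastAPI item above could look roughly like the hedged sketch below; the endpoint, model fields, and in-memory data are hypothetical and not taken from this posting.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="data-api")

    class PipelineRun(BaseModel):
        pipeline: str
        status: str

    # Placeholder in-memory store; a real service would query a database.
    RUNS = {"daily-ingest": PipelineRun(pipeline="daily-ingest", status="succeeded")}

    @app.get("/runs/{pipeline}", response_model=PipelineRun)
    def get_run(pipeline: str) -> PipelineRun:
        # Return the latest run status for a pipeline, or 404 if unknown.
        run = RUNS.get(pipeline)
        if run is None:
            raise HTTPException(status_code=404, detail="pipeline not found")
        return run

    # Run locally with: uvicorn main:app --reload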

     

    Required Qualifications

    • Experience: 3+ years of professional Python development experience.

    • Databases: Strong proficiency in both SQL and NoSQL database design and management.

    • DevOps Tools: Hands-on experience with Docker, CI/CD pipelines, and Git version control.

    • Frameworks: Proven experience building applications with FastAPI.

    • Cloud & Orchestration: Practical experience with AWS services and familiarity with Airflow (or similar workflow orchestration tools).

    • Communication: Upper-Intermediate level of English (written and spoken) for effective team collaboration.

     

    Preferred Skills (Nice to Have)

    • Experience within the Financial Domain.

    • Hands-on experience with Apache Spark and complex ETL pipelines.

    • Knowledge of container orchestration using Kubernetes.

    • Exposure to or interest in Large Language Models (LLMs) and AI integration.

  • · 44 views · 11 applications · 11d

    Senior Data Engineer

    Worldwide · Product · 4 years of experience · English - C1

    How about building a high-load data architecture that handles millions of transactions daily?
    We're looking for a Senior Data Engineer with the potential to grow into a Data Lead role,
    to design scalable pipelines from scratch.
    An international iGaming company with a data-first mindset.
    Remote, top salary.

     

    Responsibilities

    – Build and run scalable pipelines (batch + streaming) that power gameplay, wallet, and promo analytics.

    – Model data for decisions (star schemas, marts) that Product, BI, and Finance use daily.

    – Make things reliable: tests, lineage, alerts, SLAs. Fewer surprises, faster fixes.

    – Optimize ETL/ELT for speed and cost (partitioning, clustering, late arrivals, idempotency).

    – Keep promo data clean and compliant (PII, GDPR, access controls).

    – Partner with POs and analysts on bets/wins/turnover KPIs, experiment readouts, and ROI.

    – Evaluate tools, migrate or deprecate with clear trade-offs and docs.

    – Handle prod issues without drama, then prevent the next one.

     

     

    Requirements

    – 4+ years building production data systems. You've shipped, broken, and fixed pipelines at scale.

    – SQL that sings and Python you're proud of.

    – Real experience with OLAP and BI (Power BI / Tableau / Redash; impact > logo).

    – ETL/ELT orchestration (Airflow/Prefect or similar) and CI/CD for data.

    – Strong grasp of warehouses & lakes: incremental loads, SCDs, partitioning.

    – Data quality mindset: contracts, tests, lineage, monitoring.

    – Product sense: you care about player/client impact, not just rows processed.

    ✨ Nice to Have (tell us if you've got it)

    – Kafka (or similar streaming), ClickHouse (we like it), dbt (modular ELT).

    – AWS data stack (S3, IAM, MSK/Glue/Lambda/Redshift) or equivalents.

    – Containers & orchestration (Docker/K8s), IaC (Terraform).

    – Familiarity with AI/ML data workflows (feature stores, reproducibility).

    – iGaming context: provider metrics (bets/wins/turnover), regulated markets, promo events.

     

     

    We offer

    – Fully remote (EU-friendly time zones) or Bratislava/Malta/Cyprus if you like offices.

    – Unlimited vacation + paid sick leave.

    – Quarterly performance bonuses.

    – No micromanagement. Real ownership, real impact.

    – Budget for conferences and growth.

    – Product-led culture with sharp people who care.

     

     

    🧰 Our Day-to-Day Stack (representative)
    Python, SQL, Airflow/Prefect, Kafka, ClickHouse/OLAP DBs, AWS (S3 + friends), dbt, Redash/Tableau, Docker/K8s, GitHub Actions.
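
    For a feel of the stack above, here is a hedged Airflow 2 sketch of a tiny daily ELT DAG; the task logic, table names, and schedule are placeholders, not the team's actual pipelines.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_bets(**context):
        # Placeholder extract step: pull yesterday's bets from the provider API.
        print("extracting bets for", context["ds"])

    def load_to_clickhouse(**context):
        # Placeholder load step: an idempotent upsert into a ClickHouse mart table.
        print("loading bets_daily for", context["ds"])

    with DAG(
        dag_id="bets_daily_elt",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+ argument name; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_bets", python_callable=extract_bets)
        load = PythonOperator(task_id="load_to_clickhouse", python_callable=load_to_clickhouse)
        extract >> load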

  • · 35 views · 1 application · 12d

    Middle/Senior/Lead Python Cloud Engineer (IRC280058)

    Hybrid Remote · Ukraine · 5 years of experience · English - B2

    Job Description

    • Terraform

    • AWS Platform: Working experience with AWS services - in particular serverless architectures (S3, RDS, Lambda, IAM, API Gateway, etc.) supporting API development in a microservices architecture

    • Programming Languages: Python (strong programming skills)

    • Data Formats: Experience with JSON, XML, and other relevant data formats

    • CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools

    • Scripting and automation: experience in scripting languages such as Python, PowerShell, etc.

    • Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.

    • Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)

     

     

    NICE TO HAVE

    • Strongly Preferred: Infrastructure as Code: Experience with Terraform and CloudFormation - proven ability to write and manage Infrastructure as Code (IaC)
    • Documentation: Experience with Markdown and, in particular, Antora for creating technical documentation
    • Experience working with Healthcare Data, including HL7v2, FHIR, and DICOM
    • FHIR and/or HL7 Certifications
    • Building software classified as Software as a Medical Device (SaMD)
    • Understanding of EHR technologies such as EPIC, Cerner, etc.
    • Experience in implementing enterprise-grade cyber security & privacy by design into software products
    • Experience working in Digital Health software
    • Experience developing global applications
    • Strong understanding of SDLC – Waterfall & Agile methodologies
    • Software estimation
    • Experience leading software development teams onshore and offshore

    Job Responsibilities

    • Develops, documents, and configures systems specifications that conform to defined architecture standards, address business requirements and processes in the cloud development & engineering.
    • Involved in planning of system and development deployment, as well as responsible for meeting compliance and security standards.
    • API development using AWS services
    • Experience with Infrastructure as Code (IaC)
    • Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
    • May document testing and maintenance of system updates, modifications, and configurations.
    • May act as a liaison with key technology vendor technologists or other business functions.
    • Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
    • Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or a customization solution would be required.
    • Test the quality of a product and its ability to perform a task or solve a problem.
    • Perform basic maintenance and performance optimization procedures in each of the primary operating systems.
    • Ability to document detailed technical system specifications based on business system requirements
    • Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.)

     

    Department/Project Description

    The Digital Health organization is a technology team that focuses on next-generation Digital Health capabilities, which deliver on the Medicine mission and vision to deliver Insight Driven Care. This role will operate within the Digital Health Applications & Interoperability subgroup of the broader Digital Health team, focused on patient engagement, care coordination, AI, healthcare analytics & interoperability amongst other advanced technologies which enhance our product portfolio with new services, while improving clinical & patient experiences.

     

    Authorization and Authentication platform & services for Digital Health

     

    Secure cloud platform for storing and managing medical images (DICOM compliant). Leverages AWS for cost-effective storage and access, integrates with existing systems (EHR, PACS), and offers a customizable user interface.

  • · 50 views · 9 applications · 12d

    Data Engineer

    Ukraine · 4 years of experience · English - B2

    Role Summary

    A key role in our data engineering team, working closely with the rest of the technology team to provide a first class service to both internal and external users. In this role you will be responsible for building solutions that allow us to use our data in a robust, flexible and efficient way while also maintaining the integrity of our data, much of which is of a sensitive nature.

    Role and Responsibilities

    Manages resources (internal and external) in the delivery of the product roadmap for our data asset. Key responsibilities include, but are not limited to:

    • Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes
    • Work closely with the development and product teams (both internal and external) to ensure that products meet the required specification prior to release.
    • Working closely with our technology colleagues throughout the delivery lifecycle to ensure that all data related processes are efficient and accurate
    • Providing expert assistance with design and implementation of all new products. All of our new technology stack has data at its heart.
    • Ensuring data is available for business and management reporting purposes.
    • Assist with the development and refinement of the agile process.
    • Be an advocate for best practices and continued learning
    • Strong technical understanding of a data experience
    • Ensure the ongoing maintenance of your own CPD (continuing professional development)
    • Carry out all duties in a manner that always reflects Financial Wellness Group's values and principles

    Essential Criteria

    • Extensive knowledge of using Python to build ETL and ELT products in AWS using Lambda and Batch processing (a minimal Lambda sketch follows this list).
    • A keen understanding of developing and tuning Microsoft SQL Server.
    • Exposure to development in Postgres.
    • A good understanding of CI/CD for data and the challenges inherent.
    • Ability to use Source Control Systems such as Git/Azure DevOps
    • An understanding of dimensional modelling and data warehousing methodologies and an interest in Data Lakehouse technologies.
    • An understanding of Infrastructure as Code provisioning (for example, Terraform)
    • The ability to rapidly adapt to new technologies and technical challenges.
    • The flexibility to quickly react to changing business priorities.
    • A team player, with a natural curiosity and a desire to learn new skills
    • An interest in finding the 'right way'
    • Passionate about data delivery and delivering change
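
    As a rough illustration of the Python-on-AWS ETL pattern named in the criteria above, here is a hedged AWS Lambda handler sketch using boto3; the bucket names, key layout, and transformation are hypothetical placeholders.

    import csv
    import io
    import json

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Triggered by an S3 put event: read a raw CSV, keep a few columns, write JSON lines.
        record = event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = record["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = csv.DictReader(io.StringIO(body))

        # Placeholder transformation: keep only the columns the downstream load expects.
        cleaned = [{"id": r["id"], "amount": r["amount"]} for r in rows]

        out_key = key.replace("raw/", "clean/").replace(".csv", ".jsonl")
        s3.put_object(
            Bucket="my-curated-bucket",  # placeholder target bucket
            Key=out_key,
            Body="\n".join(json.dumps(r) for r in cleaned).encode("utf-8"),
        )
        return {"rows": len(cleaned), "output_key": out_key}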

    What To Expect From Digicode?

    🌎 Work from Anywhere: From an office, home, or travel continuously if that's your thing. Everything we do is online. As long as you have the Internet and your travel nomad lifestyle doesn't affect the work process (you meet all deadlines and are present at all the meetings), you're all set.

    💼 Professional Development: We offer great career development opportunities in a growing company, international work environment, paid language classes, conference and education budget, & internal 42 Community training.

    🧘‍♂️ Work-life Balance: We provide employees with 18+ paid vacation days and paid sick leave, flexible schedule, medical insurance for employees and their children, monthly budget for things like a gym or pool membership.

    🙌 Culture of Openness: We're committed to fostering a community where everyone feels welcome, seen, and heard, with minimal bureaucracy, and a flat organization structure.

    And, also, corporate gifts, corporate celebrations, free food & snacks, play & relax rooms for those who visit the office.

    Did we catch your attention? We'd love to hear from you.

  • · 24 views · 1 application · 12d

    Senior/Lead Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    Description

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client's mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

     

    Requirements

    An AWS Data Engineer designs, develops, and maintains scalable data solutions using AWS cloud services.
    Key Responsibilities:
        • Design, build, and manage ETL (Extract, Transform, Load) pipelines using AWS services (e.g., Glue, Lambda, EMR, Redshift, S3).
        • Develop and maintain data architecture (data lakes, warehouses, databases) on AWS.
        • Implement data quality and governance solutions.
        • Automate data workflows and monitor pipeline health.
        • Ensure data security and compliance with company policies.
    Required Skills:
        • Proficiency with AWS cloud services, especially data-related offerings (S3, Glue, Redshift, Athena, EMR, Kinesis, Lambda).
        • Strong SQL and Python skills.
        • Experience with ETL tools and frameworks.
        • Familiarity with data modelling and warehousing concepts.
        • Knowledge of data security, access management, and best practices in AWS.
    Preferred Qualifications:
        • AWS certifications (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Solutions Architect).
        • Background in software engineering or data science.

    • Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion.

    Job responsibilities

    • Develops, documents, and configures systems specifications that conform to defined architecture standards, address business requirements and processes in the cloud development & engineering.
    • Involved in planning of system and development deployment, as well as responsible for meeting compliance and security standards.
    • API development using AWS services in a scalable, microservices-based architecture
    • Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
    • May document testing and maintenance of system updates, modifications, and configurations.
    • May act as a liaison with key technology vendor technologists or other business functions.
    • Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
    • Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or if a customisation solution would be required.
    • Test the quality of a product and its ability to perform a task or solve a problem.
    • Perform basic maintenance and performance optimisation procedures in each of the primary operating systems.
    • Ability to document detailed technical system specifications based on business system requirements
    • Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.)
  • · 38 views · 1 application · 12d

    Middle Data Engineer IRC285068

    Full Remote · Ukraine · 3 years of experience · English - None

    Description

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client's mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

     

    Requirements

    MUST HAVE

    AWS Platform: Working experience with AWS data technologies, including S3
    Programming Languages: Strong programming skills in Python
    Data Formats: Experience with JSON, XML and other relevant data formats
    HealthCare Interoperability Tools: Previous experience with integration engines such as InterSystems, Lyniate, Redox, Mirth Connect, etc.

    Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion.

    CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
    Scripting and automation: experience in scripting languages such as Python, PowerShell, etc.
    Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.
    Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)
    Documentation: Experience with Markdown and, in particular, Antora for creating technical documentation

     

    NICE TO HAVE
    Strongly Preferred:
    Previous Healthcare or Medical Device experience
    Other data technologies such as Snowflake, Trino/Starburst
    Experience working with Healthcare Data, including HL7v2, FHIR and DICOM
    FHIR and/or HL7 Certifications
    Building software classified as Software as a Medical Device (SaMD)
    Understanding of EHR technologies such as EPIC, Cerner, etc.
    Experience implementing enterprise-grade cyber security & privacy by design into software products
    Experience working in Digital Health software
    Experience developing global applications
    Strong understanding of SDLC – Waterfall & Agile methodologies
    Software estimation
    Experience leading software development teams onshore and offshore

     

    Job responsibilities

    – Develops, documents, and configures systems specifications that conform to defined architecture standards, address business requirements and processes in the cloud development & engineering.

    – Involved in planning of system and development deployment as well as responsible for meeting compliance and security standards.

    – API development using AWS services in a scalable, microservices based architecture

    – Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.

    – May document testing and maintenance of system updates, modifications, and configurations.

    – May act as a liaison with key technology vendor technologists or other business functions.

    – Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.

    – Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or a customization solution would be required.

    – Test the quality of a product and its ability to perform a task or solve a problem.

    – Perform basic maintenance and performance optimization procedures in each of the primary operating systems.

    – Ability to document detailed technical system specifications based on business system requirements

    – Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.)

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

    Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • · 22 views · 1 application · 12d

    Senior Data Engineer IRC284644

    Full Remote · Ukraine · 4 years of experience · English - None

    Description

    Our client is a luxury skincare and beauty brand. The brand is based in San Francisco and sells luxury skincare products worldwide.

    The client's main IT "product" is its e-commerce website, which functions as a digital platform to sell products, educate customers, and personalize experiences.

    • Runs on Salesforce Commerce Cloud (formerly Demandware), an enterprise e-commerce platform that supports online shopping, order processing, customer accounts, and product catalogs.
    • Hosted on cloud infrastructure (e.g., AWS, Cloudflare) for reliable performance and security.
    • Uses HTTPS/SSL encryption to secure data transfers.
    • Integrated marketing and analytics technologies such as Klaviyo (email & SMS automation), Google Tag Manager, and personalization tools to track behavior, optimize campaigns, and increase conversions.

    It's both a shopping platform and a digital touchpoint for customers worldwide.

     

    Requirements

    • 4+ years of experience as a Data Engineer, Analytics Engineer, or in a similar data-focused role.
    • Strong SQL skills for complex data transformations and analytics-ready datasets.
    • Hands-on experience with Python for data pipelines, automation, and data processing.
    • Experience working with cloud-based data platforms (AWS preferred).
    • Solid understanding of data warehousing concepts (fact/dimension modeling, star schemas).
    • Experience building and maintaining ETL/ELT pipelines from multiple data sources.
    • Familiarity with data quality, monitoring, and validation practices.
    • Experience handling customer, transactional, and behavioral data in a digital or e-commerce environment.
    • Ability to work with cross-functional stakeholders (Marketing, Product, Analytics, Engineering).

    Nice to have:

    • Experience with Snowflake, Redshift, or BigQuery.
    • Experience with dbt or similar data transformation frameworks.
    • Familiarity with Airflow or other orchestration tools.
    • Experience with marketing and CRM data (e.g. Klaviyo, GA4, attribution tools).
    • Exposure to A/B testing and experimentation data.
    • Understanding of privacy and compliance (GDPR, CCPA).
    • Experience in consumer, retail, or luxury brands.
    • Knowledge of event tracking and analytics instrumentation.
    • Ability to travel + visa to the USA

     

    Job responsibilities

    • Design, build, and maintain scalable data pipelines ingesting data from multiple sources:
      e-commerce platform (e.g. Salesforce Commerce Cloud), CRM/marketing tools (Klaviyo), web analytics, fulfillment and logistics systems.
    • Ensure reliable, near-real-time data ingestion for customer behavior, orders, inventory, and marketing performance.
    • Develop and optimize ETL/ELT workflows using cloud-native tools.
    • Model and maintain customer, order, product, and session-level datasets to support analytics and personalization use cases.
    • Enable a 360° customer view by unifying data from website interactions, email/SMS campaigns, purchases, and returns (a minimal sketch follows this list).
    • Support data needs for personalization tools (e.g. product recommendation quizzes, ritual finders).
    • Build datasets that power marketing attribution, funnel analysis, cohort analysis, and LTV calculations.
    • Enable data access for growth, marketing, and CRM teams to optimize campaign targeting and personalization
    • Ensure accurate tracking and validation of events, conversions, and user journeys across channels.
    • Work closely with Product, E-commerce, Marketing, Operations, and Engineering teams to translate business needs into data solutions.
    • Support experimentation initiatives (A/B testing, new digital experiences, virtual stores).
    • Act as a data partner in decision-making for growth, CX, and operational efficiency.
    • Build and manage data solutions on cloud infrastructure (e.g. AWS).
    • Optimize storage and compute costs while maintaining performance and scalability.
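
    As a rough illustration of the 360° customer view mentioned in the responsibilities above, here is a hedged pandas sketch that unifies order and email-engagement data at the customer level; the columns and sample values are illustrative placeholders.

    import pandas as pd

    # Placeholder extracts; in practice these would come from the e-commerce platform and Klaviyo.
    orders = pd.DataFrame(
        {"customer_id": [1, 1, 2], "order_total": [120.0, 80.0, 45.0]}
    )
    email_events = pd.DataFrame(
        {"customer_id": [1, 2, 2], "opened": [1, 1, 0], "clicked": [1, 0, 0]}
    )

    # Aggregate each source to one row per customer.
    order_summary = orders.groupby("customer_id").agg(
        orders=("order_total", "count"), revenue=("order_total", "sum")
    )
    email_summary = email_events.groupby("customer_id").agg(
        emails_opened=("opened", "sum"), emails_clicked=("clicked", "sum")
    )

    # Unify into a single customer-level view for analytics and personalization.
    customer_360 = order_summary.join(email_summary, how="outer").fillna(0).reset_index()
    print(customer_360)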

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

    Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • · 71 views · 10 applications · 13d

    Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B1

    WHO WE ARE

    At Bennett Data Science, we've been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We're at the top of our field because we focus on actionable technology that helps people around the world. Our deep experience and product-first attitude set us apart from other groups and it's why people who work with us tend to stay with us long term.

     

    WHY YOU SHOULD WORK WITH US

    You'll work on an important problem that improves the lives of a lot of people. You'll be at the cutting edge of innovation and get to work on fascinating problems, supporting real products, with real data. Your perks include: expert mentorship from senior staff, competitive compensation, paid leave, flexible work schedule and ability to travel internationally.

    Essential Requirements for Data Platform Engineer:

    • Architecture & Improvement: Continuously review the current architecture and implement incremental improvements, facilitating a gradual transition of production operations from Data Science to Engineering.
    • AWS Service Ownership: Own the full lifecycle (development, deployment, support, and monitoring) of client-facing AWS services (including SageMaker endpoints, Lambdas, and OpenSearch). Maintain high uptime and adherence to Service Level Agreements (SLAs).
    • ETL Operations Management: Manage all ETL processes, including the operation and maintenance of Step Functions and Batch jobs (scheduling, scaling, retry/timeout logic, failure handling, logging, and metrics).
    • Redshift Operations & Maintenance: Oversee all Redshift operations, focusing on performance optimization, access control, backup/restore readiness, cost management, and general housekeeping.
    • Performance Optimization: Post-stabilization of core monitoring and pipelines, collaborate with the Data Science team on targeted code optimizations to enhance reliability, reduce latency, and lower operational costs.
    • Security & Compliance: Implement and manage the vulnerability monitoring and remediation workflow (Snyk).
    • CI/CD Implementation: Establish and maintain robust Continuous Integration/Continuous Deployment (CI/CD) systems.
    • Infrastructure as Code (Optional): Utilize IaC principles where necessary to ensure repeatable and streamlined release processes.


    Mandatory Hard Skills:

    • AWS Core Services: Proven experience with production fundamentals (IAM, CloudWatch, and VPC networking concepts).
    • AWS Deployment: Proficiency in deploying and operating AWS SageMaker and Lambda services.
    • ETL Orchestration: Expertise in using AWS Step Functions and Batch for ETL and job orchestration.
    • Programming & Debugging: Strong command of Python for automation and troubleshooting.
    • Containerization: Competence with Docker/containers (build, run, debug).
    • Version Control & CI/CD: Experience with CI/CD practices and Git (GitHub Actions preferred).
    • Data Platform Tools: Experience with Databricks, or a demonstrated aptitude and willingness to quickly learn.

    Essential Soft Skills:

    • Accountability: Demonstrate complete autonomy and ownership over all assigned systems ("you run it, you fix it, you improve it").
    • Communication: Fluent in English; capable of clear, direct communication, especially during incidents.
    • Prioritization: A focus on delivering a minimally-supportable, deployable solution to meet deadlines, followed by optimization and cleanup.
    • Incident Management: Maintain composure under pressure and possess strong debugging and incident handling abilities.
    • Collaboration: Work effectively with the Data Science team while communicating technical trade-offs clearly and maintaining momentum.