Jobs Data Engineer

  • 134 views · 29 applications · 9d

    Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    We are looking for you!

    As we continue to design and build data-driven solutions across diverse domains, we're seeking a Data Engineer who thrives on transforming data into impactful insights. If you're passionate about crafting robust architectures, optimizing data pipelines, and enabling intelligent decision-making at scale, we'd love to have you join our global team and shape the next generation of data excellence with us.

    Contract type: Gig contract.

    Skills and experience you can bring to this role

    Qualifications & experience:

    • 3+ years of intensive experience as a Data Engineer or in a similar role, with a demonstrable track record of leading large-scale projects;
    • Mastery of Python and its data stack (NumPy, Pandas, scikit-learn);
    • Good understanding of SQL/RDBMS and familiarity with data warehouses (BigQuery, Snowflake, Redshift, etc.);
    • Experience building ETL data pipelines (Airflow, Prefect, Dagster, etc.);
    • Experience with modern Scrum-based Software Development Life Cycle (SDLC);
    • Strong communication skills to explain technical insights to non-technical stakeholders.

    Nice to have:

    • Hands-on experience with the Python web stack (FastAPI / Flask);
    • Proven expertise in designing and optimizing large-scale data pipelines;
    • Comprehensive understanding of data governance principles and data quality management practices;
    • Understanding of marketing and media metrics (e.g., what a conversion rate is and how it is calculated);
    • Exceptional leadership, communication, and collaboration skills, with a knack for guiding and nurturing teams.

       

    Educational requirements:

    • Bachelor’s degree in Computer Science, Information Systems, or a related discipline is preferred.

    What impact you’ll make 

    • Lead the design, development, testing, and maintenance of scalable data architectures, ensuring they align with business and technical objectives;
    • Spearhead the creation of sophisticated data pipelines using Python, leveraging advanced Snowflake capabilities such as Data Shares, Snowpipe, Snowpark, and more (a brief sketch of this kind of work follows this list);
    • Collaborate intensively with data scientists, product teams, and other stakeholders to define and fulfill intricate data requirements for cross-channel budget optimization solutions;
    • Drive initiatives for new data collection, refining existing data sources, and ensuring the highest standards of data accuracy and reliability;
    • Set the gold standard for data quality, introducing cutting-edge tools and frameworks to detect and address data inconsistencies and inaccuracies; and
    • Identify, design, and implement process improvements, focusing on data delivery optimization, automation of manual processes, and infrastructure enhancements for scalability.
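
    The following is a minimal, hypothetical sketch of the kind of Snowpark-based pipeline step referenced above; the connection parameters, table names, and columns are placeholders, not details of the actual project.

    # Hypothetical Snowpark sketch: read a raw table (e.g. loaded via Snowpipe)
    # and write a de-duplicated, cleaned copy. All identifiers are placeholders.
    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col

    connection_parameters = {
        "account": "<account>",
        "user": "<user>",
        "password": "<password>",
        "warehouse": "<warehouse>",
        "database": "<database>",
        "schema": "<schema>",
    }

    session = Session.builder.configs(connection_parameters).create()

    raw = session.table("RAW_EVENTS")
    clean = (
        raw.filter(col("EVENT_TYPE").is_not_null())   # drop incomplete rows
           .drop_duplicates("EVENT_ID")               # basic de-duplication
    )
    clean.write.mode("overwrite").save_as_table("CLEAN_EVENTS")
    session.close()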

    What you’ll get 

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.

  • 109 views · 7 applications · 3d

    Data Engineer (Python-first, ETL, Azure)

    Full Remote · Ukraine · 5 years of experience · English - B2

    COMPANY 
    Atlas Technica is a US-based MSP providing services in the hedge fund vertical. Founded in New York in 2016, it has been growing rapidly (twice a year) all along the way and these days comprises 200+ engineers and 10+ established offices across the US, UK, Ukraine, Hong Kong, and Singapore.

    Location/Type: Remote (Ukraine only)

    Hours: UA timezone, flexible

     

    We are seeking an experienced Data Engineer to lead the design, implementation, and ongoing maintenance of scalable data pipelines and cloud-native solutions. You will work extensively with Python, Azure cloud services, and SQL-based data models, with a strong focus on automation, reliability, and data security, and collaborate closely with cross-functional teams to turn data into actionable insights.

     

    Responsibilities:

    • Build and maintain efficient ETL workflows using Python 3, applying both object-oriented and functional paradigms.
    • Write comprehensive unit, integration, and end-to-end tests; troubleshoot complex Python traces.
    • Automate deployment and integration processes.
    • Develop Azure Functions, configure and deploy Storage Accounts and SQL Databases.
    • Design relational schemas, optimize queries, and manage advanced MSSQL features including temporal tables, external tables, and row-level security.
    • Author and maintain stored procedures, views, and functions.
    • Collaborate with cross-functional teams.
       

    Requirements:

    • English level – B2 or higher
    • 5+ years of proven experience as a Data Engineer
    • Programming
      • Proficient in Python 3, with both object-oriented and functional paradigms
      • Design and implement ETL workflows using sensible code patterns
      • Discover, navigate and understand third-party library source code
      • Author unit, integration and end-to-end tests for new or existing ETL (pytest, fixtures, mocks, monkey patching); a brief test sketch follows this list
      • Ability to troubleshoot esoteric Python traces encountered in the terminal, logs, or debugger
    • Tooling & Automation
      • Git for version control and branching strategies
      • Unix-like shells (Nix-based OS) in cloud environments
      • Author CI/CD configs and scripts (JSON, YAML, Bash, PowerShell)
    • Cloud & Serverless Patterns
      • Develop Azure Functions (HTTP, Blob, Queue triggers) using azure-functions SDK
      • Implement concurrency and resilience (thread pools, tenacity, rate limiters)
    • Azure SDKs & Services
      • Deploy and configure:
      • Functions, Web Apps & App Service Plans
      • Storage Accounts, Communication Services
      • SQL Database / Managed Instance
    • Data Security and Reliability
      • Maintain strict secrets and access discipline
      • Implement data quality checks and validation steps
    • Database Administration
      • Relational data modeling & schema design
      • Data partitioning strategies & temporal tables (system-versioned)
      • Query performance tuning (indexes, execution plans)
      • Selection of optimal data types
      • Complex T-SQL (windowing, CTEs, advanced joins)
      • Advanced MSSQL features (External Tables, Row-Level Security)
    • SQL Objects & Schema Management
      • Author and maintain tables, views, stored procedures, functions, and external tables (PolyBase)
    • Strong analytical and problem-solving skills, with meticulous attention to detail
    • Strong technical documentation skills
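
    As a rough illustration of the testing expectations above (pytest fixtures and monkeypatching around a small ETL step), here is a self-contained sketch; the transform, writer, and data are hypothetical, not part of the actual codebase.

    import pytest

    def transform(rows):
        """Keep rows with a positive amount and normalise the currency code."""
        return [
            {**row, "currency": row["currency"].upper()}
            for row in rows
            if row["amount"] > 0
        ]

    class DbWriter:
        """Stand-in for a real database client."""
        def insert(self, row):
            raise RuntimeError("no database access in unit tests")

    def load(rows, writer=None):
        writer = writer or DbWriter()
        for row in rows:
            writer.insert(row)

    @pytest.fixture
    def sample_rows():
        return [
            {"amount": 10, "currency": "usd"},
            {"amount": -5, "currency": "eur"},  # filtered out by transform()
        ]

    def test_transform_filters_and_normalises(sample_rows):
        assert transform(sample_rows) == [{"amount": 10, "currency": "USD"}]

    def test_load_without_a_real_database(sample_rows, monkeypatch):
        inserted = []
        # Monkeypatch the writer so load() never touches a real database.
        monkeypatch.setattr(DbWriter, "insert", lambda self, row: inserted.append(row))
        load(transform(sample_rows))
        assert inserted == [{"amount": 10, "currency": "USD"}]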

     

    WE OFFER:

    • Direct long-term contract with a US-based company
    • Full-time remote role aligned with EST
    • B2B set-up via SP (FOP), compensation in USD
    • Competitive compensation
    • Annual salary reviews and performance-based bonuses
    • Company equipment provided for work
    • Professional, collaborative environment with the ability to influence strategic decisions
    • Opportunities for growth within a scaling global organization
  • 37 views · 1 application · 26d

    Data Engineer /Strategy Consultant in Data Management

    Full Remote · Poland · 7 years of experience · English - B2

    Project duration: 10.12.2025 – 30.12.2026
    Experience: 7 years
    Domain: IT
    Format: Remote (from Poland)


    Overview
    We are looking for an experienced Data Engineer / Strategy Consultant with strong expertise in data management and Informatica technologies. This role involves working closely with clients, shaping data strategies, and supporting enterprise-level data transformation initiatives.


    Technical Requirements
    • Bachelor’s degree in Computer Science, Business, or a related field
    • At least 7 years of consulting experience in data management
    • Deep understanding of key data management areas: data governance, data quality, master data management, and data integration
    • Practical experience with Informatica tools, ideally IDMC; experience with PowerCenter, MDM, or Data Quality solutions is a plus
    • Ability to translate complex technical topics into clear business value
    • Strong communication and presentation skills, including experience interacting with C-level stakeholders


    Required Technical Skills
    • Informatica PowerCenter
    • IBM MDM


    Main Responsibilities
    • Assess clients’ current data management capabilities and develop strategic improvement plans using Informatica technologies
    • Lead workshops to gather business requirements and convert them into scalable IDMC-based solutions
    • Build business cases and ROI models to justify data management initiatives
    • Advise on data governance models, data quality programs, and MDM strategies
    • Develop and oversee implementation plans for enterprise-wide data management systems aligned with business objectives
    • Support onboarding and integration of Organizational Entities (OEs) into the Data Management System (DMS), including:
    – Metadata and data catalog configuration
    – Business glossary setup
    – CDQ rules configuration
    – Reference Data Management (RDM) distribution

     

     

     

  • 32 views · 0 applications · 26d

    Data Engineer – Python/PySpark

    Hybrid Remote · Poland · 6 years of experience · English - B2

    Project Duration: 02.01.2026 – 31.12.2026
    Experience: 6+ years
    Industry: Banking & Finance
    Work Format: Hybrid (mandatory office presence 3 days per week, Kraków, Poland)
     

    About the Project

    We are looking for a skilled Data Engineer to work on a banking sector project. The role focuses on building and optimizing data pipelines, working with big data tools, cloud platforms, and collaborating with international teams.


    Note: Background check may be required by the client.


    Requirements

    Must-Have:

    • 6–9 years of professional experience in data engineering or similar fields
    • Proficiency in Python and experience with PySpark for large-scale, distributed data processing (a brief PySpark sketch follows this list)
    • Hands-on experience with Microsoft Azure tools: Data Lake, Synapse, Data Factory, and Key Vault
    • Strong knowledge of Databricks for big data analysis and workflow orchestration
    • Advanced SQL/Oracle skills and understanding of relational database principles
    • Experience with data modeling, building data warehouses, and system performance optimization
    • Familiarity with CI/CD processes, Git, and general DevOps methodologies
    • Strong analytical skills and comfortable working in Agile teams
    • Fluent English
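
    As a loose illustration of the PySpark work mentioned above, here is a minimal sketch; the storage paths, container names, and columns are invented placeholders, not project specifics.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transactions-etl").getOrCreate()

    # Read raw transactions from a (placeholder) ADLS Gen2 path.
    raw = spark.read.parquet("abfss://raw@<storage-account>.dfs.core.windows.net/transactions/")

    # Aggregate settled transactions into daily totals per account.
    daily_totals = (
        raw.filter(F.col("status") == "SETTLED")
           .groupBy("account_id", F.to_date("booked_at").alias("booking_date"))
           .agg(F.sum("amount").alias("daily_amount"))
    )

    daily_totals.write.mode("overwrite").partitionBy("booking_date").parquet(
        "abfss://curated@<storage-account>.dfs.core.windows.net/daily_totals/"
    )
    spark.stop()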
       

    Technical Skills

    Core Skills:

    • Python
    • PySpark
    • Microsoft Azure
    • Data Lake
    • Azure Synapse
    • Azure Data Factory
    • Key Vault
    • Databricks
    • SQL
    • Oracle
    • CI/CD
    • Git
    • DevOps
  • 35 views · 0 applications · 26d

    Senior Data Engineer

    Full Remote · Ukraine · 4 years of experience · English - B2

    Project Description

    New long-term project for an Energy client, where we will create an application with integrated AI for comprehensive data analysis. You will be working closely with the customer's stakeholders as part of the Scrum team.

     

     

    Technical Requirements (Must Have):
    Python – 5+ years, production code (not just notebooks)
    SQL / PostgreSQL – 5+ years, complex queries, optimization
    Apache Kafka – event streaming, consumers, producers
    pandas / numpy – expert level, large datasets (1M+ rows)
    scikit-learn – clustering algorithms, metrics, hyperparameter tuning (a brief clustering sketch follows this list)
    ETL Pipelines – 4+ years building production data pipelines
    Text Processing – tokenization, cleaning, encoding handling
    Git – branching, PRs, code reviews
    English – B2+ written and verbal
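
    As a hedged illustration of the pandas / scikit-learn clustering stack listed above, here is a minimal sketch; the dataset, columns, and cluster count are invented for the example.

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    # Placeholder dataset: free-text incident notes.
    df = pd.read_csv("incident_notes.csv")
    texts = df["description"].fillna("").str.lower()   # minimal text cleaning

    vectorizer = TfidfVectorizer(max_features=5000, stop_words="english")
    X = vectorizer.fit_transform(texts)

    model = KMeans(n_clusters=8, n_init=10, random_state=42)
    labels = model.fit_predict(X)

    print("silhouette:", silhouette_score(X, labels))   # simple quality metric
    df["cluster"] = labels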

     

    Would Be a Plus
    Sentence-BERT / Transformers (HuggingFace ecosystem)
    MLflow or similar ML experiment tracking
    Topic Modeling (LDA, NMF)
    DBSCAN / Hierarchical Clustering
    FastAPI / Flask
    Azure DevOps
    Kafka Streams / ksqlDB
    BI & Visualization tools (Power BI, Tableau, Grafana, Apache Superset, Plotly/Dash, or similar)

    Nice to Have
    Energy / Utility / SCADA domain experience
    Time-series analysis
    Prometheus / Grafana monitoring
    On-premise ML infrastructure (no cloud APIs)
    Data modeling / dimensional modeling
    dbt (data build tool)

     

     

    Job Responsibilities

    Strong problem-solving and follow-up skills; must be proactive and take initiative
    Professionalism and ability to maintain the highest level of confidentiality
    Create robust code and translate business logic into project requirements
    Develop code using development best practices, and an emphasis on security best practices
    Leverage technologies to support business needs to attain high reusability and maintainability of current and newly developed systems
    Provide system design recommendations based on technical requirements
    Work independently on development tasks with a minimal amount of supervision

  • 55 views · 0 applications · 26d

    Middle Data Engineer IRC285068

    Full Remote · Ukraine · 3 years of experience · English - B2

    Description

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client’s mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

     

    Requirements

    MUST HAVE

    AWS Platform: Working experience with AWS data technologies, including S3
    Programming Languages: Strong programming skills in Python
    Data Formats: Experience with JSON, XML and other relevant data formats
    Healthcare Interoperability Tools: Previous experience with integration engines such as InterSystems, Lyniate, Redox, Mirth Connect, etc.

    Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion (a brief DMS sketch follows the must-have list below).

    CI/CD Tools: experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools
    Scripting and automation: experience in a scripting language such as Python, PowerShell, etc.
    Monitoring and Logging: Familiarity with monitoring & logging tools like CloudWatch, ELK, Dynatrace, Prometheus, etc.
    Source Code Management: Expertise with Git commands and associated VCS (GitLab, GitHub, Gitea or similar)
    Documentation: Experience with markdown and in particular Antora for creating technical documentation
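
    As a purely illustrative sketch of the DMS-based CDC requirement above, the snippet below creates and starts an ongoing-replication task with boto3; all ARNs, the schema name, and the region are placeholders.

    import json
    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Select every table in one (placeholder) Oracle schema.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-clinical-schema",
            "object-locator": {"schema-name": "CLINICAL", "table-name": "%"},
            "rule-action": "include",
        }]
    }

    task = dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-cdc-ingest",
        SourceEndpointArn="arn:aws:dms:<region>:<account>:endpoint:SOURCE",
        TargetEndpointArn="arn:aws:dms:<region>:<account>:endpoint:TARGET",
        ReplicationInstanceArn="arn:aws:dms:<region>:<account>:rep:INSTANCE",
        MigrationType="cdc",                     # log-based, ongoing replication only
        TableMappings=json.dumps(table_mappings),
    )

    dms.start_replication_task(
        ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
        StartReplicationTaskType="start-replication",
    )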

     

    NICE TO HAVE
    Strongly Preferred:
    Previous Healthcare or Medical Device experience
    Other data technologies such as Snowflake, Trino/Starburst
    Experience working with Healthcare Data, including HL7v2, FHIR and DICOM
    FHIR and/or HL7 Certifications
    Building software classified as Software as a Medical Device (SaMD)
    Understanding of EHR technologies such as EPIC, Cerner, etc.
    Experience implementing enterprise-grade cybersecurity and privacy-by-design in software products
    Experience working in Digital Health software
    Experience developing global applications
    Strong understanding of SDLC – Waterfall & Agile methodologies
    Software estimation
    Experience leading software development teams onshore and offshore

     

    Job responsibilities

    – Develops, documents, and configures systems specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.

    – Involved in planning of system and development deployment as well as responsible for meeting compliance and security standards.

    – API development using AWS services in a scalable, microservices based architecture

    – Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.

    – May document testing and maintenance of system updates, modifications, and configurations.

    – May act as a liaison with key technology vendor technologists or other business functions.

    – Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.

    – Leverages platform process expertise to assess whether existing standard platform functionality will solve a business problem or whether a customization solution would be required.

    – Test the quality of a product and its ability to perform a task or solve a problem.

    – Perform basic maintenance and performance optimization procedures in each of the primary operating systems.

    – Ability to document detailed technical system specifications based on business system requirements

    – Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.)

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. 

    Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

  • 111 views · 12 applications · 4d

    Senior Solana Engineer (Smart Wallet)

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    Senior Solana Developer - CoFo Neobank

    About the Project

    We're building CoFo Neobank, the first AI-first smart wallet on Solana that brings the banking app experience (like Revolut, Robinhood) into the on-chain environment.

    Our goal is to abstract blockchain complexity. We're building an architecture where every user gets a Smart Account (a programmable account, not a simple EOA) that supports multi-factor authentication (2/3 Multisig), access recovery, spending limits, and native integration of complex financial products (Staking, Yield, Perps, RWA).

    Core Responsibilities

    • Smart Account Architecture Development: Design and write custom Rust programs (Anchor) for managing user accounts. Implement multisig logic (Device Key + 2FA Key), key rotation, and access recovery (Social Recovery).

    • DeFi Composability (Integrations): Write adapters and CPI (Cross-Program Invocations) calls to integrate external protocols directly into the smart account:

    • Swap: Aggregation through Jupiter
    • Yield & Lending: Integration with Kamino, MarginFi, Meteora
    • Perps: Integration with Drift Protocol

    • Security and Access Control: Implement a spending limits system, protocol whitelisting, and replay attack protection.

    • Portfolio Logic: Develop on-chain structures for position tracking (storing data about deposits, debts, PnL) for fast frontend/AI reading.

    • Gas Abstraction: Implement mechanisms for paying fees on behalf of users (Fee Bundling / Gas Tank).

    Requirements (Hard Skills)

    • Expert Rust & Anchor: Deep understanding of the Solana Sealevel runtime, memory management, PDAs, and Compute Units (CU) limitations.

    • Account Abstraction Experience: Understanding of how to build smart contract wallets that differ from standard system accounts.

    • DeFi Integration Experience: You've already worked with SDKs or IDLs of major Solana protocols (Jupiter, Kamino, Drift, etc.). You understand what CPI is and how to safely call external code.

    • Cryptography: Understanding of signature operations (Ed25519), transaction verification, and building secure multisig schemes.

    • Security Mindset: Experience with audits, knowledge of attack vectors on Solana (re-entrancy, account substitution, ownership checks).

    Nice to Have

    • Experience with Privy (for authentication)
    • Understanding of cross-chain bridges (Wormhole/LayerZero) for implementing deposits from other networks
    • Experience with tokenized assets (RWA) and the Token-2022 standard

    Tech Stack

    • Solana (Rust, Anchor Framework)
    • Integrations: Jupiter, Kamino, Drift, MarginFi
    • Infrastructure: Helius, Privy

    We Offer

    • Work on a product that's changing UX in DeFi
    • Complex architectural challenges (not just another token fork, but sophisticated wallet infrastructure)
    • Competitive compensation in stablecoins/fiat + project options/tokens

  • 61 views · 8 applications · 25d

    Software Engineer (Data)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    We are looking for a Software Engineer who has spent at least 5 years in Data Engineering but still has a very strong grasp of Computer Science basics.

    We aren't just building pipelines; we are scaling them to handle billions of events. This means we need someone who understands algorithms and Data Science fundamentals to make sure our data is optimized and our logic is efficient. We value people who are independent, take ownership, and aren't afraid to dive into day-to-day tasks to get the job done.

    What you will do:

    • Develop Java-based microservices running on K8S.
    • Build and scale data pipelines using Clickhouse, Spark, and Kafka.
    • Tackle challenges like data duplication, schema versioning, and high availability.
    • Handle complex ETL workflows and data ingestion from different sources.
    • Optimize queries to make sure everything runs fast and reliably at scale.

    What we expect from you:

    • 3+ years of experience with Java and 5+ years in Data Engineering.
    • Strong Algorithm fundamentals and a deep understanding of Data Science (DS).
    • Experience with high-volume, real-time systems (billions of events daily).
    • Technical stack: Airflow, K8S, Clickhouse, Snowflake, Redis, Spark, and Kafka.
    • A BSc in Computer Science or a similar degree from a top university.
    • A self-starter attitude β€” we are a small team, so being independent is key.

    Experience in fintech, crypto, or trading is a big plus. 

  • 71 views · 12 applications · 3d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Dataforest is looking for a Senior Data Engineer to join our team and work on the Dropship project, a cutting-edge data intelligence platform for e-commerce analytics.
    You will be responsible for developing and maintaining a scalable data architecture that powers large-scale data collection, processing, analysis, and integrations.

    If you are passionate about data optimization, system performance, and architecture, we’re waiting for your CV!

    Requirements:
    • 4+ years of commercial experience with Python;
    • Advanced experience with SQL DBs (optimisations, monitoring, etc.);
    • PostgreSQL – must have;
    • Solid understanding of ETL principles (architecture, monitoring, alerting, finding and resolving bottlenecks);
    • Experience with message brokers: Kafka / Redis;
    • Experience with Pandas;
    • Familiar with AWS infrastructure (boto3, S3 buckets, etc.);
    • Experience working with large volumes of data;
    • Understanding of the principles of medallion architecture.

    Will Be a Plus:
    • Understanding of NoSQL DBs (Elastic);
    • TimeScaleDB;
    • PySpark;
    • Experience with e-commerce or fintech.
     

    Key Responsibilities:

    • Develop and maintain a robust and scalable data processing architecture using Python.

    • Design, optimize, and monitor data pipelines using Kafka and AWS SQS (a minimal SQS consumer sketch follows this list).

    • Implement and optimize ETL processes for various data sources.

    • Manage and optimize SQL and NoSQL databases (PostgreSQL, TimeScaleDB, Elasticsearch).

    • Work with AWS infrastructure to ensure reliability, scalability, and cost efficiency.

    • Proactively identify bottlenecks and suggest technical improvements.
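
    The snippet below is a minimal, hypothetical long-polling SQS consumer of the kind the pipeline work above implies; the queue URL and message shape are placeholders, and a real pipeline would add batching, retries, and monitoring.

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/<account-id>/product-events"

    def poll_once():
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,     # long polling keeps cost and latency down
        )
        for msg in resp.get("Messages", []):
            event = json.loads(msg["Body"])
            print("processing", event.get("product_id"))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if __name__ == "__main__":
        poll_once()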

     

     We offer:

    • Working in a fast-growing company;

    • Great networking opportunities with international clients, challenging tasks;

    • Personal and professional development opportunities;
    • Competitive salary fixed in USD;
    • Paid vacation and sick leaves;
    • Flexible work schedule;
    • Friendly working environment with minimal hierarchy;
    • Team building activities, corporate events.


     

  • 62 views · 4 applications · 3d

    Data Engineer

    Office Work · Ukraine (Kyiv) · Product · 5 years of experience · English - B2

    About Us:

    Atto Trading, a dynamic quantitative trading firm founded in 2010 and leading in global high-frequency strategies, is looking for a Data Engineer to join our team.

    We are expanding an international, diverse team with experts in trading, statistics, engineering, and technology. Our disciplined approach, combined with rapid market feedback, allows us to quickly turn ideas into profit. Our environment of learning and collaboration allows us to solve some of the world’s hardest problems, together. As a small firm, we remain nimble and hold ourselves to the highest standards of integrity, ingenuity, and effort. 

    Role Highlights:
    We are seeking an experienced Senior Data Engineer to design, build, and maintain our comprehensive Data Lake for a fast-growing number of research and production datasets. This role combines hardware and platform infrastructure expertise with data engineering excellence to support our rapidly growing data assets (~200TB current, scaling ~100TB/year). 
     

    Responsibilities:

    • Architect and manage high-performance, scalable on-premise data storage systems optimized for large-scale data access and analytics workloads
    • Configure and maintain compute clusters for distributed data processing
    • Plan capacity and scalability roadmaps to accommodate 100TB+ annual data growth
    • Design and implement efficient monitoring and alerting systems to forecast growth trends and proactively react to critical states
    • Design, create, automate, and maintain various data pipelines
    • Enhance existing and set up new "data checks" and alerts to determine when the data is "bad"
    • Design and implement a comprehensive on-premise Data Lake system connected to VAST storage solution for normalized market data across:
      • US Equities, US Futures, and SIP feeds
      • Other market data sources that will be further added
      • Security Definition data for various markets
      • Various private column data
    • Build and operate end‑to‑end data pipelines and SLA/SLO monitoring to ensure data quality, completeness, and governance
    • Analyze existing data models, usage patterns, and access frequencies to identify bottlenecks and optimization opportunities
    • Develop metadata and catalog layers for efficient data discovery and self‑service access
    • Design and deploy event‑driven architectures for near real‑time market data processing and delivery
    • Orchestrate ETL/ELT data pipelines using tools like Prefect (or Airflow), ensuring robustness, observability, and clear operational ownership (a minimal Prefect sketch follows this list)
    • Ensure fault tolerance, scalability, and high availability across existing systems
    • Partner with traders, quantitative researchers, and other stakeholders to understand use cases and continuously improve the usability, performance, and reliability of the Data Lake  
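
    As a loose sketch of the Prefect-style orchestration mentioned above (written against the Prefect 2/3 flow and task decorators), the example below strings an ingest step and a simple data check together; the symbol, data shape, and check are illustrative only.

    from prefect import flow, task

    @task(retries=3, retry_delay_seconds=60)
    def ingest_day(symbol: str) -> list[dict]:
        # Placeholder: read one day of normalized market data from the Data Lake.
        return [{"symbol": symbol, "trades": 1_000_000}]

    @task
    def check_completeness(rows: list[dict]) -> None:
        # A simple "data check": fail the run loudly if the day looks empty.
        if not rows or rows[0]["trades"] == 0:
            raise ValueError("suspiciously empty trading day")

    @flow(log_prints=True)
    def daily_market_data(symbol: str = "ES"):
        rows = ingest_day(symbol)
        check_completeness(rows)
        print(f"{symbol}: {rows[0]['trades']} trades ingested")

    if __name__ == "__main__":
        daily_market_data()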


    Requirements:

    • 5+ years of experience in data engineering or data platform roles
    • Proven experience with large‑scale data infrastructure (hundreds of TBs of data, high‑throughput pipelines)
    • Strong understanding of market data formats and financial data structures (e.g., trades, quotes, order books, corporate actions)
    • Experience designing and modernizing data infrastructure within on-premise solutions
    • Bachelor’s degree in Computer Science, Engineering or related field required; Master’s degree preferred or equivalent practical experience


    Tech Skills:

    • Data Engineering - Spark, Iceberg (or similar table formats), Trino/Presto, Parquet optimization
    • ETL pipelines - Prefect/Airflow or similar DAG-oriented tools
    • Infrastructure - High-performance networking and compute
    • Storage Systems - High-performance distributed storage, NAS/SAN, object storage
    • Networking - Low-latency networking (awareness of DPDK and kernel-bypass technologies); data center infrastructure basics
    • Programming - Python (production‑grade), SQL, building APIs (e.g., FastAPI)
    • Data Analysis - Advanced SQL, Tableau (or similar BI tools), data profiling tools

    Nice to have:

    • Experience or background in HFT, quantitative finance, or financial services

    What we offer:

    • Competitive compensation package
    • Performance-based bonus opportunities
    • Healthcare & Sports/gym budget
    • Mental health support, including access to therapy
    • Paid time off (25 days)
    • Relocation support (where applicable)
    • International team meet-ups
    • Learning and development support, including courses and certifications
    • Access to professional tools, software, and resources
    • Fully equipped workstations with high-quality hardware
    • Modern office with paid lunches
       

    Our motivation:

    We are a company committed to staying at the forefront of technology. Our team is passionate about continual learning and improvement. With no external investors or customers, we are the primary users of the products we create, giving you the opportunity to make a real impact on our company's growth.

     

    Ready to advance your career? Join our innovative team and help shape the future of trading on a global scale. Apply now and let's create the future together!

  • 46 views · 2 applications · 22d

    Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    Data Engineer 
     

    Full-time. Remote. B2B. 
    Working time zone: EET (Ukraine). 
    Location of candidates: Ukraine
     

    About the company: It is a US-based Managed IT Services (MSP) company, founded in 2016.

    Services: IT management, user support, cybersecurity, cloud solutions (Microsoft Azure, M365), and data engineering.
    Core clients: Hedge funds, investment and asset management firms (financial sector focus) across North America, Europe, and Asia.

     

    As a Data Engineer, you will be responsible for designing, implementing, and maintaining robust data pipelines and cloud-native solutions that support scalable analytics and operational efficiency. This role requires deep expertise in Python programming, Azure cloud services, and SQL-based data modeling, with a strong emphasis on automation, reliability, and security.

    Currently, the data processing system is built entirely in pure Python, without external ETL or data integration platforms (such as Snowflake or Data Factory). The company plans to continue relying on Python as the core technology for data processing, making it essential that the new engineer has strong, hands-on expertise in Python-based ETL development, including automation, testing, error handling, and code stability.
    You will play a key role in evolving the current data platform as the company moves toward adopting Microsoft Fabric, while maintaining core Python ETL logic.
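
    For illustration only, here is a very small "pure Python" ETL step with per-row error handling in the spirit described above; the source URL, columns, and output file are placeholders.

    import csv
    import logging
    import urllib.request

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("etl")

    def extract(url: str) -> list[dict]:
        with urllib.request.urlopen(url, timeout=30) as resp:
            lines = resp.read().decode("utf-8").splitlines()
        return list(csv.DictReader(lines))

    def transform(rows: list[dict]) -> list[dict]:
        out = []
        for row in rows:
            try:
                out.append({"fund": row["fund"].strip(), "nav": float(row["nav"])})
            except (KeyError, ValueError) as exc:
                log.warning("skipping bad row %r: %s", row, exc)  # fail soft per row
        return out

    def load(rows: list[dict], path: str = "nav_clean.csv") -> None:
        with open(path, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=["fund", "nav"])
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        load(transform(extract("https://example.com/nav.csv")))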

    This role will work closely with another Data Engineer on internal company projects.

    Team: Cloud Engineering Team

    Reports to: Cloud DevOps Manager
     

    Responsibilities:
    - Build and maintain efficient ETL workflows using Python 3, applying both object-oriented and functional paradigms.

    - Write comprehensive unit, integration, and end-to-end tests; troubleshoot complex Python traces.

    - Automate deployment and integration processes.

    - Develop Azure Functions, configure and deploy Storage Accounts and SQL Databases.

    - Design relational schemas, optimize queries, and manage advanced MSSQL features including temporal tables, external tables, and row-level security.

    - Author and maintain stored procedures, views, and functions.

    - Collaborate with cross-functional teams.
     

    Requirements:
    - English level – B2 or higher (English speaking environment)
    - 5+ years of proven experience as a Data Engineer
    - Proficient in Python 3, with both object-oriented and functional paradigms
    - Experience with Python (vanilla), Dagster, Prefect, Apache Airflow, Apache Beam
    - Design and implement ETL workflows using sensible code patterns

    - Discover, navigate and understand third-party library source code

    - Author unit, integration and end-to-end tests for new or existing ETL (pytest, fixtures, mocks, monkey patching)

    - Ability to troubleshoot esoteric Python traces encountered in the terminal, logs, or debugger

    - Git (branching), Unix-like shells (Nix-based) in cloud environments
    - Author CI/CD configs and scripts (JSON, YAML, Bash, PowerShell)

    - Develop Azure Functions (HTTP, Blob, Queue triggers) using azure-functions SDK

    - Implement concurrency and resilience (thread pools, tenacity, rate limiters)

    - Deploy and configure: Functions, Web Apps & App Service Plans, Storage Accounts, Communication Services, SQL Database / Managed Instance

    - Secrets/access management, data validation, data quality checks

    - Relational data modeling, schema design, data partitioning strategies, and temporal tables (system-versioned) 

    - Query performance tuning (indexes, execution plans)

    - Selection of optimal data types

    - Complex T-SQL (windowing, CTEs, advanced joins)
    - Advanced MSSQL features (External Tables, Row-Level Security)
    - SQL Objects & Schema Management: Author and maintain tables, views, stored procedures, functions, and external tables (PolyBase)
    - Strong analytical, problem-solving, and documentation skills

    - Microsoft certifications would be a plus.
     

    Work conditions:

    - B2B. Remote. Full-time.

    - Competitive salary and a performance-based bonus of up to 10% of the annual salary, paid at the end of the year.

    - Paid vacation (4 weeks / 20 working days) to start, increasing with years of service & Sick leave.

    - Official Ukrainian public holidays are days off

    - Professional development: company-paid courses and certifications. Successful certification exams are rewarded with several paid days off or a monetary bonus.
     

    Hiring stages:

    - Interview with the recruiter of the recruitment agency ~20–30 min (call recorded)

    - Personality test ~20 min & Cognitive test ~5 min (to be completed on your own) 

    - Technical interview

    - Final interview

    - Offer

    We are a recruitment agency helping our client find a Data Engineer. If you have any questions or would like to know more about the company, feel free to reach out to us.

  • 84 views · 6 applications · 20d

    Salesforce Automation Engineer

    Part-time · Full Remote · Worldwide · 1 year of experience · English - B2

    Overview

    We are looking for a developer to build a custom registration page for our client's annual C-suite summit (200 pre-invited guests). The system must integrate directly with the client's existing Salesforce instance and use token-based authentication (magic links) to prevent unauthorized registrations.

    What Needs to be built

    1. Salesforce Configuration:

    • Create custom fields on Contact object: Registration_Token__c (text, unique), Token_Used__c (checkbox), Registration_URL__c (formula field)
    • Generate 200 unique random tokens for existing Contact records
    • Create formula field that combines token with our website URL
    • Export CSV with: Contact Email, Name, Registration_URL for Mailchimp integration

    2. Registration Web Page:

    • Simple HTML/JavaScript form hosted on our website (globalmaritimeforum.org)
    • Fields: Full Name, Company, Job Title, Dietary Requirements, Planned Arrival Date
    • Extract token from URL parameter (?token=xyz)

    • Validate token against Salesforce in real-time (check if exists and unused)

    • Display form if valid, show error message if invalid/expired/used

    • Include Google reCAPTCHA v3 for bot protection

    3. Salesforce API Integration:

    • Connect the form to Salesforce via the REST API (a brief Python sketch of these calls follows this list)
    • On submission: Update the Contact record (identified by token) with form data
    • Mark token as used (Token_Used__c = TRUE)
    • Update Contact status to "Confirmed Attendance"
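
    The sketch below shows, in Python rather than the page's client-side JavaScript, roughly how the token lookup and Contact update described above could be made against the standard Salesforce REST endpoints. The instance URL, API version, access token, and the Status__c field name are assumptions; Registration_Token__c and Token_Used__c are the fields defined in section 1.

    import requests

    INSTANCE = "https://<yourinstance>.my.salesforce.com"
    API = f"{INSTANCE}/services/data/v59.0"
    HEADERS = {"Authorization": "Bearer <access_token>"}

    def find_contact_by_token(token: str):
        """Return the Contact Id if the token exists and is unused, else None."""
        soql = (
            "SELECT Id FROM Contact "
            f"WHERE Registration_Token__c = '{token}' AND Token_Used__c = false"
        )  # in production, sanitize/escape the token before embedding it in SOQL
        r = requests.get(f"{API}/query", headers=HEADERS, params={"q": soql})
        r.raise_for_status()
        records = r.json()["records"]
        return records[0]["Id"] if records else None

    def confirm_registration(contact_id: str, form_data: dict) -> None:
        """Write form data back to the Contact and burn the token."""
        payload = {
            **form_data,
            "Token_Used__c": True,
            "Status__c": "Confirmed Attendance",   # hypothetical custom status field
        }
        r = requests.patch(f"{API}/sobjects/Contact/{contact_id}",
                           headers=HEADERS, json=payload)
        r.raise_for_status()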

    Technical Requirements

    • Experience with Salesforce administration and custom fields

    • Proficiency with the Salesforce REST API

    • JavaScript for client-side validation and API calls

    • Understanding of token-based authentication systems

    • Mobile-responsive design

    What Will be provided

    • Salesforce admin credentials (full access)
    • Website hosting access for page deployment
    • List of 200 Contact records already in Salesforce
    • Field specifications and data mapping requirements
    • Testing support

    Deliverables

    1. Configured Salesforce fields with tokens generated for 200 Contacts
    2. Exported CSV file ready for Mailchimp (emails + magic link URLs)
    3. Functional registration page deployed on our website
    4. Documentation for managing the system
    5. Testing completed with dummy records
  • 29 views · 1 application · 19d

    Senior DBA/BI Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    ABOUT CLIENT

    This independent research group focuses on tracking and forecasting infectious diseases. Initially studying illnesses like influenza and dengue, they later expanded to global outbreaks such as COVID-19. The team gathers unique data sources, extracts key indicators of disease activity, and shares them publicly to support real-time monitoring and short-term forecasts. Their mission is to strengthen global readiness for future epidemics through data-driven insights and predictive modeling.
     

    PROJECT TECH STACK

    Apache Airflow 3.0, PostgreSQL
     

    PROJECT STAGE

    Live product
     

    QUALIFICATIONS AND SKILLS

    • 5+ years of experience with Airflow, with a focus on quality and depth of understanding, not just duration on the platform.
    • Proven experience designing, deploying, and maintaining production-grade ETL workflows using Apache Airflow, with a strong understanding of DAG orchestration, performance optimization, and operation in managed environments such as Astronomer (a minimal DAG sketch follows this list).
    • Senior-level expertise in scaling, best practices, maintainability, and cost management, not just someone who can build pipelines.
    • Strong dbt skills would be a plus (the team has not yet adopted dbt, but is considering it).
    • Solid knowledge of Postgres optimization and the ability to clearly explain not only how to implement optimizations but also why they are needed.
    • Excellent communication skills are needed as the team needs guidance to enhance their capabilities more than immediate technical fixes.
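
    For orientation only, here is a minimal TaskFlow-style DAG sketch; it assumes the decorator API, which Airflow 3 exposes from airflow.sdk (airflow.decorators in 2.x), and the extract/load bodies are placeholders rather than project logic.

    from datetime import datetime

    try:
        from airflow.sdk import dag, task          # Airflow 3.x
    except ImportError:
        from airflow.decorators import dag, task   # Airflow 2.x fallback

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def disease_signals():
        @task
        def extract():
            # Placeholder: pull raw indicator data from an upstream source.
            return [{"region": "EU", "cases": 120}]

        @task
        def load(rows):
            # Placeholder: write validated rows into PostgreSQL.
            print(f"loading {len(rows)} rows")

        load(extract())

    disease_signals()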
       

    RESPONSIBILITIES

    • Ensure and help define best practices that promote maintainability and high code quality across the team.
    • Assist in resolving performance issues as they arise and proactively identify potential performance concerns.
    • Help the team detect bugs and problem areas early in the development process.
    • Guide the building of scalable solutions and the effective management of hosting costs.
  • 37 views · 4 applications · 19d

    Middle/Senior/Lead Data Engineer

    Full Remote · Ukraine · 3 years of experience · English - B2

    An AWS Data Engineer designs, develops, and maintains scalable data solutions using AWS cloud services.
    Key Responsibilities:
        • Design, build, and manage ETL (Extract, Transform, Load) pipelines using AWS services (e.g., Glue, Lambda, EMR, Redshift, S3); a brief Glue sketch follows this list.
        • Develop and maintain data architecture (data lakes, warehouses, databases) on AWS.
        • Implement data quality and governance solutions.
        • Automate data workflows and monitor pipeline health.
        • Ensure data security and compliance with company policies.
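
    As a small, hypothetical illustration of driving such a pipeline from Python, the snippet below starts an AWS Glue job run with boto3 and checks its state; the job name, argument, and region are placeholders.

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    run = glue.start_job_run(
        JobName="daily-orders-etl",
        Arguments={"--target_prefix": "s3://analytics-lake/curated/orders/"},
    )

    status = glue.get_job_run(JobName="daily-orders-etl", RunId=run["JobRunId"])
    print(status["JobRun"]["JobRunState"])   # e.g. RUNNING, SUCCEEDED, FAILED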
    Required Skills:
        • Proficiency with AWS cloud services, especially data-related offerings (S3, Glue, Redshift, Athena, EMR, Kinesis, Lambda).
        • Strong SQL and Python skills.
        • Experience with ETL tools and frameworks.
        • Familiarity with data modelling and warehousing concepts.
        • Knowledge of data security, access management, and best practices in AWS.
    Preferred Qualifications:
        • AWS certifications (e.g., AWS Certified Data Analytics – Speciality, AWS Certified Solutions Architect).
        • Background in software engineering or data science.

    • Hands-on experience with Oracle Database and log-based Change Data Capture (CDC) replication using AWS Database Migration Service (DMS) for near real-time data ingestion.

     

    Job Responsibilities

    • Develops, documents, and configures systems specifications that conform to defined architecture standards and address business requirements and processes in cloud development & engineering.
    • Involved in planning of system and development deployment, as well as responsible for meeting compliance and security standards.
    • API development using AWS services in a scalable, microservices-based architecture
    • Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.
    • May document testing and maintenance of system updates, modifications, and configurations.
    • May act as a liaison with key technology vendor technologists or other business functions.
    • Function Specific: Strategically design technology solutions that meet the needs and goals of the company and its customers/users.
    • Leverages platform process expertise to assess if existing standard platform functionality will solve a business problem or if a customisation solution would be required.
    • Test the quality of a product and its ability to perform a task or solve a problem.
    • Perform basic maintenance and performance optimisation procedures in each of the primary operating systems.
    • Ability to document detailed technical system specifications based on business system requirements
    • Ensures system implementation compliance with global & local regulatory and security standards (e.g., HIPAA, SOC 2, ISO 27001, etc.)

    Department/Project Description

    The GlobalLogic technology team is focused on next-generation health capabilities that align with the client’s mission and vision to deliver Insight-Driven Care. This role operates within the Health Applications & Interoperability subgroup of our broader team, with a focus on patient engagement, care coordination, AI, healthcare analytics, and interoperability. These advanced technologies enhance our product portfolio with new services while improving clinical and patient experiences.

  • 21 views · 1 application · 19d

    Cloud DevOps Engineer

    Hybrid Remote · Ukraine · 4 years of experience · English - B2

    Hello everyone 👋

    At Intobi, we're a software and product development company passionate about driving innovation and progress.

    We help our clients succeed by delivering custom-built tech solutions designed to meet their unique needs.

    Our expertise lies in developing cutting-edge Web and Mobile applications.

     

    We’re hiring a Cloud DevOps Engineer to drive the design, automation, and reliability of our multi-cloud infrastructure. This is a key role in a fast-paced startup environment, where you’ll play a critical part in building, managing, and securing our cloud-native platform across AWS, Azure, and GCP.

     

    Cyngular is an Israeli cybersecurity company focused on cloud investigation and automated incident response. The platform helps security teams detect, investigate, and respond to complex threats across AWS, Azure, and GCP.

     

    Role Overview:

    As a Cloud DevOps Engineer, you will be responsible for implementing CI/CD pipelines, managing infrastructure as code, automating cloud operations, and ensuring high availability and security across environments. You’ll work closely with development, security, and data teams to enable fast, reliable, and secure deployments.

     

    Key Responsibilities:

    –  Design, build, and maintain infrastructure using Terraform, CloudFormation, or Bicep.

    –  Manage CI/CD pipelines (GitHub Actions, GitLab CI, Azure DevOps, etc.) across multiple cloud platforms.

    –  Automate provisioning and scaling of compute, storage, and networking resources in AWS, Azure, and GCP.

    –  Implement and maintain monitoring, logging, and alerting solutions (CloudWatch, Stackdriver, Azure Monitor, etc.).

    –  Harden environments according to security best practices (IAM, service principals, KMS, firewall rules, etc.).

    –  Support cost optimization strategies and resource tagging/governance.

    –  Collaborate with engineers to streamline developer workflows and cloud-based deployments.

     

    Required Skills:

    –  4+ years of experience in DevOps, Site Reliability Engineering, or Cloud Engineering.

    –  Hands-on experience with at least two major cloud providers (AWS, Azure, GCP); familiarity with the third.

    –  Proficiency in infrastructure as code (Terraform required; CloudFormation/Bicep is a plus).

    –  Experience managing containers and orchestration platforms (EKS, AKS, GKE, or Kubernetes).

    –  Strong knowledge of CI/CD tooling and best practices.

    –  Familiarity with secrets management, role-based access controls, and audit logging.

    –  Proficiency in scripting with Python, Bash, or PowerShell.

     

    The position requires a high level of English: reading, writing, and speaking.

    This role is not suitable for juniors or those with little to no experience.

    We’re looking for professional DevOps engineers who are passionate about technology and tools, and who aren’t afraid to take on significant responsibilities, including self-directed learning.

     

    Please send your CV here or via email

     

    Should the first stage be successfully completed, you’ll be invited to a personal interview.
