DataArt

Joined in 2013
45% answers
DataArt is a global software engineering firm. With over 20 years of experience, teams of highly trained engineers around the world, deep industry sector knowledge, and ongoing technology research, we help clients create custom software that improves their operations and opens new markets. DataArt started out as a company of friends and has a special culture that distinguishes it from other IT outsourcers:
- Flat structure. There are no "bosses" and "subordinates".
- We hire people to the company, not to a project. If a project (or your work on it) ends, you move to another project or to paid "Idle" time.
- Flexible schedule, the ability to change projects, work from home, and try yourself in different roles.
- Minimal bureaucracy and micromanagement, and convenient corporate services.
  • 32 views · 4 applications · 29d

    System Administrator

    Office Work · Ukraine (Kharkiv) · 1 year of experience · A1 - Beginner

    Position overview

     

    DataArt is a global software engineering firm and a trusted technology partner for market leaders and visionaries. Our world-class team designs and engineers data-driven, cloud-native solutions to deliver immediate and enduring business value.

    We promote a culture of radical respect, prioritizing your personal well-being as much as your expertise. We stand firmly against prejudice and inequality, valuing each of our employees equally.

    We respect the autonomy of others before all else, offering remote, onsite, and hybrid work options. Our Learning and development centers, R&D labs, and mentorship programs encourage professional growth.

    Our long-term approach to collaboration with clients and colleagues alike focuses on building partnerships that extend beyond one-off projects. We provide the ability to switch between projects and technology stacks, creating opportunities for exploration through our learning and networking systems to advance your career.

    We are searching for a specialist for the position of System Administrator for our local IT helpdesk and support team.

     

    Technology stack

     

    Operating Systems: Microsoft Windows, macOS, Linux
    Network Technologies: AD, DNS, DHCP, NAT, VPN, VLAN, Group Policies

     

    Responsibilities

    • Configure and deploy hardware and software for new employees
    • Ensure seamless connectivity and troubleshoot network issues
    • Set up printers, scanners, and other peripherals
    • Monitor system performance and promptly address any disruptions
    • Provide efficient technical support to end-users, troubleshooting issues and answering queries
    • Maintain accurate records of system configurations and changes
    • Regularly update software, perform backups, and manage incidents
    • Implement security measures and adhere to industry standards
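Much of the network troubleshooting above lends itself to small scripts. As an illustration only (the subnet and host addresses below are invented, not DataArt's network), a check that flags hosts whose addresses fall outside their VLAN's expected subnet:

```python
import ipaddress

def misplaced_hosts(hosts: list[str], vlan_subnet: str) -> list[str]:
    """Return the hosts whose IP addresses are outside the VLAN's subnet,
    e.g. machines that picked up a lease from the wrong DHCP scope."""
    net = ipaddress.ip_network(vlan_subnet)
    return [h for h in hosts if ipaddress.ip_address(h) not in net]

# Example: one host is on the wrong subnet for VLAN 10 (10.0.10.0/24).
print(misplaced_hosts(["10.0.10.5", "10.0.20.7"], "10.0.10.0/24"))
```

The same idea extends to checking DNS records or Group Policy scope against an inventory export.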

    Requirements

    • Excellent understanding of hardware and software troubleshooting
    • Familiarity with Windows Server and desktop operating systems
    • Strong understanding of Active Directory (AD), DNS, DHCP, NAT, VPN, VLAN, and Group Policies
    • Comfortable working with Windows, macOS, and Linux
    • Effective communication with internal customers and colleagues
    • Initiative, ability to multitask, and desire to collaborate
    • Excellent verbal and written communication skills

    Nice to have

    • Experience with VMware ESXi
    • Proficiency in PowerShell scripts for task automation
    • Experience managing Windows Servers
    • Experience managing access rights and File Server quotas
    • Experience with Zabbix and Syslog
    • CCNA, MCSA, or other relevant vendor certifications are highly valued

     

    We offer

     

    Vacation

    20 paid days

     

    Health insurance

    We help you take out an insurance policy for yourself and your loved ones

     

    Sick pay

    10 days without a doctor's note; beyond that, a note is required

     

    Time off for state holidays

    According to the official calendar

     

    Pleasant environment

     

    Comfort service

    Solving technical and everyday problems at work

  • 7 views · 0 applications · 1d

    Data Architect (AWS and Python FastAPI)

    Full Remote · Ukraine · 6 years of experience · B2 - Upper Intermediate

    Client

    Our client is a leading legal recruiting company focused on building a cutting-edge data-driven platform for lawyers and law firms. The platform consolidates news and analytics, real-time deal and case tracking from multiple sources, firm and lawyer profiles with cross-linked insights, rankings, and more, all in one unified place.

     

    Position overview

    We are seeking a skilled Data Architect with strong expertise in AWS technologies (Step Functions, Lambda, RDS for PostgreSQL), Python, and SQL to lead the design and implementation of the platform's data architecture. This role involves defining data models, building ingestion pipelines, applying AI-driven entity resolution, and managing scalable, cost-effective infrastructure aligned with cloud best practices.

     

    Responsibilities

    • Define entities, relationships, and persistent IDs; enforce the Fact schema with confidence scores, timestamps, validation status, and source metadata.
    • Blueprint ingestion workflows from law firm site feeds; normalize data, extract entities, classify content, and route low-confidence items for review.
    • Develop a hybrid of deterministic rules and LLM-assisted matching; configure thresholds for auto-accept, manual review, or rejection.
    • Specify Ops Portal checkpoints, data queues, SLAs, and create a corrections/version history model.
    • Stage the phased rollout of data sources, from ingestion through processing, storage, and replication to management via the CMS.
    • Align architecture with AWS and Postgres baselines; design for scalability, appropriate storage tiers, and cost-effective compute and queuing solutions.
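To make the entity-resolution responsibilities concrete, here is a minimal sketch of the confidence-threshold routing and the Fact record described above. The field names and the 0.90/0.60 cut-offs are illustrative assumptions, not the client's actual schema or policy:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative thresholds; real values would be configured per entity type.
AUTO_ACCEPT = 0.90
MANUAL_REVIEW = 0.60

@dataclass
class Fact:
    """Sketch of a Fact record: value plus confidence, timestamp,
    validation status, and source metadata."""
    entity_id: str
    value: str
    confidence: float
    source: str
    validation_status: str = "pending"
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def route_match(confidence: float) -> str:
    """Route a candidate match: auto-accept, send to the Ops Portal
    review queue, or reject outright."""
    if confidence >= AUTO_ACCEPT:
        return "auto_accept"
    if confidence >= MANUAL_REVIEW:
        return "manual_review"
    return "reject"
```

In practice the deterministic rules would set high confidences for exact-key matches, with LLM-assisted scoring filling the middle band that lands in manual review.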

     

    Requirements

    • Proven experience as a Data Architect or Senior Data Engineer working extensively with AWS services.
    • Strong proficiency in Python development, preferably with FastAPI or similar modern frameworks.
    • Deep understanding of data modeling principles, entity resolution, and schema design for complex data systems.
    • Hands-on experience designing and managing scalable data pipelines, workflows, and AI-driven data processing.
    • Familiarity with relational databases such as PostgreSQL.
    • Solid experience in data architecture, including data modeling; knowledge of different data architectures such as medallion architecture and dimensional modeling.
    • Strong knowledge of cloud infrastructure cost optimization and performance tuning.
    • Excellent problem-solving skills and ability to work in a collaborative, agile environment.

     

    Nice to have

    • Experience within legal tech or recruiting data domains.
    • Familiarity with Content Management Systems (CMS) for managing data sources.
    • Knowledge of data privacy, security regulations, and compliance standards.
    • Experience with web scraping.
    • Experience with EMR and SageMaker.
  • 64 views · 3 applications · 30d

    Solution Architect (AI/ML, Azure)

    Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate

    Client

    A leading player in the European SaaS market, specializing in payroll and HR management solutions.
    The client is undertaking a strategic initiative to build an AI-powered assistant designed to transform HR and payroll processes through intelligent automation, natural language understanding, and data-driven insights.

     

    Position overview

    We are looking for a seasoned Solution Architect with expertise in Artificial Intelligence and Machine Learning to design and lead the implementation of advanced AI/ML solutions.
    You will collaborate with cross-functional teams including data scientists, engineers, product managers, and business stakeholders to create scalable, secure, and robust AI-driven systems that align with business goals.

     

    Responsibilities

    • Architect end-to-end AI/ML solutions aligned with business requirements and technical standards.
    • Lead the design of scalable machine learning platforms, pipelines, and infrastructure within Azure.
    • Collaborate closely with data scientists and engineers to translate AI research into production-ready solutions.
    • Define data integration strategies encompassing big data, real-time streaming, and structured data sources.
    • Evaluate and select appropriate AI/ML technologies, tools, and frameworks that fit project needs.
    • Ensure scalability, security, and compliance within AI/ML systems architecture.
    • Establish and evolve MLOps practices, including CI/CD for models, monitoring, and retraining workflows.
    • Provide technical leadership and mentorship to development teams, ensuring architectural consistency.
    • Maintain high-quality documentation and communicate designs clearly to both technical and non-technical stakeholders.
    • Stay current with AI/ML trends (LLMs, RAG, agentic systems) and apply them pragmatically to enhance solution efficiency.
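One small, concrete way to realize the "monitoring and retraining workflows" bullet is a drift-based retraining trigger. This is a hedged sketch: the AUC metric, tolerance, and floor below are assumptions for illustration, not the client's policy:

```python
def should_retrain(baseline_auc: float, current_auc: float,
                   tolerance: float = 0.02, min_auc: float = 0.70) -> bool:
    """Trigger retraining when the live metric degrades beyond the
    allowed tolerance relative to the baseline, or drops below an
    absolute quality floor."""
    degraded = (baseline_auc - current_auc) > tolerance
    below_floor = current_auc < min_auc
    return degraded or below_floor
```

In an Azure ML setup, a check like this would typically run on a schedule against logged evaluation metrics and kick off a retraining pipeline when it returns True.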

     

    Requirements

    • Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
    • 7+ years of experience in solution or software architecture, with a focus on AI/ML systems.
    • Proven experience designing and deploying AI/ML or data-driven solutions on Microsoft Azure (Azure ML, Data Factory, Databricks, Cognitive or OpenAI Services).
    • Strong technical expertise in ML frameworks (PyTorch, TensorFlow, Scikit-learn, Hugging Face).
    • Understanding of MLOps principles and cloud-native architecture.
    • Familiarity with big data technologies and orchestration tools (e.g., Spark, Kafka, Airflow).
    • Experience with containerization and orchestration (Docker, Kubernetes).
    • Excellent communication, documentation, and leadership skills.
    • Strong sense of responsibility, punctuality, and attention to detail in delivery.

     

    Nice to have

    • Experience working with SaaS or HR/payroll systems.
    • Knowledge of AI ethics, data privacy, and GDPR compliance.
    • Familiarity with large language models (LLMs), Retrieval-Augmented Generation (RAG), or agentic systems.
    • Background in knowledge graphs or semantic data modeling.
    • Prior experience with Agile development methodologies and cross-functional collaboration.
  • 31 views · 0 applications · 30d

    Senior Database Administrator (DBA)

    Hybrid Remote · Ukraine · 5 years of experience · B1 - Intermediate

    Client

     

    Our client is a leading global travel agency network specializing in luxury and experiential journeys.

     

     

    This role requires a hands-on technical leader with deep experience in SQL Server administration, automation, and Azure-native migrations. The DBA will provide operational support while driving key strategic initiatives for 2026.

     

    Responsibilities

    • Oversee the health, performance, and availability of all SQL Server and Snowflake databases.
    • Implement proactive monitoring for warehouse performance, resource consumption, and credit usage (Snowflake).
    • Configure, tune, and troubleshoot SQL Server Agent jobs, indexes, and partitions.
    • Manage and monitor backup integrity, recovery jobs, and log shipping.
    • Manage user accounts, roles, and permissions across SQL Server and Snowflake.
    • Ensure role-based access control (RBAC) is enforced consistently.
    • Audit and maintain compliance for handling of sensitive/PII data.
    • Align database access policies with organizational security standards and ISO 27001 practices.
    • Manage and troubleshoot FiveTran connections and integrations.
    • Work closely with Data Engineering teams to optimize ingestion pipelines into SQL and Snowflake.
    • Support cross-platform connectivity and reporting tools (Grafana, DataDog, Power BI).
    • Conduct in-depth query performance analysis using execution plans and dynamic management views (DMVs).
    • Optimize indexing strategies, partition schemes, and statistics maintenance.
    • Identify and resolve blocking, deadlocks, and high-cost queries.
    • Develop automation scripts (PowerShell, T-SQL, Python) to reduce manual intervention.
    • Standardize deployment of database schema, objects, and configurations using Infrastructure-as-Code (IaC) principles.
    • Document repeatable processes and maintain runbooks for operational continuity.
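The backup-integrity and automation responsibilities above often reduce to checks over backup metadata. As an illustrative sketch only (a simplified model of SQL Server's LSN chain, not a drop-in msdb query), a script can verify that log backups form an unbroken chain after the latest full backup, which is a precondition for point-in-time recovery:

```python
from dataclasses import dataclass

@dataclass
class Backup:
    kind: str       # "full", "diff", or "log"
    first_lsn: int  # simplified integer LSNs for illustration
    last_lsn: int

def log_chain_intact(backups: list[Backup]) -> bool:
    """True if the log backups after the most recent full backup form
    an unbroken LSN chain (no gaps)."""
    fulls = [b for b in backups if b.kind == "full"]
    if not fulls:
        return False
    base = max(fulls, key=lambda b: b.last_lsn)
    logs = sorted((b for b in backups
                   if b.kind == "log" and b.last_lsn > base.last_lsn),
                  key=lambda b: b.first_lsn)
    covered = base.last_lsn
    for log in logs:
        if log.first_lsn > covered:   # gap: a log backup is missing
            return False
        covered = max(covered, log.last_lsn)
    return True
```

A production version would read first_lsn/last_lsn from backup history tables and alert (rather than just return False) when the chain is broken.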

    Requirements

    • 10+ years as a Database Administrator with deep expertise in SQL Server 2012–2019/2022.
    • Proven experience migrating large-scale databases from IaaS SQL to Azure SQL Managed Instances / Azure SQL Database.
    • Hands-on knowledge of high-availability SQL configurations (AlwaysOn Availability Groups, Failover Clusters, Log Shipping, Replication).
    • Strong performance tuning skills: indexing strategies, query plan analysis & optimization, tempDB and transaction log management, deadlock detection/resolution
    • Deep understanding of backup and disaster recovery strategies (full/differential/log backups, point-in-time recovery, geo-redundant backups).
    • Scripting and automation expertise with T-SQL, PowerShell, and IaC tooling (Bicep, ARM, Terraform preferred).
    • Familiarity with data pipeline tools (FiveTran, ETL/ELT platforms) and integration troubleshooting.
    • Excellent documentation skills for runbooks, migration plans, and incident reviews.
    • Experience with Snowflake administration: warehouse management, RBAC implementation, monitoring and credit optimization, supporting data sharing and secure cleanroom use cases.

    Nice to have

    • Experience with Azure Database Migration Service (DMS) and Azure-native monitoring (Log Analytics, DataDog).
    • Background in BCDR planning and ISO 27001/SOC2 compliance alignment.
    • Prior experience in cloud-native modernization efforts (legacy to PaaS migrations).
    • Strong collaboration skills to partner with Data Engineering, SRE, and Security teams.

     

  • 79 views · 8 applications · 30d

    Senior Site Reliability Engineer (SRE) – AWS and GCP

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    Client

    Our client is revolutionizing the retail direct store delivery model by addressing key challenges like communication gaps, out-of-stocks, invoicing errors, and price inconsistencies. Through innovative technology and strong partnerships, they help boost sales, increase profits, and enhance customer loyalty.

     

    Position overview

    We are seeking a skilled Middle to Senior Site Reliability Engineer (SRE) with hands-on experience in both AWS and Google Cloud Platform (GCP) to join a fast-paced, innovative project team. This role requires proactive monitoring, automation, and optimization of cloud infrastructure to ensure high availability, scalability, and security of mission-critical retail solutions.

    The candidate should be available for at least four hours of overlapping work time with the New York time zone to ensure smooth collaboration and participation in team activities.

     

    Responsibilities

    • Design, build, and operate scalable and reliable systems on AWS and GCP cloud platforms
    • Develop and maintain automation scripts to improve deployment, monitoring, and incident response
    • Ensure system availability, latency, and overall reliability to meet service level objectives (SLOs)
    • Collaborate with development and operations teams to implement best practices for security, monitoring, and infrastructure management
    • Proactively troubleshoot and resolve infrastructure incidents and performance bottlenecks
    • Participate in on-call rotations and incident management processes
    • Continuously improve system architecture and automation to reduce manual intervention and improve efficiency
    • Support CI/CD pipelines and infrastructure as code (IaC) initiatives
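The SLO work above usually centers on error budgets. A minimal sketch of the arithmetic, assuming a request-based availability SLO (the 99.9% target in the example is illustrative):

```python
def error_budget_remaining(slo: float, total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget left in the current window.

    slo: availability target, e.g. 0.999 for "three nines".
    Returns 1.0 with no errors, 0.0 when the budget is exhausted,
    and a negative value when the SLO has been breached.
    """
    allowed_failures = (1.0 - slo) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return 1.0 - failed_requests / allowed_failures
```

A burn-rate alert is then just this quantity tracked over short and long windows, feeding the on-call rotation mentioned above.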

     

    Requirements

    • 4+ years of experience in Site Reliability Engineering, DevOps, or Cloud Engineering roles
    • Strong hands-on experience with AWS services (EC2, S3, VPC, Lambda, CloudWatch, IAM, etc.)
    • Proven expertise with Google Cloud Platform (Compute Engine, GKE, Cloud Storage, IAM, Stackdriver, etc.)
    • Skilled in scripting and automation tools (Python, Bash, Terraform, Ansible, or similar)
    • Experience managing container orchestration platforms such as Kubernetes or GKE
    • Familiarity with CI/CD tools such as Jenkins, GitLab CI, or CircleCI
    • Solid understanding of networking, security best practices, and cloud infrastructure design
    • Comfortable working in agile, collaborative team environments
    • Excellent communication skills and ability to work with distributed teams
    • Availability for a minimum of 4 hours overlap with New York time zone for meetings and collaboration
  • 42 views · 1 application · 30d

    Platform Engineer

    Hybrid Remote · Ukraine · 4 years of experience · B1 - Intermediate

    Client

     

    Our client is a regional leader in the transportation industry, with annual revenue exceeding €100 million, currently launching a new international ticket sales platform.

     

    Team

     

    Multiple independent teams contribute to different parts of the system. DataArt specialists are embedded across these teams. The mobile team is focused on shaping the future of the client's mobile app, enhancing performance and evolving its functionality.

     

    Position overview

     

    Our client is seeking skilled Platform Engineers to enhance the security and manageability of their legacy platform. The role focuses primarily on AWS IAM for both users and services, with implementation delivered through Terraform and automation supported in Node.js or Python. The design and technical approach are already defined, with work broken down into clear, deliverable tasks prior to onboarding. Contractors will be responsible for efficiently executing this plan while ensuring quality, reliability, and alignment with platform standards.

     

     

    Technology stack

     

    AWS (3+ years, live infrastructure)
    Terraform (IaC, tagging strategies, IAM)
    Software engineering (Python OR Node.js / JavaScript)

     

     

    Responsibilities

    • Deliver AWS IAM improvements and refactoring for users, roles, and services.
    • Implement infrastructure changes using Terraform following existing module patterns.
    • Apply least-privilege and secure-by-default principles consistently across accounts.
    • Contribute automation scripts or tooling in Node.js or Python as required.
    • Collaborate with the tech lead and delivery manager to achieve defined milestones.
    • Test, validate, and deploy IAM and Terraform changes through CI/CD pipelines.
    • Provide clear delivery updates and proactively highlight blockers or risks.
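Since the role centers on IAM least privilege, here is an illustrative sketch of the idea in the automation language the posting allows (Python). The bucket name is a placeholder and the real policies would live in the existing Terraform modules; this only shows the shape of a scoped policy and a wildcard check:

```python
def s3_read_only_policy(bucket: str) -> dict:
    """Minimal S3 read-only policy document scoped to one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }

def violates_least_privilege(policy: dict) -> bool:
    """Flag Allow statements that grant wildcard actions or resources."""
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if stmt.get("Effect") == "Allow" and ("*" in actions or "*" in resources):
            return True
    return False
```

A check like the second function is the kind of guardrail that can run in the CI/CD pipeline before Terraform applies IAM changes.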

    Requirements

    • Strong experience with AWS, especially IAM (roles, policies, permissions, service access).
    • Proven delivery experience with Terraform (modules, pipelines, multi-account setups).
    • Proficiency in Node.js or Python for automation or integration tasks.
    • Track record of delivering results within agile, delivery-focused teams.
    • Familiarity with CI/CD pipelines (e.g. GitHub Actions, TeamCity)
  • 34 views · 1 application · 25d

    Snowflake Platform Engineer/DevOps

    Hybrid Remote · Armenia, Georgia, Poland, Serbia, Ukraine · 4 years of experience · B1 - Intermediate

    Client

    Our client is a leading financial services business operating a comprehensive data marketplace that supports the client's entire business.

     

    Project overview

    The Snowflake platform administration team underpins the data marketplace ecosystem, including Snowflake, Confluent, dbt Labs, and Astronomer. The team is expanding to include new skills in Terraform-based platform configuration automation to better support the platform's operational needs. The role focuses on platform administration and operational stability.

     

    Position overview

    We are seeking an experienced Snowflake Platform Engineer/DevOps to join the team. The successful candidate will primarily deliver platform configuration automation using Terraform within a CI/CD environment, onboard new consuming applications, troubleshoot user issues, and ensure overall stability of the Snowflake environment. This role requires strong expertise in managing Snowflake environments, platform integrations, and infrastructure-as-code automation with Terraform.

    A $1,000 bonus will be provided after a successful trial period.

    This position requires working hours from 9:00 AM to 5:00 PM UK time zone.

     

    Technology stack

    Snowflake platform and RBAC management
    Terraform for infrastructure as code and change management within CI/CD pipelines
    Azure cloud services, including Azure Functions, security integrations, Private Link, and authentication mechanisms
    SaaS integrations with private link connectivity
    Data ecosystem, including Confluent, dbt Labs, and Astronomer

     

    Responsibilities

    • Manage the full lifecycle of Snowflake environments from account setup through production deployment
    • Administer Snowflake RBAC, storage integrations, and application integrations
    • Deliver platform configuration automation using Terraform in a CI/CD environment, supporting multiple SaaS capabilities
    • Onboard new consumer applications onto the platform and provide support and troubleshooting for platform issues
    • Collaborate across teams to maintain platform stability and reliability
    • Learn and apply new technologies and best practices to enhance team capability in automation and platform administration
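Snowflake RBAC administration boils down to a role-grant hierarchy through which privileges are inherited, so "who can act as what" is a reachability question. A hedged sketch (the role names and grants below are made up, not the client's model):

```python
# Maps each role to the set of roles granted to it (i.e. roles it inherits).
GRANTS = {
    "SYSADMIN": {"ANALYTICS_ADMIN"},
    "ANALYTICS_ADMIN": {"ANALYST_RO"},
    "ANALYST_RO": set(),
}

def inherits_role(role: str, target: str,
                  grants: dict[str, set[str]] = GRANTS) -> bool:
    """True if `role` can use `target`'s privileges via the grant chain."""
    seen, stack = set(), [role]
    while stack:
        current = stack.pop()
        if current == target:
            return True
        if current in seen:
            continue
        seen.add(current)
        stack.extend(grants.get(current, ()))
    return False
```

In practice the grant map would be read from SHOW GRANTS output or the Terraform state, and a check like this helps audit that the hierarchy stays acyclic and intentional.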

    Requirements

    • Expert experience managing platform changes using Terraform within CI/CD pipelines
    • Minimum 3 years of experience managing Snowflake environments, including implementation and production support
    • Strong skills with Snowflake RBAC, storage, application integrations, and knowledge of Snowpark Container Services
    • Minimum 3 years of strong Azure knowledge, including Azure Functions, security integrations, Private Link, and authentication mechanisms
    • Experience integrating SaaS capabilities using a private link
    • Ability to troubleshoot and resolve platform and integration issues effectively
    • Strong communication skills and ability to collaborate across technical and business teams

    Nice to have

    • Experience working within complex, multi-technology ecosystems
    • Ability to work autonomously and as part of a team to manage platform stability and support business needs
    • Willingness to learn new automation technologies and contribute to team growth
  • 24 views · 1 application · 24d

    AI Engineer

    Hybrid Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

     

    We are seeking an AI Engineer with a strong software engineering background, proficient in Python and modern cloud-native technologies. The ideal candidate has hands-on experience with Snowflake, BigQuery, or AWS data platforms and solid expertise in data engineering, including ETL, Spark, Spark Streaming, Jupyter Notebooks, data quality, and medallion architecture and design.

    Experience with machine learning best practices such as model training, evaluation, and weighting is essential.

     

    Responsibilities

    • Design, develop, and deploy scalable AI and machine learning models.
    • Build and maintain data pipelines and ETL processes using Spark, Spark Streaming, and related tools.
    • Ensure high data quality and implement medallion architecture design principles.
    • Collaborate with data scientists, engineers, and product teams to translate requirements into technical solutions.
    • Implement best practices for model training, evaluation, and performance tuning.
    • Develop, integrate, and maintain AI agents and conversational AI solutions where applicable.
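The medallion layering mentioned above (bronze raw data, silver cleaned and conformed, gold business aggregates) can be illustrated with a deliberately tiny, Spark-free sketch; field names and rules are invented for the example:

```python
def to_silver(bronze_rows: list[dict]) -> list[dict]:
    """Bronze -> silver: drop malformed rows, normalize types,
    and deduplicate by id (last record wins)."""
    silver = {}
    for row in bronze_rows:
        if not row.get("id") or row.get("amount") is None:
            continue  # quarantine malformed records in a real pipeline
        silver[row["id"]] = {"id": row["id"], "amount": float(row["amount"])}
    return list(silver.values())

def to_gold(silver_rows: list[dict]) -> dict:
    """Silver -> gold: a business-level aggregate."""
    return {"total_amount": sum(r["amount"] for r in silver_rows),
            "row_count": len(silver_rows)}
```

The same bronze/silver/gold contracts carry over directly to Spark DataFrames or Snowflake tables; only the execution engine changes.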

    Requirements

    • Strong software engineering skills (Python, cloud-native stacks)
    • Hands-on experience with Snowflake, BigQuery, or AWS data platforms
    • Solid data engineering experience (ETL, Spark, Spark Streaming, Jupyter Notebooks, data quality, medallion architecture)
    • Knowledge of machine learning best practices (model training, evaluation, weighting)

    Nice to have

    • Experience building AI agents (Langchain, Langgraph, OpenAI Agents, PydanticAI)
    • Experience building conversational AI agents (AI chats, Evaluation-Driven Development)
  • 18 views · 1 application · 24d

    AI Tech Lead

    Hybrid Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

     

    We are looking for a skilled AI Tech Lead to guide our AI development efforts, mentor a talented team, and deliver innovative machine learning and AI solutions that drive business value.

     

    Responsibilities

    • Lead and mentor a team of AI engineers and data scientists.
    • Oversee the design, development, and deployment of AI and machine learning models.
    • Collaborate with stakeholders to define project goals and align AI solutions with business needs.
    • Ensure best practices in AI development, including code quality, testing, and documentation.
    • Drive innovation by researching and applying the latest AI technologies and techniques.
    • Manage project timelines, priorities, and deliverables within an agile environment.
    • Facilitate cross-team communication and collaboration.
    • Monitor model performance and lead continuous improvements.

    Requirements

    • Experience leading teams or projects as a Tech Lead / Senior Engineer
    • Strong software engineering background (Python, modern cloud-native stacks)
    • Hands-on experience with Snowflake, BigQuery, or AWS data platforms
    • Solid experience with data engineering (ETL, Spark, Jupyter Notebooks, medallion architecture and design)
    • Experience building conversational AI agents (AI chats, Evaluation-Driven Development)
    • Understanding of constraint solving (SAT/CP-SAT) and/or optimization algorithms
    • Experience with machine learning best practices (model training, evaluation, weighting)
    • Solid API integration experience (REST, gRPC, messaging systems)
    • Excellent communication and leadership skills

    Nice to have

    • Knowledge of agentic AI patterns (human-in-the-loop, ReAct)
    • Experience building AI agents with frameworks like Langchain, Langgraph, OpenAI Agents, PydanticAI
  • 16 views · 0 applications · 22d

    Data Architect

    Hybrid Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

    Client

    Our client is a leading global travel agency network specializing in luxury and experiential journeys. They seek to develop a unified API framework that ensures secure, flexible, and seamless system integration.

     

     

    As a Data Architect, you will design and implement scalable, secure, and high-performance data architectures that support business needs. You will leverage cloud platforms, particularly Azure Data Services, and data warehousing solutions such as Snowflake to build robust data pipelines, ensure data quality, and optimize data storage and processing. Collaborating closely with data engineers, analysts, and business stakeholders, you will translate complex requirements into effective technical solutions.

    You will also define data standards and governance policies, lead data migration and modernization initiatives, and provide technical leadership and mentorship to the team. Your work will drive data-driven decision-making and enable the organization to efficiently manage and utilize its data assets.

     

    Responsibilities

    • Design and implement robust, scalable, and secure data architectures using Azure Data Services (e.g., Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake).
    • Architect and optimize Snowflake data warehouses for performance, scalability, and cost-efficiency.
    • Develop and maintain complex SQL and T-SQL scripts for data transformation, integration, and reporting.
    • Collaborate with data engineers, analysts, and business stakeholders to translate business requirements into technical solutions.
    • Define and enforce data architecture standards, best practices, and governance policies.
    • Lead data migration and modernization initiatives from legacy systems to cloud-based platforms.
    • Evaluate and recommend new tools, technologies, and frameworks to improve data infrastructure.
    • Mentor junior team members and provide technical leadership across projects.

    Requirements

    • Azure Data Services (Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake Storage)
    • Cloud Data Architecture & Design (especially Azure cloud)
    • Snowflake Data Warehouse Design & Optimization
    • Advanced SQL and T-SQL Programming
    • Data Transformation & ETL/ELT Processes
    • Data Integration Techniques
    • Data Modeling (Star Schema, Snowflake Schema, Normalization & Denormalization)
    • Technical Leadership and Mentorship
    • Cross-Functional Collaboration with Business and Technical Teams
    • Requirements Gathering and Technical Solution Design
    • Documentation and Standardization of Data Architecture
    • Problem Solving and Analytical Thinking
    • Work schedule alignment until 5 PM UTC-3 (exclusive)
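As a rough illustration of the dimensional-modeling skill listed above (star schema with fact and dimension tables), here is a minimal pure-Python sketch. It is not taken from the project; all table names, columns, and values are hypothetical, and a real implementation would live in Snowflake or Azure SQL rather than Python dictionaries.

```python
# Minimal star-schema sketch: a fact table referencing two dimension
# tables by surrogate key, plus a denormalizing aggregation for
# reporting. All names and values are illustrative.

dim_customer = {
    1: {"customer_name": "Acme Corp", "region": "EMEA"},
    2: {"customer_name": "Globex", "region": "APAC"},
}

dim_date = {
    20240101: {"year": 2024, "month": 1},
    20240201: {"year": 2024, "month": 2},
}

fact_sales = [
    {"customer_key": 1, "date_key": 20240101, "amount": 100.0},
    {"customer_key": 2, "date_key": 20240101, "amount": 250.0},
    {"customer_key": 1, "date_key": 20240201, "amount": 75.0},
]

def sales_by_region(facts, customers):
    """Aggregate fact rows by a dimension attribute (region),
    i.e. the join a star schema is designed to make cheap."""
    totals = {}
    for row in facts:
        region = customers[row["customer_key"]]["region"]
        totals[region] = totals.get(region, 0.0) + row["amount"]
    return totals

print(sales_by_region(fact_sales, dim_customer))
```

The same shape maps directly onto a fact table with foreign keys to conformed dimensions in a warehouse.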
  • Β· 26 views Β· 1 application Β· 22d

    Data Engineer with Expertise in SQL Development and Snowflake

    Hybrid Remote Β· Ukraine Β· 8 years of experience Β· B2 - Upper Intermediate

    Project overview

     

    Our client is a leading global travel agency network specializing in luxury and experiential journeys. They are seeking to strengthen their relational database and Azure data platform through enhanced design, architecture, development, and the creation of new features.

     

    Position overview

     

    We seek a skilled Data Engineer with expertise in SQL development and Snowflake. This role focuses on building data ingestion pipelines, ensuring data integrity, and developing service layers that support external users.

     

    Technology stack

     

    Azure Cloud, SQL / T-SQL, Python, Snowflake, SQL Server

     

    Responsibilities

    • Analyze, plan, develop, deploy, and manage large, scalable, distributed data systems.
    • Develop automated tests for unit, integration, regression, performance, and build verification.
    • Understand and apply advanced principles of entity-relationship model design, proper data typing practices, index management, data management, and data security.
    • Research and prototype new product and database features, design, and architecture ahead of mainstream development.
    • Implement monitoring and logging solutions to ensure reliability and traceability of data flows.
    • Ensure security, scalability, and performance of data services exposed to external users.
    • Review designs, code, and test plans of other developers, providing recommendations for improvement or optimization.
    • Develop and maintain microservices and stateless architectures.
    • Follow defined software development lifecycle best practices.
    • Collaborate with management and stakeholders to accurately identify requirements and establish priorities.
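To make the "ensuring data integrity" responsibility above concrete, here is a toy sketch of the MERGE-style idempotent upsert pattern commonly used in Snowflake ingestion pipelines, simulated in plain Python so the idempotency property is visible. All names are hypothetical; a real pipeline would use Snowflake's `MERGE` statement.

```python
# MERGE-style upsert simulated with an in-memory table keyed by
# primary key: new rows are inserted, existing rows are updated,
# and replaying the same batch leaves the target unchanged.

def merge_upsert(target, incoming, key="id"):
    """Insert-or-update rows by primary key, mimicking SQL MERGE."""
    for row in incoming:
        target[row[key]] = dict(row)
    return target

target_table = {}
batch = [
    {"id": 1, "status": "booked"},
    {"id": 2, "status": "pending"},
]

merge_upsert(target_table, batch)
merge_upsert(target_table, batch)          # replay: no duplicates
merge_upsert(target_table, [{"id": 2, "status": "confirmed"}])

print(target_table)
```

Idempotent loads like this are what let a pipeline safely retry a failed batch without corrupting the target.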

    Requirements

    • More than 8 years of experience designing and developing solutions with SQL, including 3 years specializing in Snowflake cloud data warehouses, along with extensive work on other relational and cloud-based databases.
    • Intermediate-level knowledge of developing solutions using Python and REST APIs.
    • Experience in developing relational and non-relational data platforms/data pipelines using Azure cloud solutions.
    • Familiarity with ETL/ELT processes, data modeling, and data warehousing concepts.
    • Proficiency with Git and CI/CD tools (e.g., Azure DevOps).
    • Desire and ability to work as part of a team with minimal supervision in a results-oriented, fast-paced, dynamic environment.
    • Time zone alignment until 5 PM UTC-3 (exclusive).
    • Good spoken English.

    Nice to have

    • Database architecture and design experience.
    • Advanced Snowflake experience.
    • Advanced knowledge of automated test creation.
    • Experience working with international clients.
    • Understanding of Agile development methodologies.
    • Microsoft certifications.
    • Experience with the travel domain.
    • Team player.
  • Β· 21 views Β· 3 applications Β· 22d

    Senior MLOps Engineer (AWS)

    Full Remote Β· Ukraine Β· 4 years of experience Β· B2 - Upper Intermediate

    Project overview

    The project focuses on enhancing the machine learning capabilities of a leading digital marketing company by designing and implementing robust MLOps practices and scalable ML architecture. It aims to streamline model deployment, monitoring, and maintenance to ensure efficient and reliable machine learning workflows. The initiative also leverages cloud infrastructure and modern MLOps tools to support continuous integration and delivery of ML solutions.

     

    Position overview

    We are seeking an MLOps Engineer with strong experience in building, deploying, and maintaining machine learning pipelines on AWS. You will work closely with Data Scientists and Engineers to ensure scalable, reliable, and automated management of the model lifecycle.

     

    Responsibilities

    • Create and manage ML infrastructure using AWS services
    • Develop, optimize, and maintain end-to-end ML pipelines
    • Automate model training, evaluation, deployment, and monitoring processes
    • Ensure adherence to best practices for CI/CD, observability, and security

     

    Requirements

    • 3+ years of experience in MLOps, Data Engineering, or related fields
    • Strong proficiency with AWS (SageMaker, Lambda, S3, ECS/EKS, CloudWatch, Step Functions)
    • Proficiency in Python and familiarity with ML frameworks
    • Hands-on experience with CI/CD tools (GitHub Actions, GitLab, Jenkins, etc.)
    • Solid understanding of containerization (Docker) and orchestration (Kubernetes)
    • Knowledge of monitoring and logging tools

     

    Nice to have

    • Experience with feature stores
    • Knowledge of Infrastructure as Code tools (Terraform or CloudFormation)
    • Experience building data pipelines and working with stream processing
    • Familiarity with ML model governance and drift detection
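As a small illustration of the drift detection mentioned above, here is a self-contained Python sketch of the Population Stability Index (PSI), a common drift metric in production ML monitoring. The bin values and the 0.2 threshold are conventional rules of thumb, not project requirements.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (lists of proportions that each sum to 1). A PSI above ~0.2 is a
    common rule-of-thumb signal of significant drift."""
    total = 0.0
    for p, q in zip(expected, actual):
        p = max(p, eps)  # guard against log(0)
        q = max(q, eps)
        total += (q - p) * math.log(q / p)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time feature bins
identical = [0.25, 0.25, 0.25, 0.25]  # no drift
shifted = [0.10, 0.20, 0.30, 0.40]    # skewed serving distribution

print(psi(baseline, identical))  # ~0.0
print(psi(baseline, shifted))    # > 0.2, would trigger an alert
```

In practice a check like this would run inside the monitoring stack (e.g. a scheduled job writing to CloudWatch) rather than as a standalone script.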
  • Β· 72 views Β· 9 applications Β· 18d

    Quantitative Developer

    Hybrid Remote Β· Ukraine Β· 3 years of experience Β· B1 - Intermediate

    Client

     

    Our client is a fintech company that provides institutional investors with access to digital asset markets through a regulated and reliable trading platform. It offers tools for algorithmic execution, OTC trading, direct market access, lending, custody, and risk management. By combining advanced technology with professional support they help institutions trade and manage digital assets efficiently and securely.

     

     

    Position overview

     

    We’re looking for a Quantitative Developer with a strong mix of analytical and programming skills. The role involves developing and optimizing trading systems, working closely with quantitative researchers and developers to deliver robust, data-driven solutions.

     

     

    Responsibilities

    • Design and maintain mathematical and statistical financial models.
    • Develop algorithmic trading tools and infrastructure.
    • Deliver solutions in Java and Python (main languages).
    • Use SQL for data handling and analysis.
    • Collaborate with quantitative teams to implement and refine trading strategies.
    • Ensure code quality through testing, debugging, and performance tuning.
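To give a flavor of the statistical modeling listed above, here is a minimal Python sketch of a rolling z-score, a building block of mean-reversion signals that quantitative developers often prototype. The window size and price series are invented for illustration; this is not a trading recommendation or the client's actual model.

```python
import statistics

def rolling_zscore(prices, window):
    """z-score of each price against its trailing window: how many
    standard deviations the latest tick sits from the recent mean."""
    scores = []
    for i in range(window, len(prices)):
        hist = prices[i - window:i]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist)
        scores.append(0.0 if sd == 0 else (prices[i] - mu) / sd)
    return scores

prices = [100, 101, 99, 100, 100, 108]   # last tick spikes upward
z = rolling_zscore(prices, window=5)
print(z)  # a large positive z-score flags the spike
```

Production versions of such signals are typically written in Java or vectorized Python against real-time market data rather than lists.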

     

    Requirements

    • 3–5+ years of experience in quantitative development.
    • Strong skills in Java or Python (must be open to working with Java).
    • Good knowledge of SQL and relational databases.
    • Quantitative, statistical, or math-oriented background (e.g., statistics, applied math, engineering).
    • Understanding of financial markets and quantitative trading concepts.

     

    Nice to have

    • Knowledge of derivatives instruments pricing and modeling.
    • Experience with crypto, real-time data, or machine learning.
    • Familiarity with Git and modern development workflows.
  • Β· 38 views Β· 1 application Β· 8d

    AI/ML Engineer with GenAI Experience

    Bulgaria, Latvia, Poland, Serbia, Ukraine Β· 4 years of experience Β· B2 - Upper Intermediate

    Client

    Our client is a top legal recruiting firm creating a data-driven platform that unites news, analytics, real-time case tracking, profiles, salary data, and eventsβ€”powered by AI for actionable insights.

     

    Project overview

    This platform combines news, analytics, real-time tracking of deals and cases, interconnected profiles of firms and lawyers, salary information, event schedules, and moreβ€”all enhanced with AI to provide valuable, actionable insights.

     

    Position overview

    We are looking for a talented Senior AI/ML Engineer to join our team and help build advanced NLP and GenAI-powered features on our platform. You will work closely with product and data teams to develop intelligent solutions including entity recognition and linking, AI-powered content summarization, and scalable ML pipelines on AWS.

     

    Responsibilities

    • Develop and deploy AI/ML models leveraging GenAI (OpenAI API) for natural language understanding, summarization, and insights extraction
    • Build named entity recognition (NER) and entity linking solutions tailored to legal domain data (law firms, cases, deals, people)
    • Implement scalable NLP pipelines for processing news, legal documents, and transaction data from multiple sources
    • Design, train, and evaluate ML models to improve search, classification, and recommendation features
    • Collaborate with AWS teams to deploy and maintain models using SageMaker, manage datasets on S3, and ensure reliable operation and scalability
    • Integrate AI capabilities seamlessly into the web platform, working alongside front-end and backend engineers
    • Continuously research new AI models and NLP techniques relevant to legal data and user experience
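As a greatly simplified illustration of the entity-linking responsibility above, here is a self-contained Python sketch that resolves normalized surface forms to canonical entity IDs via an alias dictionary. A real system on this platform would combine NER models and GenAI (e.g. the OpenAI API) with such a lookup; all firm names and IDs here are fictional.

```python
# Dictionary-based entity linking: normalize raw mentions, then map
# them to canonical IDs. Unknown mentions resolve to None and would
# be routed to a model-based fallback in a real pipeline.

ALIASES = {
    "acme law llp": "firm:001",
    "acme law": "firm:001",
    "smith & jones": "firm:002",
}

def normalize(mention):
    """Lowercase and collapse whitespace in a surface form."""
    return " ".join(mention.lower().split())

def link_entities(mentions):
    """Map raw mentions to canonical entity IDs (None if unknown)."""
    return {m: ALIASES.get(normalize(m)) for m in mentions}

result = link_entities(["Acme  Law LLP", "Smith & Jones", "Unknown Firm"])
print(result)
```

The alias table is the piece that would be backed by interconnected firm/lawyer profiles in the actual platform.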

     

    Requirements

    • Strong experience with GenAI technologies, especially OpenAI’s API for advanced NLP tasks
    • Proven expertise in NER and entity linking techniques, preferably on domain-specific data
    • Solid background in NLP workflows (tokenization, embedding, text classification, summarization)
    • Comfortable with Python ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers)
    • Hands-on experience deploying ML models and workflows on AWS (SageMaker, S3 storage, Lambda, etc.)
    • Familiarity with legal domain data or interest in legal tech is a strong plus
    • Good software engineering practices and version control (Git)
    • Excellent problem-solving skills and ability to work cross-functionally
    • Strong communication skills and eagerness to learn emerging AI/ML technologies

       

    Nice to have

    • Experience with knowledge graphs or graph databases for entity linking
    • Prior work on real-time data ingestion and analytics pipelines
    • Experience with containerization (Docker, Kubernetes)

     


     

  • Β· 16 views Β· 0 applications Β· 8d

    Azure Data Engineer

    Hybrid Remote Β· Ukraine Β· 5 years of experience Β· B2 - Upper Intermediate

    Client

     

    The client is a premier institution offering world-class postgraduate business education, including MBA, Executive MBA, and specialised finance and management programs. Its mission is to transform global business practices. The client is globally recognized for its rigorous academics, exceptional faculty, and cutting-edge research. It consistently ranks among the world’s top business schools, securing high positions in global MBA rankings. The client fosters leadership and innovation, equipping students for impactful careers in the international business landscape. Join a great company, not merely an individual project.
     

    Project overview

     

    The Client recently completed their Data & AI Strategy and Roadmap, establishing a foundation for a data-driven future. The assessment phase reviewed the current state of Data and AI, analyzing technology, processes, resources, and structure, and provided strategic recommendations aligned with the School’s 5-year transformation plan. Building on this, the discovery phase focused on data governance, use cases, business drivers, service offerings, technology, and roles, delivering further strategic insights. With this groundwork complete, The Client is now entering the delivery phase, which includes implementing a Data Platform, a first use case, and establishing Data Governance roles, processes, and technologies.

     

    Responsibilities
     

    • Design, build, and optimize complex ETL/ELT pipelines using Azure Data Factory, including mapping data flows and orchestration.
    • Develop scalable data processing solutions using Azure Synapse Analytics (dedicated SQL pools, Spark pools) for enterprise-grade analytics.
    • Implement and maintain Medallion architecture (Bronze/Silver/Gold) on Azure Data Lake Storage (ADLS Gen2) with proper data organization, security, and governance.
    • Build large-scale data transformation workflows using Azure Databricks/Synapse Spark with PySpark/Python.
    • Develop and integrate Azure Functions (Python/C#) to enable event-driven processing and custom pipeline logic.
    • Implement CI/CD pipelines for data platforms using Azure DevOps Pipelines or GitHub Actions.
    • Automate cross-environment deployments using IaC tools (ARM, Bicep, Terraform).
    • Optimize SQL queries, database objects, and Spark jobs for performance and reliability.
    • Design dimensional models (Star/Snowflake schemas) and develop production-grade data models.
    • Collaborate with cross-functional teams to clarify requirements, communicate architecture decisions, and resolve data issues.
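The Medallion (Bronze/Silver/Gold) layering mentioned above can be sketched in a few lines of plain Python, using in-memory records instead of ADLS Gen2 and Spark, just to show the layered refinement. Field names and values are illustrative, not the client's schema.

```python
# Toy Medallion flow: raw Bronze records are cleansed and
# deduplicated into Silver, then aggregated into a business-level
# Gold table.

bronze = [  # raw ingested records, duplicates and bad rows included
    {"id": "1", "course": "MBA", "fee": "100"},
    {"id": "1", "course": "MBA", "fee": "100"},   # duplicate
    {"id": "2", "course": "EMBA", "fee": "bad"},  # unparseable fee
    {"id": "3", "course": "MBA", "fee": "250"},
]

def to_silver(rows):
    """Deduplicate by id and enforce types (cleansed layer)."""
    seen, silver = set(), []
    for r in rows:
        try:
            fee = float(r["fee"])
        except ValueError:
            continue                      # quarantine bad records
        if r["id"] not in seen:
            seen.add(r["id"])
            silver.append({"id": r["id"], "course": r["course"], "fee": fee})
    return silver

def to_gold(rows):
    """Business-level aggregate (revenue per course)."""
    gold = {}
    for r in rows:
        gold[r["course"]] = gold.get(r["course"], 0.0) + r["fee"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

In the actual platform each layer would be a zone in the data lake, with the transformations expressed as PySpark jobs orchestrated by Azure Data Factory.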
       

    Requirements
     

    • Proven expertise with core Azure data engineering services:
      • Azure Data Factory
      • Azure Synapse Analytics (SQL & Spark)
      • Azure Data Lake Storage Gen2
      • Azure Databricks / Synapse Spark
      • Azure Functions
    • Strong command of Python and PySpark for data processing and automation.
    • Advanced SQL skills: complex queries, stored procedures, optimization, and performance tuning.
    • Solid understanding of data modeling, including design of facts, dimensions, and analytical structures.
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps or GitHub Actions).
    • Practical experience with Infrastructure-as-Code for multi-environment deployment (ARM, Bicep, Terraform).
    • Excellent Git knowledge: branching strategies, pull requests, code reviews.
    • Strong problem-solving abilities for debugging pipelines, resolving deployment issues, and optimizing performance.
    • Strong communication skills for effectively explaining technical solutions to technical and non-technical stakeholders.
       

    Nice to have
     

    • Experience with the Microsoft Fabric ecosystem, including:
      • OneLake & Lakehouse
      • Fabric Data Engineering / Warehouse
      • Notebooks & Spark jobs
    • Power BI development experience:
      • Building dashboards and reports
      • Advanced DAX
      • Optimized data modeling
      • Row-Level Security (RLS) setup
    • Microsoft certifications:
      • DP-203: Azure Data Engineer Associate
      • DP-600: Fabric Analytics Engineer Associate