DataArt

Joined in 2013
49% answers
DataArt is a global software engineering firm. With over 20 years of experience, teams of highly trained engineers around the world, deep industry sector knowledge, and ongoing technology research, we help clients create custom software that improves their operations and opens new markets. DataArt started out as a company of friends and has a distinctive culture that sets it apart from other IT outsourcers:
- Flat structure. There are no "bosses" and "subordinates".
- We hire people to the company, not to a project. If your project (or your role in it) ends, you move to another project or to paid "Idle" time.
- Flexible schedule, the ability to change projects, work from home, and try yourself in different roles.
- Minimal bureaucracy and micromanagement, and convenient corporate services.
  • · 30 views · 4 applications · 19d

    System Administrator

    Office Work · Ukraine (Kharkiv) · 1 year of experience · A1 - Beginner

    Position overview

     

    DataArt is a global software engineering firm and a trusted technology partner for market leaders and visionaries. Our world-class team designs and engineers data-driven, cloud-native solutions to deliver immediate and enduring business value.

    We promote a culture of radical respect, prioritizing your personal well-being as much as your expertise. We stand firmly against prejudice and inequality, valuing each of our employees equally.

    We respect the autonomy of others before all else, offering remote, onsite, and hybrid work options. Our Learning and development centers, R&D labs, and mentorship programs encourage professional growth.

    Our long-term approach to collaboration with clients and colleagues alike focuses on building partnerships that extend beyond one-off projects. We provide the ability to switch between projects and technology stacks, creating opportunities for exploration through our learning and networking systems to advance your career.

    We are searching for a specialist for the position of System Administrator for our local IT helpdesk and support team.

     

    Technology stack

     

    Operating Systems: Microsoft Windows, macOS, Linux
    Network Technologies: AD, DNS, DHCP, NAT, VPN, VLAN, Group Policies

     

    Responsibilities

    • Configure and deploy hardware and software for new employees
    • Ensure seamless connectivity and troubleshoot network issues
    • Set up printers, scanners, and other peripherals
    • Monitor system performance and promptly address any disruptions
    • Provide efficient technical support to end-users, troubleshooting issues and answering queries
    • Maintain accurate records of system configurations and changes
    • Regularly update software, perform backups, and manage incidents
    • Implement security measures and adhere to industry standards

    Requirements

    • Excellent understanding of hardware and software troubleshooting
    • Familiarity with Windows Server and desktop operating systems
    • Strong understanding of Active Directory (AD), DNS, DHCP, NAT, VPN, VLAN, and Group Policies
    • Comfortable working with Windows, macOS, and Linux
    • Effective communication with internal customers and colleagues
    • Initiative, ability to multitask, and desire to collaborate
    • Excellent verbal and written communication skills

    Nice to have

    • Experience with VMware ESXi
    • Proficiency in PowerShell scripts for task automation
    • Experience managing Windows Servers
    • Managing access rights and File Server service quotas
    • Experience with Zabbix and Syslog
    • CCNA, MCSA, or other relevant vendor certifications are highly valued

     

    We offer

     

    Vacation

    20 paid days

     

    Health insurance

    We help you take out an insurance policy for yourself and your loved ones

     

    Sick pay

    10 days without a doctor's note; additional days require documentation

     

    Time off for state holidays

    According to the official calendar

     

    Pleasant environment

     

    Comfort service

    Solving technical and everyday problems at work

  • · 53 views · 2 applications · 21d

    System Administrator

    Office Work · Ukraine (Dnipro) · 1 year of experience · B1 - Intermediate

    Client

     

    DataArt is a global software engineering firm and a trusted technology partner for market leaders and visionaries. Our world-class team designs and engineers data-driven, cloud-native solutions to deliver immediate and enduring business value.

     

    Technology stack

    Operating Systems: Microsoft Windows, macOS, Linux
    Network Technologies: AD, DNS, DHCP, NAT, VPN, VLAN, Group Policies

     

    Responsibilities

    • Workstation Setup & Maintenance: Configure and deploy hardware and software for new employees
    • Network Configuration & Support: Ensure seamless connectivity and troubleshoot network issues
    • Peripheral Device Management: Set up printers, scanners, and other peripherals
    • System & Infrastructure Monitoring: Monitor system performance and promptly address any disruptions
    • User Support: Provide efficient technical support to end-users, troubleshooting issues and answering queries
    • Technical Documentation: Maintain accurate records of system configurations and changes
    • Routine Maintenance: Regularly update software, perform backups, and manage incidents
    • Security & Compliance: Implement security measures and adhere to industry standards

    Requirements

    • Computer Proficiency: Excellent understanding of hardware and software troubleshooting
    • Microsoft Windows Expertise: Familiarity with Windows Server and desktop operating systems
    • Network Knowledge: Strong understanding of Active Directory (AD), DNS, DHCP, NAT, VPN, VLAN, and Group Policies
    • Cross-Platform Skills: Comfortable working with Windows, macOS, and Linux
    • Soft Skills: Effective communication with internal customers and colleagues
    • Adaptability & Teamwork: Initiative, ability to multitask, and desire to collaborate
    • English Proficiency: Excellent verbal and written communication skills

    Nice to have

    • Virtualization: Experience with VMware ESXi
    • Scripting: Proficiency in PowerShell scripts for task automation
    • Server Administration: Experience managing Windows Servers
    • Access Management: Managing access rights and File Server service quotas
    • Monitoring Tools: Experience with Zabbix and Syslog
    • Certifications: CCNA, MCSA, or other relevant vendor certifications are highly valued
  • · 25 views · 0 applications · 29d

    Senior Cloud Infrastructure Architect

    Hybrid Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Our client is a leading Fortune 500 financial technology company that provides comprehensive payment solutions and financial services across multiple continents. They process billions of transactions annually and serve millions of customers worldwide.

     

    You'll collaborate with a world-class team of senior data scientists, ML engineers, and technology consultants from leading organizations in the fintech and cloud computing space. This diverse group brings together deep technical expertise, industry knowledge, and proven experience delivering mission-critical solutions at enterprise scale.

     

    We are seeking an experienced Senior Cloud Infrastructure Architect with deep expertise in AI/ML infrastructure implementations. This role is designed for seasoned cloud architecture professionals who have successfully designed and deployed enterprise-scale AI environments - not for those simply exploring cloud technologies.

     

    Technology stack

     

    AWS Bedrock, SageMaker, and comprehensive AI/ML service ecosystem
    Vector databases and advanced RAG architectures
    Enterprise-scale data processing and real-time model deployment systems
    Automated CI/CD pipelines specifically designed for ML workflows

     

     

    Responsibilities

    • Design and architect AI/ML environments using AWS Bedrock, SageMaker, and vector database infrastructure
    • Implement enterprise-grade networking solutions for AI workloads and data processing pipelines
    • Architect and deploy database and storage solutions optimized for GenAI applications
    • Develop Infrastructure as Code (IaC) using Terraform and CloudFormation for AI platform deployment
    • Design and implement serverless architectures supporting scalable AI/ML workflows
    • Establish security best practices and compliance frameworks for AI infrastructure
    • Optimize performance and tuning of AI environments for enterprise-scale operations
    • Ensure high availability, disaster recovery, and scalability of AI platform infrastructure

    Requirements

    • Hands-on experience architecting and deploying AI environments using AWS Bedrock, SageMaker, and vector databases
    • Advanced knowledge of cloud networking concepts, VPC design, and secure connectivity for AI workloads
    • Proven experience with database and storage deployments optimized for AI/ML applications and large-scale data processing
    • Deep understanding of security best practices and implementation in regulated financial services environments
    • Proficiency in developing IaC solutions using Terraform and CloudFormation for AI platform automation
    • Hands-on experience architecting and deploying serverless solutions supporting AI/ML workflows
    • Demonstrated skills in performance tuning and optimization of cloud environments at enterprise scale
    • Proven track record of designing and managing mission-critical AI infrastructure in production environments
    • 7+ years of cloud infrastructure experience, including 3+ years dedicated to AI/ML infrastructure architecture
    • Availability during US Eastern Time (ET) business hours to collaborate with onsite team

    Nice to have

    • Bachelor's degree in Computer Science, Engineering, Information Technology, or related technical field (Master's preferred)
    • AWS certifications (Solutions Architect Professional, Security Specialty, etc.)
    • Experience with financial services infrastructure and compliance requirements
    • Knowledge of regulatory frameworks (PCI DSS, SOX, etc.) and their infrastructure implications

     

     

    Working Time-zone

    US/Canada (GMT-7)

  • · 95 views · 21 applications · 28d

    Middle Platform Engineer

    Ukraine · 3 years of experience · B2 - Upper Intermediate

    Client

     

    Our client is a regional leader in the transportation industry, with annual revenue exceeding €100 million, currently launching a new international ticket sales platform.

     

    Project overview

     

    The client engages users through a mobile app, responsive web application, and call center. The system is complex and must comply with numerous regulations.
    The system is built on a microservice architecture with continuous integration. Hundreds of components run independently on AWS, increasing modularity and optimizing service usage.

     

    Team

     

    Multiple independent teams contribute to different parts of the system. DataArt specialists are embedded across these teams. The mobile team is focused on shaping the future of the client's mobile app, enhancing performance and evolving its functionality.

     

    Position overview

     

    DataArt is currently helping to modernize and develop the IT system of an independent online retailer of railroad tickets. You will work across two small squads, supported by a Tech Lead (Senior Platform Engineer) and a Delivery Manager, both of whom oversee the backlog and the project management side of delivery.

     

    Technology stack

     

    AWS (3+ years, live infrastructure)
    Terraform (IaC, tagging strategies, IAM)
    Software engineering (Python OR Node.js / JavaScript)

     

    Responsibilities

    • IAM-related work through Terraform and coding against AWS APIs
    • Use Terraform to create IaC for existing AWS resources, ensuring compliance with existing tagging strategies
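The tagging-strategy work above can be sketched in plain Python. This is a minimal, hypothetical example of validating resource tags against a required policy (as you might after pulling tags via boto3 or a Terraform state file); the tag keys are illustrative assumptions, not the client's actual standard.

```python
# Hypothetical tagging policy - a real one would come from the client's
# existing tagging strategy, not these made-up keys.
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def missing_tags(resource_tags: dict[str, str]) -> set[str]:
    """Return the required tag keys absent from a resource's tag map."""
    return REQUIRED_TAGS - resource_tags.keys()

# Example: a hand-tagged resource missing its cost-center tag
tags = {"Owner": "platform-team", "Environment": "prod"}
print(sorted(missing_tags(tags)))  # ['CostCenter']
```

In practice the same check would run over every resource returned by the AWS APIs, feeding a report of non-compliant resources to be fixed in Terraform.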

    Requirements

    • 3+ years of Python or Node.js/JavaScript development experience
    • Experience with AWS, Terraform, Infrastructure as Code
    • Strong skills in Clean Code and OOP
    • Experience with unit and integration testing, working without QA engineers
    • Understanding of SOLID principles
    • Understanding of Agile development methodology
    • Good spoken English

    Nice to have

    • Load balancers (Nginx / Traefik), ECS, CloudFront, GitHub Actions
  • · 174 views · 9 applications · 27d

    K2 Developer

    Ukraine · 3 years of experience · B2 - Upper Intermediate

    Client

     

    Our client operates a sophisticated low-code platform ecosystem integrated with advanced data infrastructure, including SQL Server, Snowflake, DBT data pipelines, Power BI analytics, and Python automation. The firm is seeking to optimize and maintain their K2 platform implementation, which serves as a critical component for workflow automation, process orchestration, and business process management across enterprise systems.

     

    Position overview

     

    We are looking for an experienced K2 Developer to maintain, enhance, and optimize the K2 platform, ensuring seamless integration with the broader technology stack while delivering robust, scalable workflow solutions that drive operational efficiency.

     

    Technology stack

     

    Workflow Platform: K2 (Nintex K2 Five / K2 Cloud)
    Data Infrastructure: SQL Server, Snowflake, DBT data pipelines
    Analytics & Reporting: Power BI
    Automation & Scripting: Python
    Integration Tools: REST / SOAP APIs, web services, and custom connectors
    Environment: Microsoft Azure ecosystem, including Active Directory and related services

     

    Responsibilities

    • Monitor and maintain K2 system health, performance, and availability
    • Design and develop K2 processes and workflows aligned with business requirements
    • Perform regular system updates, patches, and version upgrades
    • Manage K2 infrastructure scaling and resource optimization
    • Troubleshoot system issues and implement preventive maintenance strategies
    • Maintain comprehensive documentation of platform configurations and procedures

    Requirements

    • 3+ years of experience as a K2 Developer, with hands-on expertise in designing, developing, and deploying K2 workflows and SmartForms
    • Strong understanding of workflow automation, process orchestration, and BPM principles
    • Proficiency in SQL (queries, stored procedures, data modeling) and experience integrating K2 with relational databases
    • Experience working with API integrations and data-driven process automation
    • Familiarity with Snowflake, DBT, and Python scripting for data transformation or automation is an advantage
    • Ability to collaborate closely with business analysts, data engineers, and process owners to translate business needs into scalable technical solutions
    • Excellent problem-solving skills, attention to detail, and a proactive approach to continuous improvement
    • Strong communication skills and the ability to work effectively in a distributed or hybrid environment

    Nice to have

    • Version Control & CI/CD: Git, Azure DevOps
  • · 45 views · 3 applications · 26d

    Senior Data Engineer

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Client

    Our client is a leading legal recruiting company aiming to build a data-driven platform specifically designed for lawyers and law firms. The platform brings everything together in one place: news and analytics, real-time deal and case tracking from multiple sources, firm and lawyer profiles enriched with cross-linked insights, rankings, and more.

     

    Project overview

    The platform aggregates data from hundreds of public sources, including law firm websites, deal announcements, legal databases, and media publications, creating a unified ecosystem of structured and interconnected legal data. It combines AI-driven enrichment, automated data processing, and scalable infrastructure to ensure comprehensive and reliable coverage of the legal market.

     

    Position overview

    We are seeking a Senior Data Engineer to join our team to design, build, and scale robust data pipelines for collecting, transforming, and structuring large volumes of legal and financial data collected via scrapers. You will collaborate closely with AI/ML engineers, DevOps, Front-end and Back-end teams to ensure smooth and efficient data workflows integral to the platform.

     

    Responsibilities

    • Design and implement data ingestion pipelines to collect and process structured and unstructured data from multiple online sources (web scraping, APIs, feeds, etc.).
    • Develop and optimize ETL/ELT workflows using Python, Apache Spark, and SQL.
    • Build and orchestrate scalable data workflows leveraging AWS services such as EMR, Batch, S3, and SageMaker.
    • Develop and deploy internal data APIs and utilities supporting platform data access and manipulation.
    • Implement robust text extraction and parsing logic to handle diverse data formats.
    • Ensure data quality through validation, deduplication, normalization, and lineage tracking across Raw → Curated → Enriched data layers.
    • Containerize and orchestrate data workloads using Docker and native AWS solutions.
    • Collaborate closely with AI, Back-end, and Front-end teams to ensure efficient data integration and flow.
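The normalization and deduplication step between the Raw and Curated layers might look like the following sketch. The field names and cleaning rules are illustrative assumptions, not the project's actual schema.

```python
import re

def normalize(record: dict) -> dict:
    """Derive a matching key from a scraped firm name: lowercase,
    strip punctuation and common legal suffixes, collapse whitespace."""
    name = record["firm_name"].strip().lower()
    name = re.sub(r"[.,]|\b(llp|llc)\b", "", name).strip()
    return {**record, "firm_key": re.sub(r"\s+", " ", name)}

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each normalized firm key."""
    seen, curated = set(), []
    for rec in map(normalize, records):
        if rec["firm_key"] not in seen:
            seen.add(rec["firm_key"])
            curated.append(rec)
    return curated

# Two scrapes of the same firm collapse to one curated record
raw = [{"firm_name": "Smith & Jones LLP"}, {"firm_name": "smith & jones, llp"}]
print(len(deduplicate(raw)))  # 1
```

A production pipeline would add validation and lineage metadata at each layer; the key idea here is that deduplication happens on a normalized key, not the raw scraped string.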

     

    Requirements

    • Proven expertise in Python programming.
    • Solid understanding of the AWS ecosystem, including EMR, Batch, S3, Lambda, SageMaker, and Glue.
    • Practical experience with Docker and containerized development workflows.
    • Experience with web scraping, text extraction, or other data ingestion techniques from diverse online sources.
    • Strong analytical mindset, communication skills, and ability to collaborate across multiple teams.

     

    Nice to have

    • Hands-on experience with Apache Spark and SQL for distributed data processing.
  • · 23 views · 0 applications · 26d

    Data Architect (AWS and Python FastAPI)

    Full Remote · Ukraine · 6 years of experience · B2 - Upper Intermediate

    Client

    Our client is a leading legal recruiting company focused on building a cutting-edge data-driven platform for lawyers and law firms. The platform consolidates news and analytics, real-time deal and case tracking from multiple sources, firm and lawyer profiles with cross-linked insights, rankings, and more, all in one unified place.

     

    Position overview

    We are seeking a skilled Data Architect with strong expertise in AWS technologies (Step Functions, Lambda, RDS - PostgreSQL), Python, and SQL to lead the design and implementation of the platform's data architecture. This role involves defining data models, building ingestion pipelines, applying AI-driven entity resolution, and managing scalable, cost-effective infrastructure aligned with cloud best practices.

     

    Responsibilities

    • Define entities, relationships, and persistent IDs; enforce the Fact schema with confidence scores, timestamps, validation status, and source metadata.
    • Blueprint ingestion workflows from law firm site feeds; normalize data, extract entities, classify content, and route low-confidence items for review.
    • Develop a hybrid of deterministic rules and LLM-assisted matching; configure thresholds for auto-accept, manual review, or rejection.
    • Specify Ops Portal checkpoints, data queues, SLAs, and create a corrections/version history model.
    • Stage a phased rollout of data sources, from ingestion through processing, storage, and replication to management via CMS.
    • Align architecture with AWS and Postgres baselines; design for scalability, appropriate storage tiers, and cost-effective compute and queuing solutions.
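The hybrid matching logic described above (deterministic rules plus LLM-assisted scoring, with thresholds routing items to auto-accept, manual review, or rejection) can be sketched as a small decision function. The 0.9/0.6 cutoffs are illustrative assumptions, not the project's tuned values.

```python
# Hypothetical confidence thresholds - in practice these would be configured
# and tuned against review outcomes.
AUTO_ACCEPT, REVIEW = 0.9, 0.6

def route_match(exact_id_match: bool, confidence: float) -> str:
    """Decide what to do with a candidate entity match."""
    if exact_id_match:            # deterministic rule wins outright
        return "auto-accept"
    if confidence >= AUTO_ACCEPT:
        return "auto-accept"
    if confidence >= REVIEW:      # queue for the Ops Portal review checkpoint
        return "manual-review"
    return "reject"

print(route_match(False, 0.95))  # auto-accept
print(route_match(False, 0.7))   # manual-review
print(route_match(False, 0.3))   # reject
```

Routing low-confidence items to a review queue rather than rejecting them outright is what lets the Ops Portal corrections feed back into threshold tuning.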

     

    Requirements

    • Proven experience as a Data Architect or Senior Data Engineer working extensively with AWS services.
    • Strong proficiency in Python development, preferably with FastAPI or similar modern frameworks.
    • Deep understanding of data modeling principles, entity resolution, and schema design for complex data systems.
    • Hands-on experience designing and managing scalable data pipelines, workflows, and AI-driven data processing.
    • Familiarity with relational databases such as PostgreSQL.
    • Solid experience in data architecture, including data modeling; knowledge of architectures such as the Medallion architecture and dimensional modeling.
    • Strong knowledge of cloud infrastructure cost optimization and performance tuning.
    • Excellent problem-solving skills and ability to work in a collaborative, agile environment.

     

    Nice to have

    • Experience within legal tech or recruiting data domains.
    • Familiarity with Content Management Systems (CMS) for managing data sources.
    • Knowledge of data privacy, security regulations, and compliance standards.
    • Experience with web scraping.
    • Experience with EMR and SageMaker.
  • · 34 views · 1 application · 26d

    AI Developer with expertise in Python

    Full Remote · Ukraine · 5 years of experience · B2 - Upper Intermediate

    Position overview

     

    DataArt is a global software engineering firm and a trusted technology partner for market leaders and visionaries. Our world-class team designs and engineers data-driven, cloud-native solutions to deliver immediate and enduring business value.

    We promote a culture of radical respect, prioritizing your personal well-being as much as your expertise. We stand firmly against prejudice and inequality, valuing each of our employees equally.

    We respect the autonomy of others before all else, offering remote, onsite, and hybrid work options. Our Learning and development centers, R&D labs, and mentorship programs encourage professional growth.

    Our long-term approach to collaboration with clients and colleagues alike focuses on building partnerships that extend beyond one-off projects. We provide the ability to switch between projects and technology stacks, creating opportunities for exploration through our learning and networking systems to advance your career.

    We are building an advanced Generative AI solution for a Complex Travel Business Intelligence Portal. This cutting-edge system is designed to revolutionize user interaction by delivering accurate, context-aware responses tailored to individual needs. It integrates and aggregates data from diverse sources and datastore types, transforming raw information into actionable, meaningful analytical insights.

     

    Responsibilities

    • Design, develop, and maintain scalable Python-based backend systems for GenAI applications and AI Multi-Agent solutions.
    • Build and optimize AI/ML models for natural language understanding, context-aware response generation, and data summarization.
    • Integrate multiple data sources (structured and unstructured) into a unified analytical framework.
    • Collaborate with data engineers, UI/UX designers, and product managers to deliver seamless user experiences.
    • Implement and fine-tune LLMs and other generative models for travel-related queries and analytics.
    • Ensure high performance, reliability, and security of AI-driven features.
    • Continuously research and apply the latest advancements in GenAI and NLP.

     

    Requirements

    • Strong proficiency in Python, including libraries such as FastAPI, Pandas, NumPy.
    • Proven experience with AI/ML model development, especially in Generative AI, NLP, LLMs, AI Agents.
    • Familiarity with vector databases, embedding techniques, and retrieval-augmented generation (RAG).
    • Experience working with cloud platforms (Azure, AWS, GCP) and CI/CD pipelines.
    • Solid understanding of data integration and API development.
    • Ability to write clean, maintainable, and well-documented code.
    • Excellent problem-solving skills and a proactive mindset.
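The retrieval step of retrieval-augmented generation (RAG), mentioned in the requirements, reduces to ranking documents by similarity of embedding vectors. A toy sketch with made-up 3-dimensional vectors; real embeddings come from a model and would live in a vector database.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical embeddings of indexed documents and a user query
docs = {"flight-faq": [0.9, 0.1, 0.0], "hotel-faq": [0.1, 0.9, 0.2]}
query = [0.8, 0.2, 0.0]

# Retrieve the nearest document; its text would then be fed to the LLM
# as context for generating the answer.
best = max(docs, key=lambda d: cosine(docs[d], query))
print(best)  # flight-faq
```

Vector databases perform the same nearest-neighbor search at scale with approximate indexes instead of this exhaustive scan.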

     

    Nice to have

    • MCP Server/Client implementation.
    • A2A protocol communication.
    • Experience with GraphQL.
    • Knowledge of LangChain, LangGraph, Semantic Kernel or similar frameworks.
    • Experience in the travel or BI domain.
    • Familiarity with data visualization tools and dashboarding.
    • Understanding of user intent modeling and contextual AI.
  • · 57 views · 3 applications · 20d

    Solution Architect (AI/ML, Azure)

    Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate

    Client

    A leading player in the European SaaS market, specializing in payroll and HR management solutions.
    The client is undertaking a strategic initiative to build an AI-powered assistant designed to transform HR and payroll processes through intelligent automation, natural language understanding, and data-driven insights.

     

    Position overview

    We are looking for a seasoned Solution Architect with expertise in Artificial Intelligence and Machine Learning to design and lead the implementation of advanced AI/ML solutions.
    You will collaborate with cross-functional teams including data scientists, engineers, product managers, and business stakeholders to create scalable, secure, and robust AI-driven systems that align with business goals.

     

    Responsibilities

    • Architect end-to-end AI/ML solutions aligned with business requirements and technical standards.
    • Lead the design of scalable machine learning platforms, pipelines, and infrastructure within Azure.
    • Collaborate closely with data scientists and engineers to translate AI research into production-ready solutions.
    • Define data integration strategies encompassing big data, real-time streaming, and structured data sources.
    • Evaluate and select appropriate AI/ML technologies, tools, and frameworks that fit project needs.
    • Ensure scalability, security, and compliance within AI/ML systems architecture.
    • Establish and evolve MLOps practices, including CI/CD for models, monitoring, and retraining workflows.
    • Provide technical leadership and mentorship to development teams, ensuring architectural consistency.
    • Maintain high-quality documentation and communicate designs clearly to both technical and non-technical stakeholders.
    • Stay current with AI/ML trends (LLMs, RAG, agentic systems) and apply them pragmatically to enhance solution efficiency.

     

    Requirements

    • Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
    • 7+ years of experience in solution or software architecture, with a focus on AI/ML systems.
    • Proven experience designing and deploying AI/ML or data-driven solutions on Microsoft Azure (Azure ML, Data Factory, Databricks, Cognitive or OpenAI Services).
    • Strong technical expertise in ML frameworks (PyTorch, TensorFlow, Scikit-learn, Hugging Face).
    • Understanding of MLOps principles and cloud-native architecture.
    • Familiarity with big data technologies and orchestration tools (e.g., Spark, Kafka, Airflow).
    • Experience with containerization and orchestration (Docker, Kubernetes).
    • Excellent communication, documentation, and leadership skills.
    • Strong sense of responsibility, punctuality, and attention to detail in delivery.

     

    Nice to have

    • Experience working with SaaS or HR/payroll systems.
    • Knowledge of AI ethics, data privacy, and GDPR compliance.
    • Familiarity with large language models (LLMs), Retrieval-Augmented Generation (RAG), or agentic systems.
    • Background in knowledge graphs or semantic data modeling.
    • Prior experience with Agile development methodologies and cross-functional collaboration.
  • · 26 views · 0 applications · 20d

    Senior Database Administrator (DBA)

    Hybrid Remote · Ukraine · 5 years of experience · B1 - Intermediate

    Client

     

    Our client is a leading global travel agency network specializing in luxury and experiential journeys.

     

     

    This role requires a hands-on technical leader with deep experience in SQL Server administration, automation, and Azure-native migrations. The DBA will provide operational support while driving key strategic initiatives for 2026.

     

    Responsibilities

    • Oversee the health, performance, and availability of all SQL Server and Snowflake databases.
    • Implement proactive monitoring for warehouse performance, resource consumption, and credit usage (Snowflake).
    • Configure, tune, and troubleshoot SQL Server Agent jobs, indexes, and partitions.
    • Manage and monitor backup integrity, recovery jobs, and log shipping.
    • Manage user accounts, roles, and permissions across SQL Server and Snowflake.
    • Ensure role-based access control (RBAC) is enforced consistently.
    • Audit and maintain compliance for handling of sensitive/PII data.
    • Align database access policies with organizational security standards and ISO 27001 practices.
    • Manage and troubleshoot Fivetran connections and integrations.
    • Work closely with Data Engineering teams to optimize ingestion pipelines into SQL and Snowflake.
    • Support cross-platform connectivity and reporting tools (Grafana, DataDog, Power BI).
    • Conduct in-depth query performance analysis using execution plans and dynamic management views (DMVs).
    • Optimize indexing strategies, partition schemes, and statistics maintenance.
    • Identify and resolve blocking, deadlocks, and high-cost queries.
    • Develop automation scripts (PowerShell, T-SQL, Python) to reduce manual intervention.
    • Standardize deployment of database schema, objects, and configurations using Infrastructure-as-Code (IaC) principles.
    • Document repeatable processes and maintain runbooks for operational continuity.
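The query-performance duties above (execution plans, DMVs, blocking analysis) can be sketched as a small automation helper. This is an illustrative sketch only: the DMV query is standard SQL Server (`sys.dm_exec_query_stats`, `sys.dm_exec_sql_text`), while the connection object is assumed to be any open DB-API connection (e.g. pyodbc) and is not part of the posting.

```python
# Sketch: pull the top CPU-consuming statements from SQL Server DMVs.
# The SQL is a standard DMV pattern; connection handling is left abstract.

TOP_CPU_QUERIES_SQL = """
SELECT TOP (?)
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;
"""

def fetch_top_cpu_queries(conn, top: int = 10):
    """Run the DMV query on an open DB-API connection and return the rows."""
    cur = conn.cursor()
    cur.execute(TOP_CPU_QUERIES_SQL, top)
    return cur.fetchall()
```

In practice a script like this would feed the "high-cost queries" bullet above: the rows identify candidates for index or plan fixes.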

    Requirements

    • 10+ years as a Database Administrator with deep expertise in SQL Server 2012–2019/2022.
    • Proven experience migrating large-scale databases from IaaS SQL to Azure SQL Managed Instances / Azure SQL Database.
    • Hands-on knowledge of high-availability SQL configurations (AlwaysOn Availability Groups, Failover Clusters, Log Shipping, Replication).
    • Strong performance tuning skills: indexing strategies, query plan analysis and optimization, tempdb and transaction log management, and deadlock detection/resolution.
    • Deep understanding of backup and disaster recovery strategies (full/differential/log backups, point-in-time recovery, geo-redundant backups).
    • Scripting and automation expertise with T-SQL, PowerShell, and IaC tooling (Bicep, ARM, Terraform preferred).
    • Familiarity with data pipeline tools (Fivetran, ETL/ELT platforms) and integration troubleshooting.
    • Excellent documentation skills for runbooks, migration plans, and incident reviews.
    • Experience with Snowflake administration: warehouse management, RBAC implementation, monitoring and credit optimization, supporting data sharing and secure cleanroom use cases.

    Nice to have

    • Experience with Azure Database Migration Service (DMS) and Azure-native monitoring (Log Analytics, DataDog).
    • Background in BCDR planning and ISO 27001/SOC2 compliance alignment.
    • Prior experience in cloud-native modernization efforts (legacy to PaaS migrations).
    • Strong collaboration skills to partner with Data Engineering, SRE, and Security teams.

     

  • · 68 views · 5 applications · 20d

    Senior Site Reliability Engineer (SRE) – AWS and GCP

    Full Remote · Ukraine · 4 years of experience · B2 - Upper Intermediate

    Client

    Our client is revolutionizing the retail direct store delivery model by addressing key challenges like communication gaps, out-of-stocks, invoicing errors, and price inconsistencies. Through innovative technology and strong partnerships, they help boost sales, increase profits, and enhance customer loyalty.

     

    Position overview

    We are seeking a skilled Middle to Senior Site Reliability Engineer (SRE) with hands-on experience in both AWS and Google Cloud Platform (GCP) to join a fast-paced, innovative project team. This role requires proactive monitoring, automation, and optimization of cloud infrastructure to ensure high availability, scalability, and security of mission-critical retail solutions.

    The candidate should be available for at least four hours of overlapping work time with the New York time zone to ensure smooth collaboration and participation in team activities.

     

    Responsibilities

    • Design, build, and operate scalable and reliable systems on AWS and GCP cloud platforms
    • Develop and maintain automation scripts to improve deployment, monitoring, and incident response
    • Ensure system availability, latency, and overall reliability to meet service level objectives (SLOs)
    • Collaborate with development and operations teams to implement best practices for security, monitoring, and infrastructure management
    • Proactively troubleshoot and resolve infrastructure incidents and performance bottlenecks
    • Participate in on-call rotations and incident management processes
    • Continuously improve system architecture and automation to reduce manual intervention and improve efficiency
    • Support CI/CD pipelines and infrastructure as code (IaC) initiatives
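The SLO bullet above has simple arithmetic behind it that is worth making concrete. The sketch below shows standard error-budget math; the 99.9% target and 30-day window are illustrative figures, not the client's actual SLOs.

```python
# Error-budget math behind availability SLOs: how much downtime a target
# allows, and how much of that budget an incident has consumed.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return 1.0 - downtime_minutes / budget

print(round(error_budget_minutes(0.999), 1))    # 43.2 (minutes per 30 days)
print(round(budget_remaining(0.999, 10.8), 2))  # 0.75 (75% of budget left)
```

Teams typically alert on budget burn rate rather than raw downtime, so a fast-burning incident pages earlier than a slow trickle of errors.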

     

    Requirements

    • 4+ years of experience in Site Reliability Engineering, DevOps, or Cloud Engineering roles
    • Strong hands-on experience with AWS services (EC2, S3, VPC, Lambda, CloudWatch, IAM, etc.)
    • Proven expertise with Google Cloud Platform (Compute Engine, GKE, Cloud Storage, IAM, Stackdriver, etc.)
    • Skilled in scripting and automation tools (Python, Bash, Terraform, Ansible, or similar)
    • Experience managing container orchestration platforms such as Kubernetes or GKE
    • Familiarity with CI/CD tools such as Jenkins, GitLab CI, or CircleCI
    • Solid understanding of networking, security best practices, and cloud infrastructure design
    • Comfortable working in agile, collaborative team environments
    • Excellent communication skills and ability to work with distributed teams
    • Availability for a minimum of four hours of overlap with the New York time zone for meetings and collaboration
  • · 37 views · 0 applications · 20d

    Platform Engineer

    Hybrid Remote · Ukraine · 4 years of experience · B1 - Intermediate

    Client

     

    Our client is a regional leader in the transportation industry, with annual revenue exceeding €100 million, currently launching a new international ticket sales platform.

     

    Team

     

    Multiple independent teams contribute to different parts of the system. DataArt specialists are embedded across these teams. The mobile team is focused on shaping the future of the client's mobile app, enhancing performance and evolving its functionality.

     

    Position overview

     

    Our client is seeking skilled Platform Engineers to enhance the security and manageability of their legacy platform. The role focuses primarily on AWS IAM for both users and services, with implementation delivered through Terraform and automation supported in Node.js or Python. The design and technical approach are already defined, with work broken down into clear, deliverable tasks prior to onboarding. Contractors will be responsible for efficiently executing this plan while ensuring quality, reliability, and alignment with platform standards.

     

     

    Technology stack

     

    AWS (3+ years, live infrastructure)
    Terraform (IaC, tagging strategies, IAM)
    Software engineering (Python OR Node.js / JavaScript)

     

     

    Responsibilities

    • Deliver AWS IAM improvements and refactoring for users, roles, and services.
    • Implement infrastructure changes using Terraform following existing module patterns.
    • Apply least-privilege and secure-by-default principles consistently across accounts.
    • Contribute automation scripts or tooling in Node.js or Python as required.
    • Collaborate with the tech lead and delivery manager to achieve defined milestones.
    • Test, validate, and deploy IAM and Terraform changes through CI/CD pipelines.
    • Provide clear delivery updates and proactively highlight blockers or risks.
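The least-privilege bullet above is often enforced with a small pre-merge lint over rendered IAM policy documents. The sketch below is a hypothetical checker, not the client's tooling; it assumes the standard AWS IAM JSON policy grammar (`Statement`, `Effect`, `Action`, `Resource`) and flags only exact `"*"` wildcards.

```python
# Hypothetical least-privilege lint: flag Allow statements that grant
# a bare "*" action or resource in an IAM policy document.

def find_wildcard_statements(policy: dict) -> list[int]:
    """Return indexes of Allow statements using '*' in Action or Resource."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # single-statement shorthand
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(i)
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}
print(find_wildcard_statements(policy))  # [1]
```

A check like this runs naturally in the CI/CD pipelines mentioned above, before `terraform apply` ever sees the policy.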

    Requirements

    • Strong experience with AWS, especially IAM (roles, policies, permissions, service access).
    • Proven delivery experience with Terraform (modules, pipelines, multi-account setups).
    • Proficiency in Node.js or Python for automation or integration tasks.
    • Track record of delivering results within agile, delivery-focused teams.
    • Familiarity with CI/CD pipelines (e.g., GitHub Actions, TeamCity).
  • · 27 views · 0 applications · 15d

    Snowflake Platform Administrator

    Hybrid Remote · Poland, Ukraine · 4 years of experience · B1 - Intermediate

    Client

    Our client is a leading financial services business operating a comprehensive data marketplace that supports the client's entire business.

     

    Project overview

    The Snowflake platform administration team underpins the data marketplace ecosystem, including Snowflake, Confluent, dbt Labs, and Astronomer. The team is expanding to include new skills in Terraform-based platform configuration automation to better support the platform's operational needs. The role focuses on platform administration and operational stability.

     

    Position overview

    We are seeking an experienced Snowflake Platform Administrator to join the team. The successful candidate will primarily deliver platform configuration automation using Terraform within a CI/CD environment, onboard new consuming applications, troubleshoot user issues, and ensure overall stability of the Snowflake environment. This role requires strong expertise in managing Snowflake environments, platform integrations, and infrastructure-as-code automation with Terraform.

     

    Technology stack

    Snowflake platform and RBAC management
    Terraform for infrastructure as code and change management within CI/CD pipelines
    Azure cloud services, including Azure Functions, security integrations, Private Link, and authentication mechanisms
    SaaS integrations with private link connectivity
    Data ecosystem, including Confluent, dbt Labs, and Astronomer

     

    Responsibilities

    • Manage the full lifecycle of Snowflake environments from account setup through production deployment
    • Administer Snowflake RBAC, storage integrations, and application integrations
    • Deliver platform configuration automation using Terraform in a CI/CD environment, supporting multiple SaaS capabilities
    • Onboard new consumer applications onto the platform and provide support and troubleshooting for platform issues
    • Collaborate across teams to maintain platform stability and reliability
    • Learn and apply new technologies and best practices to enhance team capability in automation and platform administration
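The RBAC administration described above commonly follows Snowflake's functional-role pattern: an access role holds the object grants, and a functional role inherits it. The sketch below generates the grant statements for that pattern; the role and warehouse names are hypothetical, and in this team the grants would be managed through Terraform rather than hand-run SQL.

```python
# Illustrative generator for Snowflake's access-role / functional-role
# RBAC pattern. Names are placeholders, not the client's conventions.

def rbac_grants(database: str, schema: str, access_role: str,
                functional_role: str, warehouse: str) -> list[str]:
    """GRANT statements wiring one read-only access role to a functional role."""
    return [
        f"GRANT USAGE ON DATABASE {database} TO ROLE {access_role};",
        f"GRANT USAGE ON SCHEMA {database}.{schema} TO ROLE {access_role};",
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {database}.{schema} "
        f"TO ROLE {access_role};",
        f"GRANT USAGE ON WAREHOUSE {warehouse} TO ROLE {access_role};",
        f"GRANT ROLE {access_role} TO ROLE {functional_role};",
    ]

for stmt in rbac_grants("ANALYTICS", "MARTS", "AR_ANALYTICS_READ",
                        "FR_ANALYST", "WH_REPORTING"):
    print(stmt)
```

Keeping object grants on access roles and assigning only functional roles to users is what makes the "RBAC enforced consistently" responsibility tractable at scale.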

    Requirements

    • Expert-level experience managing platform changes using Terraform within CI/CD pipelines
    • Minimum 5 years of experience managing Snowflake environments, including implementation and production support
    • Strong skills with Snowflake RBAC, storage, application integrations, and knowledge of Snowpark Container Services
    • Minimum 3 years of strong Azure knowledge, including Azure Functions, security integrations, Private Link, and authentication mechanisms
    • Experience integrating SaaS capabilities using Private Link
    • Ability to troubleshoot and resolve platform and integration issues effectively
    • Strong communication skills and ability to collaborate across technical and business teams

       

    Nice to have

    • Experience working within complex, multi-technology ecosystems
    • Ability to work autonomously and as part of a team to manage platform stability and support business needs
    • Willingness to learn new automation technologies and contribute to team growth
  • · 25 views · 2 applications · 15d

    Power Automate Developer / Power Platform Consultant

    Hybrid Remote · Armenia, Bulgaria, Poland, Serbia, Ukraine · 4 years of experience · B1 - Intermediate

    Client

    Our client is one of the world's top 20 investment companies, headquartered in Great Britain.

     

    The client is building a Centre of Excellence for Power Platform, focusing on delivering impactful digital solutions. The team is growing and is currently looking for an experienced Power Platform Engineer to join.

     

    Position overview

    A Senior Power Platform Consultant with strong expertise in Power Automate and Power Apps will design, develop, and deploy Power Platform solutions. The role involves close collaboration with business stakeholders, providing mentorship to junior team members, and ensuring the adoption of governance and best practices within the Power Platform Centre of Excellence.

    The consultant will also manage a backlog encompassing a wide range of projects, from simple tasks such as building tables to complex, high-impact payment solutions. The work offers a mix of challenges across various projects.

     

    Responsibilities

    • Lead the design, development, and deployment of Power Platform solutions to address diverse business needs
    • Collaborate with business stakeholders to understand requirements and translate them into effective technical solutions
    • Provide technical leadership and mentor junior consultants and developers
    • Maintain thorough documentation for assets, including data sources, logic, and visualization standards
    • Ensure compliance with governance frameworks and best practices within the Power Platform Centre of Excellence
    • Conduct workshops and training sessions to build capability within client teams

       

    Requirements

    • Proven experience with Power Platform tools, focusing on Power Automate and Power Apps
    • Experience in solution design and development within the Microsoft Power Platform environment
    • Asset Management domain knowledge
    • Microsoft Power Platform certification or equivalent practical experience
    • Demonstrated ability to lead and drive the adoption of Power Platform tools
    • Strong communication skills with the ability to work collaboratively across business and technical teams
    • Proficient in English (spoken and written)
  • · 20 views · 1 application · 14d

    AI Engineer

    Hybrid Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

     

    We are seeking an AI Engineer with a strong software engineering background, proficient in Python and modern cloud-native technologies. The ideal candidate has hands-on experience with Snowflake, BigQuery, or AWS data platforms and solid expertise in data engineering, including ETL, Spark, Spark Streaming, Jupyter Notebooks, data quality, and medallion architecture.

    Experience with machine learning best practices such as model training, evaluation, and weighting is essential.

     

    Responsibilities

    • Design, develop, and deploy scalable AI and machine learning models.
    • Build and maintain data pipelines and ETL processes using Spark, Spark Streaming, and related tools.
    • Ensure high data quality and implement medallion architecture design principles.
    • Collaborate with data scientists, engineers, and product teams to translate requirements into technical solutions.
    • Implement best practices for model training, evaluation, and performance tuning.
    • Develop, integrate, and maintain AI agents and conversational AI solutions where applicable.
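The "weighting" practice mentioned in the responsibilities above can be made concrete with inverse-frequency class weights for an imbalanced training set, the same idea behind scikit-learn's `class_weight="balanced"`. The labels below are toy data for illustration only.

```python
# Inverse-frequency class weights: rare classes get proportionally larger
# weights so the loss does not ignore them during training.

from collections import Counter

def balanced_class_weights(labels: list) -> dict:
    """weight(c) = n_samples / (n_classes * count(c))"""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

labels = ["ok"] * 9 + ["fraud"] * 1
print(balanced_class_weights(labels))  # fraud weighted 5.0, ok ~0.56
```

These weights are typically passed into the loss function (or a sampler) during model training, and their effect is then checked in evaluation, tying together the training/evaluation/weighting practices listed above.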

    Requirements

    • Strong software engineering skills (Python, cloud-native stacks)
    • Hands-on experience with Snowflake, BigQuery, or AWS data platforms
    • Solid data engineering experience (ETL, Spark, Spark Streaming, Jupyter Notebooks, data quality, medallion architecture)
    • Knowledge of machine learning best practices (model training, evaluation, weighting)

    Nice to have

    • Experience building AI agents (LangChain, LangGraph, OpenAI Agents, PydanticAI)
    • Experience building conversational AI agents (AI chats, Evaluation-Driven Development)