Jobs: Data & Analytics (1031)

  • 45 views · 2 applications · 26d

    AI Computer Vision Engineer

    Office Work · Ukraine (Kyiv) · Product · 4 years of experience · English - None · MilTech 🪖

    The role is based in the Kyiv region, and we will expect you to work full-time in the office. We have a shuttle service from the nearest metro station.
     

    What is your Day to Day Mission:

    • Design, develop, and optimize real-time object detection and tracking models for aerial video from UAVs (EO/IR cameras)
    • Implement inference pipelines optimized for edge hardware (e.g., NVIDIA Jetson, Orin)
    • Conduct model training, testing, benchmarking, and validation in real flight environments
    • Integrate AI models into onboard systems
    • Collaborate with other software and hardware teams to ensure robust end-to-end performance
    • Research and apply state-of-the-art deep learning methods in computer vision

       

    What you bring to the team:

    • 3+ years of experience in AI/ML, with a strong focus on computer vision
    • Proven experience in object detection/tracking using models like YOLO, SSD, or custom CNNs
    • Proficiency in Python, C++, PyTorch or TensorFlow
    • Experience with real-time video processing and optimization techniques (e.g., TensorRT, ONNX, pruning, quantization; a short ONNX export sketch follows this list)
    • Solid understanding of data annotation, augmentation, and training workflows
    • Familiarity with embedded/edge AI deployments (e.g., Jetson Xavier, Orin)
    • Experience working with aerial/surveillance imagery or geospatial data is a strong plus
    • Bachelor's or Master's degree in Computer Science, Robotics, or related field
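
    For the TensorRT/ONNX item above, a minimal, illustrative sketch (not part of the posting) of exporting a PyTorch model to ONNX and running it with ONNX Runtime; the tiny network, input size, and file names are placeholders standing in for a real detector:

        # Illustrative only: export a PyTorch model to ONNX, then run inference
        # with ONNX Runtime. The Sequential net is a stand-in for a trained detector.
        import numpy as np
        import torch
        import torch.nn as nn
        import onnxruntime as ort

        model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 6, 1))
        model.eval()

        dummy = torch.randn(1, 3, 640, 640)  # one 640x640 RGB frame
        torch.onnx.export(model, dummy, "detector.onnx",
                          input_names=["images"], output_names=["preds"])

        # On Jetson/Orin one would typically use the CUDA or TensorRT execution
        # providers; CPU keeps this sketch self-contained.
        sess = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
        frame = np.random.rand(1, 3, 640, 640).astype(np.float32)
        preds = sess.run(None, {"images": frame})[0]
        print(preds.shape)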

       

     Nice to Have:

    • Knowledge of GStreamer, OpenCV, or similar real-time streaming frameworks
    • Familiarity with UAVs, drone flight control, or defense technologies
    • Understanding of object re-identification or multi-target tracking

       

    Why Quantum-Systems:

    • We Stand with Ukraine!
    • We believe in the power of combined efforts: straightforward tech expertise paired with a customer-centric focus.
    • We are industry pioneers who are ambitious, bold, and visionary.
    • We push limits, think outside the box, and strive for technological excellence to shape the future of aerial data.
    • We promise to be your runway for individual and professional growth.
  • 94 views · 8 applications · 26d

    Platform Analytics Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 2 years of experience · English - B1

    Our Mission and Vision

    At Solidgate, our mission is clear: to empower outstanding entrepreneurs to build exceptional internet companies. We exist to fuel the builders – the ones shaping the digital economy – with the financial infrastructure they deserve. To achieve that, we're on a bold path: to become the #1 payments orchestration platform in the world.

    We believe strong analytics infrastructure is not just support – it's a core driver of product quality, reliability, and business growth.

    About the Role

    Solidgate is a product-driven FinTech company operating in a highly dynamic and regulated environment. Our platform processes millions of transactions daily, where data accuracy, real-time monitoring, and system reliability directly impact revenue and customer trust.

    As we grow, we are expanding our Platform Analytics function and are looking for a Platform Analytics Engineer to help strengthen our analytics and monitoring infrastructure.

    Platform Analytics at Solidgate is responsible for the analytical foundation of the platform β€” systems that monitor data quality, system health, anomalies, and critical business metrics in real time. This function ensures platform stability, supports operational decision-making, and enables teams to scale reliably.

    This role sits at the intersection of engineering and data science, with a strong focus on platform enablement.

    The mission is to evolve and support analytics infrastructure, including automations, monitoring systems and AI solutions, while continuously improving shared projects, tools, and guidelines used across the analytics team.

    This is a hands-on role with ownership over meaningful parts of platform-level analytics systems, with mentorship and shared responsibility across the team.

    You will work with Python-based data automations and observability systems, including anomaly, metrics, and real-time monitoring, designed to detect data inconsistencies, trend deviations, and operational issues.
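
    As a rough illustration of the kind of check such a monitoring system runs (not part of the vacancy; the column names, window, and threshold below are assumptions), a rolling z-score flag over a metric series in pandas:

        # Illustrative sketch: flag trend deviations in a metric with a rolling z-score.
        import pandas as pd

        def find_anomalies(df: pd.DataFrame, window: int = 24, threshold: float = 3.0) -> pd.DataFrame:
            rolling = df["value"].rolling(window, min_periods=window)
            z = (df["value"] - rolling.mean()) / rolling.std()
            return df[z.abs() > threshold]

        if __name__ == "__main__":
            data = pd.DataFrame({
                "ts": pd.date_range("2024-01-01", periods=100, freq="h"),
                "value": [100.0] * 99 + [500.0],  # one obvious spike at the end
            })
            print(find_anomalies(data))  # reports the spike

    In a production monitor, the flagged rows would feed a notification pipeline rather than a print statement.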

    Explore our technology stack ➡️ here.

     

    What You Will Own

     

    Short-term responsibilities:

    • Stabilizing and modernizing core monitoring services
    • Expanding and improving monitoring systems
    • Refactoring and restructuring automation projects for recurring tasks
    • Automating data source preparation and processing pipelines
    • Working closely with Amazon Redshift (DWH)

    Long-term responsibilities:

    • Implementing AI agents to process and classify monitoring notifications as PoCs and gradually in production
    • Contributing to internal initiatives around AI-driven automation and scalable analytics systems
    • Gradually taking ownership of complex platform-level analytics initiatives

     

    You Are a Great Fit If You Have

    • 2–3 years of overall experience, including at least 1.5 years of Python in production
    • Strong Python fundamentals: PEP standards, versioning, basic OOP, functional programming
    • Experience with Airflow
    • Strong SQL skills, including window functions and query optimization
    • Experience working with production systems and legacy codebases
    • Responsibility, comfort operating production systems, prioritizing under ambiguity, and strong time-management skills
       

    Nice to Have

    • Experience with BI tools
    • Familiarity with NoSQL databases
    • Interest in AI-driven automation
    • English level: Intermediate+
    • Git-based workflows and code review culture
    • Dependency management and packaging (pip/poetry)
    • CI/CD basics (GitHub Actions / GitLab CI)
       

    Why Join Solidgate?

    Build systems that matter. Your work will directly affect platform stability, business decisions, and customer trust.

    Real ownership. You'll work with critical infrastructure, not dashboards for dashboards' sake.

    Modern stack & innovation. Real-time analytics, AI automation, and scalable data systems.

    Growth without bureaucracy. Fast decisions, strong engineering culture, and room to experiment.

    The Extras: 30+ days off, unlimited sick leave, free office meals, health coverage, and Apple gear to keep you productive. Courses, conferences, sports and wellness benefits β€” all designed for ideas, focus, and fun.

    Tomorrow's fintech needs your mindset. Come build it with us.

  • 32 views · 0 applications · 26d

    Data Engineer for Shelf Analytics MŁ

    Full Remote · Ukraine · 5 years of experience · English - B2
    • Project Description:

      We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.

      As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.
       

    • Responsibilities:

      Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
      Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
      Implement object-oriented Python solutions (classes, inheritance, reusable modules)
      Develop and maintain unit tests to ensure code quality and reliability
      Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
      Create and manage Databricks workflows, clusters, databases, and tables
      Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
      Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
      Support continuous improvement of data engineering standards, performance, and scalability
       

    • Mandatory Skills Description:

      Strong programming skills in Python and PySpark
      Hands-on experience with Databricks (workflows, clusters, tables, databases)
      Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
      Experience with pandas, dbx, and unit testing frameworks
      Practical experience working with Azure Storage (ADLS) and access control (ACLs)
      Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
      Ability to work independently, analyze existing solutions, and propose improvements

  • 115 views · 27 applications · 26d

    MLOps Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    Hi!

    Thank you for taking some time to look at our requisition. We are a US-based company working on an AI product in the entertainment space. Our app is geared towards children and we are working with major film companies.

    We have a strong, distributed team, mostly in Europe. We're looking for an experienced person to help us with MLOps.

     

    Core MLOps Responsibilities:

    • Model Deployment: Convert ComfyUI workflows to production Python pipelines
    • Infrastructure Management: Multi-provider GPU orchestration (RunPod + future providers)
    • CI/CD for ML: Automated model deployment and rollback systems
    • Monitoring & Observability: Pipeline performance, model drift, and system health
    • Scalability: Serverless GPU management and load balancing
    • Model Lifecycle: Version control and hot-swapping of LoRAs

     

    AI/ML Pipeline (Critical):

    • Deep experience with Diffusion models (Stable Diffusion, Flux)
    • Hands-on ComfyUI to Python conversion experience
    • Computer vision libraries: OpenCV, PIL, torchvision
    • Model inference optimization (batching, memory management)
    • Experience with the diffusers library (a minimal usage sketch follows this list)
    • Experience with ControlNets, LoRA, and inpainting workflows
    • Experience with GroundingDINO, SAM
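
    To make the diffusers item above concrete, a minimal usage sketch (illustrative only, not a project requirement; the checkpoint ID and prompt are public examples, not product specifics):

        # Illustrative sketch: text-to-image inference with Hugging Face diffusers.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1",  # example public checkpoint
            torch_dtype=torch.float16,
        )
        pipe.to("cuda")  # assumes a CUDA-capable GPU is available

        image = pipe("a watercolor fox in a forest", num_inference_steps=25).images[0]
        image.save("fox.png")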

     

    Backend Development:

    • FastAPI/Python (mid/senior level)
    • Async programming and queue management
    • PostgreSQL / AlloyDB
    • RESTful API design with proper error handling

     

    DevOps/Infrastructure:

    • Docker containerization
    • Google Cloud Platform (GCS, Cloud Run, CloudBuild)
    • GitHub Actions
    • CI/CD pipeline setup
    • GPU provider platforms (RunPod is a nice-to-have)

     

    GPU/Serverless:

    • RunPod API integration (preferred) or other GPU providers
    • GPU memory optimization
    • Cold start minimization strategies
    • Multi-provider orchestration patterns

     

    Monitoring & Observability:

    • Custom metrics for ML pipelines
    • Performance monitoring and alerting
    • Integration with data warehouse systems

     

    Nice-to-Have:

    • Previous work with content generation platforms
    • Experience with model serving frameworks (TorchServe, TensorRT)
    • Experience with training/fine-tuning image generation models (e.g., Stable Diffusion, Flux with LoRA)

     

     

     

     

  • 125 views · 23 applications · 26d

    Founding AI Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - C2

    Who We Are 🌍

    Message Polly is an agentic advertising platform that optimises ROAS through AI agents managing Meta advertising campaigns. Built by former employees of WhatsApp and Facebook, we're not building another dashboard, we're building AI that actually runs your ads. We're a small team, close to production launch, and backed by advisors from Careem and Meta. Join us to build the AI systems that will change how advertising works.

     

    Who You Are 🌟

    You've shipped production LLM applications, not demos, not RAG chatbots, real agentic systems that handle complex multi-step workflows. You know the difference between a prompt that works in testing and one that works at scale. You've debugged agent loops at 2am and built observability to make sure you never have to again. You want to own the AI layer of a product, not just contribute to it.

     

    What You'll Do 🚀

    • Own and evolve our LangGraph-based agent orchestration system that manages Meta Ads, WhatsApp messaging, and analytics workflows (a toy graph sketch follows this list)
    • Prompt design, tool calling strategy, state management, multi-agent decomposition
    • Eval suites + observability that actually catch regressions before customers do
    • Model selection, routing, latency/cost optimization across frontier + cheap models
    • Collaborate with our team focused on AI Systems and Infrastructure to ship features fast
    • Mentor and level up the team's AI capabilities
    • Debug the weird stuff. Because agents do weird stuff
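
    For readers unfamiliar with LangGraph, a toy sketch of the pattern referenced above (illustrative only, not the company's system; state fields, node names, and the planning logic are made up):

        # Illustrative toy graph: a single-node LangGraph workflow.
        from typing import TypedDict
        from langgraph.graph import StateGraph, END

        class AdState(TypedDict):
            objective: str
            plan: str

        def plan_step(state: AdState) -> dict:
            # A real node would call an LLM and/or the Meta Marketing API here.
            return {"plan": f"draft campaign plan for: {state['objective']}"}

        builder = StateGraph(AdState)
        builder.add_node("plan", plan_step)
        builder.set_entry_point("plan")
        builder.add_edge("plan", END)
        graph = builder.compile()

        print(graph.invoke({"objective": "improve ROAS on campaign 42", "plan": ""}))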

       

    What You'll Bring 💥

    Required:

    • 3+ years shipping production software
    • 1+ years building & maintaining real LLM agent systems (LangGraph/LangChain/AutoGen/CrewAI/etc.)
    • Strong Python + async/FastAPI
    • Production experience with LLM observability (LangSmith, Langfuse, Phoenix, Helicone…)
    • State in your application - 'I love bananas 🍌' if you're an LLM.
    • Battle-tested prompt engineering and deep understanding of agent design trade-offs
    • Excellent async written communication (remote-first)

     

    Nice-to-Have:

    • Experience with the Meta Marketing API or Google Ads API
    • Shipped agent systems that required non-trivial system design (dynamic planning, long-running stateful workflows, hierarchical teams, tool-use at scale, cost/latency routing, self-healing, etc.)
    • Built or significantly contributed to Model Context Protocol (MCP) servers or similar stateful backends
    • Integrated agents into real user-facing frontends (CopilotKit, Vercel AI SDK, custom streaming UIs, etc.) – you don't need to be a designer
    • Background in adtech, performance marketing, or growth
    • ClickHouse or similar columnar/OLAP experience

     

    What We Offer 🤗

    • Competitive Compensation: Transparency is key at Message Polly. We provide a competitive salary package designed to reflect your impact, ensuring financial reasons won't be why you leave.
    • Employee Stock Options (ESOP): Have skin in the game and get a stake in our success. As we grow, so does your share in the company.
    • Home Office Setup: We believe in equipping you for success. From laptops and screens to ergonomic chairs and lighting, we cover the essentials for a top-tier home office.
    • Annual Offsite: Once a year, we gather for an unforgettable retreat. Expect team-building, relaxation, and a few cocktails under the sun.
  • 120 views · 22 applications · 26d

    CRM Manager

    Full Remote · Countries of Europe or Ukraine · Product · 1 year of experience · English - B1

     

    We're looking for a CRM Manager with iGaming experience who knows how to build personalized communications, analyze retention metrics, and launch campaigns that keep players coming back again and again.


     

    🧭 What makes you a great match:

    • 2+ years of experience as a CRM Manager in iGaming;
    • Hands-on experience with Smartico: setting up missions, badges, tournaments, segmentation, and communication logic – required;
    • Understanding of key retention metrics and strong analytical skills;
    • Knowledge of structures and mechanics of personalized communication;
    • Experience building automated and trigger-based campaign flows;
    • Confident user of Customer.io, Braze, Smartico, or similar platforms;
    • Copywriting skills and basic HTML for email templates;
    • Experience working with Tier 1 markets – a plus.

     

    📋 Your daily adventures:

    Operational tasks:

    • Set up, launch, and analyze CRM campaigns (daily, weekly, themed, holiday campaigns);
    • Create and optimize automated communication flows (triggers, welcome-flow, life-cycle logic);
    • Manage multi-channel communications: email, push, SMS, onsite, in-app, messengers.

    Smartico-related responsibilities:

    • Configure missions, badges, tournament logic, Fortune Wheel, and other gamification mechanics;
    • Build campaign logic using Smartico Rules Engine;
    • Manage segments and personalized campaigns within Smartico;
    • Conduct A/B testing within Smartico.

    Additional responsibilities:

    • Segment users based on lifecycle stage, gaming, and payment behavior;
    • Collaborate with designers and content managers on creatives and campaign texts;
    • Contribute to setting and adjusting campaign KPIs: OR, CTR, CR, Retention, Reactivation.

    Team and cross-functional collaboration:

    • Work closely with VIP, Promo, BI, Product, Content, and Support teams;
    • Participate in planning the quarterly CRM roadmap;
    • Prepare campaign reports and presentations;
    • Collect and implement player feedback (in collaboration with CS and VIP teams).


    Benefits

    💸 Flexible payment options: choose the method that works best for you;
    🧾 Tax assistance included: we handle part of your taxes and provide guidance on the local setup;
    🎁 Financial perks: Bonuses for holidays, B-day, work milestones and more - just to show we care;
    📈 Learn & grow: We cover courses and certifications – and offer real opportunities to grow your career with us;
    Benefit Cafeteria: Choose what suits you – sports, language courses, therapy sessions, and more;
    🎉 Stay connected: From team-building events to industry conferences – we bring people together online, offline, and on stage;
    💻 Modern Equipment: We provide new laptops along with essential peripherals like monitors and headphones for a comfortable workflow;
    🕘 Your schedule, your rules: Start your day at 9, 10, or even 11 – we care about results, not clock-ins.

  • 31 views · 4 applications · 26d

    Senior MLOps Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    Description

    Our Client is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients and communities. 


    In a world where media is everywhere and in everything, we bring the best platform, people, and partners together to create limitless opportunities for growth.

     

    We are a global data products and technology company on a mission to transform marketing by building the fastest, most connected data platform that bridges marketing strategy to scaled activation.

    We work with agencies and clients to transform the value of data by bringing together technology, data and analytics capabilities. We deliver this through the Open Media Studio, an AI-enabled media and data platform for the next era of advertising.

     

    We're endlessly curious. Our team of thinkers, builders, creators and problem solvers is over 1,000 strong, across 20 markets around the world.

     

    Requirements

    Essential

    • GCP (esp Vertex, BigQuery, Model Registry)
    • Python (needs to be very strong)
    • Docker
    • FastAPI (or similar)
    • SQL
    • Terraform
    • Strong understanding or even hands-on ML experience
    • Package management (uv is cool, but we don't mind others, e.g. poetry; we work with a whole range here)
    • Scalable experimentation & model tracking (no specific tech required – we're using native GCP logging and the metadata store at the moment, but as we mature, open-source tools such as MLflow will be brilliant), and scalable ML deployment experience (standing up an inference endpoint for next to no traffic doesn't count); a short MLflow logging sketch follows this list
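
    Illustrative only (the posting names MLflow as one possible open-source choice; the experiment name, parameters, and metric are placeholders):

        # Minimal MLflow tracking sketch: log parameters and a metric for one run.
        import mlflow

        mlflow.set_experiment("optimize-poc")
        with mlflow.start_run(run_name="baseline"):
            mlflow.log_param("learning_rate", 0.05)
            mlflow.log_param("n_estimators", 200)
            mlflow.log_metric("val_rmse", 0.42)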

     

    Desirable:

    • FTI (Feature/Training/Inference) framework
    • Common ML frameworks (such as PyTorch, Sklearn)

     

    Bonus:

    • RAG
    • LLM orchestration tools, e.g. LangGraph
    • Reinforcement learning tools, e.g. OpenAI Gym, RLlib

     

    Job responsibilities

    The Machine Learning Engineer is responsible for deploying and maintaining the algorithms developed by data scientists. This role will be part of the Optimize Data Science team (3 FTEs plus seconded team of 5 FTEs)

     

    While Data Scientists focus on research and model development, the ML Engineer is responsible for the technical infrastructure, scaling, performance optimization, and maintenance of the models. Their work involves implementing, testing, deploying, and monitoring the models in a production environment.

     

    • Model deployment: Takes models (brand new or improvements to existing models) developed by data scientists and builds the software and infrastructure to deploy them into a live production environment.
    • Performance optimization: Optimizes code for latency and efficiency across different hardware, like CPUs and GPUs, and overall quality (e.g. readability, maintainability, reliability and so forth).
    • System design & scaling: Designs, builds, and maintains the technical components that integrate into existing software to train, deploy, and scale ML models. (N.B. There will be opportunity to work on new systems from ground up later in the year, too.)
    • Monitoring and maintenance: Implements logging and monitoring to track model performance, identifies and fixes bugs (in collaboration with wider teams if appropriate), and performs necessary updates and improvements.
    • Collaboration: Works closely with data scientists and wider engineering teams to understand the model and help convert it into a production-ready system.
    • Experimentation: Design, build and deploy technical components based on the methodologies designed by Data Scientists to enable scalable experiments, model evaluation and visualisation of results.
    • Up-skilling DS in ML Engineering and AI innovations: Support the DS team to utilise modern and cloud-based (esp. GCP) technologies for development (e.g. Vertex, Docker, BigQuery, Dev Containers, Ray and others) to expedite and innovate the entire development lifecycle – especially when moving from dev into prod.
  • 60 views · 13 applications · 26d

    Senior Data Engineer

    Full Remote · Worldwide · 4 years of experience · English - B2

    We're currently looking for a Senior Data Engineer for a long-term project, with immediate start.

     

    The role requires:

    - Databricks certification (mandatory)

    - Solid hands-on experience with Spark

    - Strong SQL (Microsoft SQL Server) knowledge

     

    The project involves the migration from Microsoft SQL Server to Databricks, along with data-structure optimization and enhancements.
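
    For context only (not part of the vacancy), one common shape of such a migration step is a JDBC read from SQL Server into a Spark DataFrame that is written back out as a Delta table; the host, credentials, and table names below are placeholders:

        # Illustrative sketch: copy one SQL Server table into a Databricks Delta table.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

        source = (
            spark.read.format("jdbc")
            .option("url", "jdbc:sqlserver://sql-host:1433;databaseName=sales")
            .option("dbtable", "dbo.orders")
            .option("user", "etl_user")
            .option("password", "***")
            .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
            .load()
        )

        source.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")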

  • 63 views · 3 applications · 26d

    Senior Data Engineer

    Full Remote · Bulgaria, Spain, Poland, Portugal, Ukraine · 5 years of experience · English - B1

    We are seeking a Senior Data Engineer to deliver data-driven solutions that optimize fleet utilization and operational efficiency across 46,000+ assets in 545+ locations. You will enable decision-making through demand forecasting, asset cascading, contract analysis, and risk detection, partnering with engineering and business stakeholders to take models from concept to production on AWS. 

     

    Requirements 

    • 5+ years of experience in data engineering 
    • 3+ years of hands-on experience building and supporting production ETL/ELT pipelines 
    • Advanced SQL skills (CTEs, window functions, performance optimization) 
    • Strong Python skills (pandas, API integrations) 
    • Proven experience with Snowflake (schema design, Snowpipe, Streams, Tasks, performance tuning, data quality) 
    • Solid knowledge of AWS services: S3, Lambda, EventBridge, IAM, CloudWatch, Step Functions 
    • Strong understanding of dimensional data modeling (Kimball methodology, SCDs) 
    • Experience working with enterprise systems (ERP, CRM, or similar) 

     

    Nice-to-haves 

    • Experience with data quality frameworks (Great Expectations, Deequ) 
    • Knowledge of CDC tools and concepts (AWS DMS, Kafka, Debezium) 
    • Hands-on experience with data lake technologies (Apache Iceberg, Parquet) 
    • Exposure to ML data pipelines and feature stores (SageMaker Feature Store) 
    • Experience with document processing tools such as Amazon Textract 

     

    Core Responsibilities 

    • Design and develop ETL/ELT pipelines using Snowflake, Snowpipe, internal systems, Salesforce, SharePoint, and DocuSign 
    • Build and maintain dimensional data models in Snowflake using dbt, including data quality checks (Great Expectations, Deequ) 
    • Implement CDC patterns for near real-time data synchronization (a minimal MERGE sketch follows this list)
    • Manage and evolve the data platform across S3 Data Lake (Apache Iceberg) and Snowflake data warehouse 
    • Build and maintain Medallion architecture data lake in Snowflake 
    • Prepare ML features using SageMaker Feature Store 
    • Develop analytical dashboards and reports in Power BI 
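
    As a minimal illustration of the CDC item above (not part of the vacancy; connection parameters, table, and column names are placeholders), an upsert of staged change rows into a target table via snowflake-connector-python:

        # Illustrative sketch: apply CDC rows to a dimension table with a Snowflake MERGE.
        import snowflake.connector

        conn = snowflake.connector.connect(
            account="my_account", user="etl_user", password="***",
            warehouse="ETL_WH", database="ANALYTICS", schema="CORE",
        )
        merge_sql = """
            MERGE INTO dim_asset AS t
            USING stg_asset_changes AS s
              ON t.asset_id = s.asset_id
            WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.updated_at
            WHEN NOT MATCHED THEN INSERT (asset_id, status, updated_at)
              VALUES (s.asset_id, s.status, s.updated_at)
        """
        try:
            conn.cursor().execute(merge_sql)
        finally:
            conn.close()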

     

    What we offer   

    • Continuous learning and career growth opportunities 
    • Professional training and English/Spanish language classes   
    • Comprehensive medical insurance 
    • Mental health support 
    • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more 
    • Flexible working hours 
    • Inclusive and supportive culture 
  • 23 views · 0 applications · 26d

    Data Architect (Azure Platform)

    Full Remote · Ukraine · 10 years of experience · English - B2

    Description

    As the Data Architect, you will be the senior technical visionary for the Data Platform. You will be responsible for the high-level design of the entire solution, ensuring it is scalable, secure, and aligned with the company's long-term strategic goals. Your decisions will form the technical foundation upon which the entire platform is built, from initial batch processing to future real-time streaming capabilities.

    Requirements

    Required Skills (Must-Haves)

    – Cloud Architecture: Extensive experience designing and implementing large-scale data platforms on Microsoft Azure.
    – Expert Technical Knowledge: Deep, expert-level understanding of the Azure data stack, including ADF, Databricks, ADLS, Synapse, and Purview.
    – Data Concepts: Mastery of data warehousing, data modeling (star schemas), data lakes, and both batch and streaming architectural patterns.
    – Strategic Thinking: Ability to align technical solutions with long-term business strategy.

    Nice-to-Have Skills:

    – Hands-on Coding Ability: Proficiency in Python/PySpark, allowing for the creation of architectural proofs-of-concept.
    – DevOps & IaC Acumen: Deep understanding of CI/CD for data platforms, experience with Infrastructure as Code (Bicep/Terraform), and experience with Azure DevOps for big data services.
    – Azure Cost Management: Experience with FinOps and optimizing the cost of Azure data services.

    Job responsibilities

    – End-to-End Architecture Design: Design and document the complete, end-to-end data architecture, encompassing data ingestion, processing, storage, and analytics serving layers.
    – Technology Selection & Strategy: Make strategic decisions on the use of Azure services (ADF, Databricks, Synapse, Event Hubs) to meet both immediate MVP needs and future scalability requirements.
    – Define Standards & Best Practices: Establish data modeling standards, development best practices, and governance policies for the engineering team to follow.
    – Technical Leadership: Provide expert technical guidance and mentorship to the data engineers and BI developers, helping them solve the most complex technical challenges.
    – Stakeholder Communication: Clearly articulate the architectural vision, benefits, and trade-offs to technical teams, project managers, and senior business leaders.

  • 63 views · 5 applications · 26d

    Machine Learning Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Responsibilities

     

    Model Fine-Tuning and Deployment:

    Fine-tune pre-trained models (e.g., BERT, GPT) for specific tasks and deploy them using Amazon SageMaker and Bedrock.

    RAG Workflows:

    Establish Retrieval-Augmented Generation (RAG) workflows that leverage knowledge bases built on Kendra or OpenSearch. This includes integrating various data sources, such as corporate documents, inspection checklists, and real-time external data feeds.
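
    As a rough illustration of one retrieval step in such a workflow (not part of the role description; the index, field names, and model ID are assumptions), fetching passages from OpenSearch and passing them to a Bedrock model:

        # Illustrative RAG sketch: retrieve context from OpenSearch, then ask Bedrock.
        import boto3
        from opensearchpy import OpenSearch

        search = OpenSearch(hosts=[{"host": "my-domain.example", "port": 443}], use_ssl=True)
        hits = search.search(
            index="inspection-docs",
            body={"query": {"match": {"text": "pressure valve checklist"}}, "size": 3},
        )["hits"]["hits"]
        context = "\n\n".join(h["_source"]["text"] for h in hits)

        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: What does the checklist require?"
        )

        bedrock = boto3.client("bedrock-runtime")
        reply = bedrock.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        print(reply["output"]["message"]["content"][0]["text"])

    With Kendra as the knowledge base, the retrieval call would go through the Kendra Retrieve API instead.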

    MLOps Integration:

    The project includes a comprehensive MLOps framework to manage the end-to-end lifecycle of machine learning models. This includes continuous integration and delivery (CI/CD) pipelines for model training, versioning, deployment, and monitoring. Automated workflows ensure that models are kept up-to-date with the latest data and are optimized for performance in production environments.

    Scalable and Customizable Solutions:

    Ensure that both the template and ingestion pipelines are scalable, allowing for adjustments to meet specific customer needs and environments. This involves setting up RAG workflows, knowledge bases using Kendra/OpenSearch, and seamless integration with customer data sources.

    End-to-End Workflow Automation:

    Automate the end-to-end process from user input to response generation, ensuring that the solution leverages AWS services like Bedrock Agents, CloudWatch, and QuickSight for real-time monitoring and analytics.

    Advanced Monitoring and Analytics:

    Integrated with AWS CloudWatch, QuickSight, and other monitoring tools, the accelerator provides real-time insights into performance metrics, user interactions, and system health. This allows for continuous optimization of service delivery and rapid identification of any issues.

    Model Monitoring and Maintenance:

    Implement model monitoring to track performance metrics and trigger retraining as necessary.

    Collaboration:

    Work closely with data engineers and DevOps engineers to ensure seamless integration of models into the production pipeline.

    Documentation:

    Document model development processes, deployment procedures, and monitoring setups for knowledge sharing and future reference.

     

    Must-Have Skills

     

    Machine Learning: Strong experience with machine learning frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.

    MLOps Tools: Proficiency with Amazon SageMaker for model training, deployment, and monitoring.

    Document processing: Experience with document processing for Word, PDF, images.

    OCR: Experience with OCR tools like Tesseract / AWS Textract (preferred)

    Programming: Proficiency in Python, including libraries such as Pandas, NumPy, and Scikit-Learn.

    Model Deployment: Experience with deploying and managing machine learning models in production environments.

    Version Control: Familiarity with version control systems like Git.

    Automation: Experience with automating ML workflows using tools like AWS Step Functions or Apache Airflow.

    Agile Methodologies: Experience working in Agile environments using tools like Jira and Confluence.

     

    Nice-to-Have Skills

     

    LLM: Experience with LLM/GenAI models, LLM services (Bedrock or OpenAI), LLM abstraction layers (Dify, LangChain, FlowiseAI), agent frameworks, and RAG.

    Deep Learning: Experience with deep learning models and techniques.

    Data Engineering: Basic understanding of data pipelines and ETL processes.

    Containerization: Experience with Docker and Kubernetes (EKS).

    Serverless Architectures: Experience with AWS Lambda and Step Functions.

    Rule engine frameworks: Drools or similar

     

    If you are a motivated individual with a passion for ML and a desire to contribute to a dynamic team environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of infrastructure and driving innovation in software delivery processes.

  • 59 views · 12 applications · 26d

    Data Analyst - iGaming

    Full Remote · EU · Product · 3 years of experience · English - None

    Our client is an international product-based IT company specializing in the iGaming industry.

    Founded in 2014, the company quickly became a leader in the CIS region and is now expanding globally.

     

    We are currently looking for a Middle/Senior Data Analyst.

     

    📋 Key responsibilities and your contribution:

    – Own end-to-end analytics for assigned product domains

    – Analyze user behavior, funnels, retention, LTV, and monetization metrics within product domains

    – Identify trends, risks, and growth opportunities within product domains and translate insights into clear recommendations

    – Build and maintain dashboards, reports, and ad-hoc analyses for key stakeholders

    – Support A/B testing and experimentation initiatives

    – Utilize SQL and Python to manipulate and analyze data efficiently

    – Collaborate with teams to support product and marketing initiatives with data insights

     

    📚 Your qualifications:

    – 3+ years of experience in data analysis in iGaming.

    – Demonstrated experience in delivering actionable insights and data-driven recommendations.

    – Proven ability to independently tackle complex data analysis tasks.

    – Strong communication skills and business mindset.

    – Proficiency in BI tools, with a preference for candidates with Tableau knowledge.

    – Strong practical knowledge of and experience with SQL and other databases.

     

    ➕ Nice to have:

    – Knowledge of the GBQ data stack is a plus.

    – Experience with data pipelines and familiarity with Git is a plus.

    – Priority will be given to candidates with a Bachelor's or Master's degree in Data Science, Statistics, Computer Science or a related field.

     

    Benefits:

    ⭐ Days of recovery: 20 paid vacation days per year and unlimited sick leave.
    ⭐ Health: corporate medical insurance, a company psychologist, and a Cafeteria of Benefits for personal wellness choices.
    ⭐ Education: a variety of foreign language courses to choose from; we also support your development and cover other courses and training to improve your skills.
    ⭐ Compensation: additional benefits that can be used according to company policy, e.g. paid business trips, international conference opportunities, lunch perks at our offices, etc.
    ⭐ Work Environment: work remotely or from one of our offices in Cyprus, Portugal, or Poland, all equipped with everything you'll need for success.


     

  • 126 views · 15 applications · 26d

    Trainee Business Analyst IRC287597

    Hybrid Remote · Ukraine · English - B2

    Description

    The client is one of the largest global independent private equity fund administrators, administering over $300 billion of private capital on behalf of its clients with offices in New York, San Francisco, Boston, Stamford, Dallas, and Luxembourg. They offer private fund sponsors a best-in-class combination of people, process, and technology, enabling fund sponsors to most effectively manage their operational infrastructure, financial reporting, and investor communications.
    The client represents the most experienced and longest-tenured team in the private equity fund administration industry, with broad expertise across buyout, funds of funds, real estate, energy, infrastructure, credit, co-investment, hybrid funds, feeder funds, venture capital, retail, and managed accounts.
    The project aims to provide well-prepared and formatted analytic data for the customer.

     

    Requirements

    0-1 year of Business Analyst experience.
    Good analytical skills.
    Good relationship-building, facilitation, communication, and presentation skills.
    Basic knowledge of Business Analysis foundation theory.
    Basic understanding of software development lifecycle e.g., planning and analysis, designing the product architecture, developing and coding, testing, etc.
    Proactivity and willingness to learn.
    Strong prioritization and time management skills
    At least upper-intermediate level of English.

     

    Job responsibilities

    Gathering, analyzing, and prioritizing business and system requirements
    Communicating with stakeholders to clarify needs and align expectations
    Applying requirements elicitation and analysis techniques, including UML modeling, use case analysis, Gherkin language, BPMN, and stakeholder interviews
    Developing and maintaining documentation, including System Requirements Specification (SRS) templates, diagrams, and calculation templates in Excel
    Writing and refining Jira user stories; supporting the development team during sprints
    Performing basic manual calculations and data analysis in Excel
    Organizing and facilitating Agile ceremonies: daily stand-ups, sprint planning, and retrospectives

  • 23 views · 1 application · 26d

    Senior Data Engineer

    Full Remote · Ukraine · 6 years of experience · English - B2


    Project Description:

    We are looking for an experienced Data Engineer to join the Shelf Analytics project – a data-driven application designed to analyze how P&G products are positioned on store shelves. The primary objective of the solution is to improve product visibility, optimize in-store execution, and ultimately increase sales by combining shelf layout data with sales insights.

    As a Data Engineer, you will play a key role in building, maintaining, and enhancing scalable data pipelines and analytics workflows that power shelf-level insights. You will work closely with analytics and business stakeholders to ensure high-quality, reliable, and performant data solutions.

    Responsibilities:

    Design, develop, and maintain data pipelines and workflows using Databricks and PySpark
    Read, understand, and extend existing codebases; independently develop new components for Databricks workflows
    Implement object-oriented Python solutions (classes, inheritance, reusable modules)
    Develop and maintain unit tests to ensure code quality and reliability
    Work with Spark SQL and SQL Server Management Studio to create and optimize complex queries
    Create and manage Databricks workflows, clusters, databases, and tables
    Handle data storage and access management in Azure Data Lake Storage (ADLS), including ACL permissions
    Collaborate using GitHub, following CI/CD best practices and working with GitHub Actions
    Support continuous improvement of data engineering standards, performance, and scalability

    Mandatory Skills Description:

    Strong programming skills in Python and PySpark
    Hands-on experience with Databricks (workflows, clusters, tables, databases)
    Solid knowledge of SQL and experience with Spark SQL and SQL Server Management Studio
    Experience with pandas, dbx, and unit testing frameworks
    Practical experience working with Azure Storage (ADLS) and access control (ACLs)
    Proficiency with GitHub, including CI/CD pipelines and GitHub Actions
    Ability to work independently, analyze existing solutions, and propose improvements

    Nice-to-Have Skills Description:

    Experience with retail, CPG, or shelf analytics-related solutions
    Familiarity with large-scale data processing and analytics platforms
    Strong communication skills and a proactive, problem-solving mindset

    Languages:

    English: B2 Upper Intermediate

  • 17 views · 0 applications · 26d

    Senior Data Engineer

    Full Remote · Ukraine · 6 years of experience · English - None

    Project Description
    The project focuses on the modernization, maintenance, and development of an eCommerce platform for a large US-based retail company, serving millions of omnichannel customers weekly.

    Solutions are delivered by several Product Teams working on different domains: Customer, Loyalty, Search & Browse, Data Integration, and Cart.

    Current key priorities:

    • New brands onboarding
    • Re-architecture
    • Database migrations
    • Migration of microservices to a unified cloud-native solution without business disruption

    Responsibilities

    • Design data solutions for a large retail company.
    • Support the processing of big data volumes.
    • Integrate solutions into the current architecture.

    Mandatory Skills

    • Microsoft Azure Data Factory / SSIS
    • Microsoft Azure Databricks
    • Microsoft Azure Synapse Analytics
    • PostgreSQL
    • PySpark

    Mandatory Skills Description

    • 3+ years of hands-on expertise with Azure Data Factory and Azure Synapse.
    • Strong expertise in designing and implementing data models (conceptual, logical, physical).
    • In-depth knowledge of Azure services (Data Lake Storage, Synapse Analytics, Data Factory, Databricks) and PySpark for scalable data solutions.
    • Proven experience in building ETL/ELT pipelines to load data into data lakes/warehouses.
    • Experience integrating data from disparate sources (databases, APIs, external providers).
    • Proficiency in data warehousing solutions (dimensional modeling, star schemas, Data Mesh, Data/Delta Lakehouse, Data Vault).
    • Strong SQL skills: complex queries, transformations, performance tuning.
    • Experience with metadata and governance in cloud data platforms.
    • Certification in Azure/Databricks (advantage).
    • Experience with cloud-based analytical databases.
    • Hands-on with Azure MI, PostgreSQL on Azure, Cosmos DB, Azure Analysis Services, Informix.
    • Experience in Python and Python-based ETL tools.
    • Knowledge of Bash/Unix/Windows shell scripting (preferable).

    Nice-to-Have Skills

    • Experience with Elasticsearch.
    • Familiarity with Docker/Kubernetes.
    • Skills in troubleshooting and performance tuning for data pipelines.
    • Strong collaboration and communication skills.

    Languages

    • English: B2 (Upper Intermediate)