Jobs Data & Analytics

  • · 43 views · 1 application · 17d

    Senior Data Engineer

    Full Remote · EU · 6 years of experience · English - B2

    OUR COMPANY  

    HBM is a European company building exciting new products from scratch for startups and helping mature companies on their journey towards data-driven innovation and AI-based solutions. Our expertise spans EnergyTech, FinTech, ClimateTech, SocialTech, PropTech, and more. 

    Founded in Ukraine and shaped by Scandinavian culture, HBM is hiring both in Ukraine and the EU for our customers located in Europe and the USA.  

      

    Our values include skills, passion, excellence, equality, openness, mutual respect, and trust. 

      

    At HBM, you can become part of a growing company, work with creative colleagues, use modern technologies, and create AI-based solutions. You’ll be part of a strong corporate culture combined with the agility and flexibility of a start-up, backed by proven outsourcing and development practices, a human-oriented leadership team, an entrepreneurial mindset, and a healthy approach to work-life balance. 

      

    PROJECT 

    Our customer is an Icelandic energy company providing electricity, geothermal water, cold water, carbon storage, and an optical network.  

    We are looking for a Senior Data Engineer who will be responsible for developing, enhancing, and maintaining the enterprise data warehouse, data platform, and analytical data flows. The role supports all of the company’s subsidiaries and contributes to creating maximum value from data for internal stakeholders. 

    The qualified candidate will work as part of the Data Engineering team and will handle complex 3rd-line issues, long-term improvements, and new data development. The work will be aligned with the team’s structured 3-week planning cycles, and tight collaboration with the on-site Team Lead is expected. 

    Tech stack: MS SQL Server, Azure/Databricks, Power BI, Tableau, Microsoft BI stack (SSRS, SSIS, SSAS [OLAP and Tabular]), TimeXtender, exMon. 

     

    WE PROVIDE YOU WITH THE FOLLOWING EXCITING CHALLENGES 

    • Develop and maintain the enterprise data warehouse, data marts, staging layers, and transformation logic 
    • Design, implement, and optimize ETL/ELT pipelines (SQL Server, Azure data components, Databricks, etc.) 
    • Build and maintain robust data models (dimensional/star-schema, semantic layers, analytical datasets; a toy star-schema sketch follows this list) 
    • Develop and improve the BI environment and the underlying data processes used by analysts across the company 
    • Implement processes for controlled, reliable data delivery to BI specialists, analysts, and modelling teams (e.g., forecasting, scenario modelling) 
    • Support data quality frameworks and implement testing/validation procedures 
    • Investigate and resolve escalated 3rd-line operational issues and guide 2nd-line support in root cause analysis 
    • Conduct stakeholder workshops to understand business requirements and translate them into technical data solutions 
    • Identify opportunities to improve data usability, analytical value, and process automation 
    • Document data processes, models, pipelines, and architectural decisions in Confluence 
    • Collaborate with the on-site Team Lead during sprint planning, backlog refinement, and prioritization. 
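
    To make the data-modelling bullet above concrete, here is a toy star-schema query in Python/pandas (all table and column names are hypothetical, not taken from this vacancy):

        import pandas as pd

        # Hypothetical star schema: a fact table joined to two dimension tables.
        dim_meter = pd.DataFrame({"meter_id": [1, 2], "region": ["North", "South"]})
        dim_date = pd.DataFrame({"date_id": [20240101, 20240102],
                                 "month": ["2024-01", "2024-01"]})
        fact_usage = pd.DataFrame({"meter_id": [1, 1, 2],
                                   "date_id": [20240101, 20240102, 20240101],
                                   "kwh": [12.5, 9.8, 20.1]})

        # The typical analytical pattern: join facts to dimensions, then aggregate.
        report = (fact_usage
                  .merge(dim_meter, on="meter_id")
                  .merge(dim_date, on="date_id")
                  .groupby(["region", "month"], as_index=False)["kwh"].sum())
        print(report)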

     

      

    WE EXPECT FROM YOU 

    • Bachelor’s or Master’s degree in Computer Science or a comparable field of study 
    • 6+ years of experience working with DWH solutions and data pipelines 
    • Strong SQL development skills, preferably in MS SQL Server 
    • Experience building and maintaining ETL/ELT workflows using: 
      • Databricks 
      • Azure Data Factory or similar cloud-based data orchestration tools 
      • Azure data platform services (e.g., storage, compute, data lake formats) 
    • Solid understanding of data warehouse architectures and dimensional modelling 
    • Experience with data quality checks, validation frameworks, and monitoring 
    • Understanding of BI concepts and ability to prepare user-friendly analytical datasets 
    • Experience collaborating with business stakeholders and capturing analytical or operational data requirements 
    • Strong communication skills and the ability to explain data concepts clearly 
    • Willingness to document solutions and share knowledge within the team 
    • Ability to communicate with stakeholders at multiple levels 
    • Action- and quality-oriented 
    • Experience working in a distributed, cross-cultural Agile environment 
    • English: upper-intermediate / advanced 

     

    WOULD BE A PLUS 

    • Experience with Python or similar languages for data processing 
    • Experience with performance tuning for SQL or data pipelines 
    • Interest in visual clarity, usability of data models, and BI-driven design 

     

     

     WE OFFER YOU 

      

    • Modern technologies, new product development, and a variety of business domains. 
    • Start-up agility combined with mature delivery practices and a seasoned management team. 
    • Strong focus on your technical and personal growth. 
    • Transparent career development and an individual development plan. 
    • Flexible working mode (remote/work from office), with full remote possible. 
    • Competitive compensation and social package. 
    • Focus on well-being and the human touch. 
    • Flat organization where everyone is heard and invited to contribute. 
    • A genuine approach to work-life balance. 
    • Passion and fun in everything we do. 
  • · 44 views · 5 applications · 17d

    Senior Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · English - B2

    🎯 What You’ll Actually Do

    • Architect and run high-load, production-grade data pipelines where correctness and latency matter.
    • Design systems that survive schema changes, reprocessing, and partial failures.
    • Own data availability, freshness, and trust - not just pipeline success.
    • Make hard calls: accuracy vs cost, speed vs consistency, rebuild vs patch.
    • Build guardrails so downstream consumers (Analysts, Product, Ops) don’t break.
    • Improve observability: monitoring, alerts, data quality checks, SLAs (a toy freshness check follows this list).
    • Partner closely with backend engineers, data analysts, and Product - no handoffs, shared ownership.
    • Debug incidents, own RCA, and make sure the same class of failure doesn’t return.
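
    To give a flavour of the observability bullet above, a minimal freshness-check sketch (the SLA value and alert action are illustrative assumptions, not this team's actual setup):

        from datetime import datetime, timedelta, timezone

        FRESHNESS_SLA = timedelta(minutes=30)  # assumed SLA for the example

        def is_fresh(latest_event_ts: datetime) -> bool:
            """True when the newest event is within the agreed SLA."""
            return datetime.now(timezone.utc) - latest_event_ts <= FRESHNESS_SLA

        latest = datetime.now(timezone.utc) - timedelta(hours=2)
        if not is_fresh(latest):
            print("ALERT: table is stale")  # stand-in for real paging/alerting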

    This is a hands-on IC role with platform-level responsibility.

     

    🧠 What You Bring

    • 5+ years in data or backend engineering on real production systems.
    • Strong experience with columnar analytical databases (ClickHouse, Snowflake, BigQuery, similar).
    • Experience with event-driven / streaming systems (Kafka, pub/sub, CDC, etc.).
    • Strong SQL + at least one general-purpose language (Python, Java, Scala).
    • You think in failure modes, not happy paths.
    • You explain why something works - and when it shouldn’t be used.

    Bonus: You’ve rebuilt or fixed a data system that failed in production.

     

    🔧 How We Work

    • Reliability > elegance. Correct data beats clever data.
    • Ownership > tickets. You run what you build.
    • Trade-offs > dogma. Context matters.
    • Direct > polite. We fix problems, not dance around them.
    • One team, one system. No silos.

    🔥 What We Offer

    • Fully remote.
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • Medical insurance for you and your partner.
    • Learning budget (courses, conferences, certifications).
    • High trust, high autonomy.
    • Zero bureaucracy. Real engineering problems.

       

    👉 Apply if you see data platforms as systems to be engineered - not pipelines to babysit.

  • · 43 views · 10 applications · 17d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 7 years of experience · English - B2

    🎯 What You’ll Actually Do

    • Design and run high-throughput, production-grade data pipelines.
    • Own data correctness, latency, and availability end to end.
    • Make hard trade-offs: accuracy vs speed, cost vs freshness, rebuild vs patch.
    • Design for change - schema evolution, reprocessing, and new consumers.
    • Protect BI, Product, and Ops from breaking changes and silent data issues.
    • Build monitoring, alerts, and data quality checks that catch problems early.
    • Work side-by-side with Product, BI, and Engineering - no handoffs, shared ownership.
    • Step into incidents, own RCA, and make sure the same class of failure never repeats.

    This is a hands-on senior IC role with real accountability.

     

     

    🧠 What You Bring (Non-Negotiable)

    • 5+ years in data or backend engineering on real production systems.
    • Strong experience with analytical databases (ClickHouse, Snowflake, BigQuery, or similar).
    • Experience with event-driven or streaming systems (Kafka, CDC, pub/sub).
    • Solid understanding of:
      • at-least-once vs exactly-once semantics (a toy sketch follows this list)
      • schema evolution & backfills
      • mutation and reprocessing costs
    • Strong SQL and at least one programming language (Python, Java, Scala, etc.).
    • You don’t just ship - you own what happens after.
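
    For the delivery-semantics item above, a toy sketch of the standard trick: an at-least-once stream becomes effectively exactly-once when writes are idempotent (the in-memory dict stands in for a keyed table with a unique constraint):

        # Duplicate redeliveries are harmless because each event is applied
        # at most once, keyed by a stable event_id.
        processed: dict[str, dict] = {}

        def handle(event: dict) -> None:
            key = event["event_id"]
            if key in processed:      # redelivered duplicate: skip
                return
            processed[key] = event    # real systems: upsert by unique key

        events = [{"event_id": "a", "v": 1},
                  {"event_id": "a", "v": 1},   # duplicate delivery
                  {"event_id": "b", "v": 2}]
        for e in events:
            handle(e)
        print(len(processed))  # 2, despite the duplicate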

       

    🔧 How We Work

    • Reliability > cleverness.
    • Ownership > process.
    • Impact > output.
    • Direct > polite.
    • One team, one system.

       

    🔥 What We Offer

    • Fully remote (Europe).
    • Unlimited vacation + paid sick leave.
    • Quarterly performance bonuses.
    • Medical insurance for you and your partner.
    • Learning budget (courses, conferences, certifications).
    • High trust, high autonomy.
    • No bureaucracy. Real data problems.

       

    👉 Apply if you treat data like production software - and feel uncomfortable when numbers can’t be trusted.

  • · 36 views · 13 applications · 17d

    Deep Learning / Computer Vision Engineer

    Full Remote · Worldwide · Product · 5 years of experience · English - B2

    We’re looking for a brilliant Deep Learning & Computer Vision Engineer to join our Algorithms group.

     

    What will you do?

    • Research core deep learning algorithms and communicate with other researchers.
    • Design and implement advanced algorithms to solve complex problems.
    • Develop next-generation algorithms, including various CNN architectures, computer vision algorithms, and physics-informed algorithms for complex environments.

     

    Requirements

    • At least 5 years of experience in research and in the development of deployed models.
    • Strong understanding of deep learning algorithms and computer vision.
    • M.Sc. in Computer Science, Engineering, or a related technical field, or a B.Sc. with substantial experience.
    • Experience with Python and PyTorch (or other machine learning frameworks) and mathematical modelling.
    • Good communication skills; self-motivated, proactive, flexible, and passionate about learning.
    • Focused on end-to-end delivery of solutions for real-world scenarios within challenging timeframes.
  • · 41 views · 5 applications · 17d

    Senior Data Scientist (Ukraine only) to $7000

    Full Remote · Ukraine · Product · 5 years of experience · English - B2

    πŸ“ Ukraine (Kyiv / Lviv / Remote)
    ⚠️ We consider candidates located in Ukraine only

     

    We are looking for a Senior Data Scientist who enjoys solving complex ML problems, working with large datasets, and delivering models that directly impact real business metrics.

    This is not a research-only role. Your models will go to production and influence real decisions.

    What you will do

    • Build and deploy supervised and reinforcement learning models
    • Optimize ad targeting and campaign effectiveness (a toy bandit sketch follows this list)
    • Perform deep data analysis and feature engineering
    • Apply optimization algorithms (linear / non-linear / combinatorial)
    • Improve accuracy, scalability, and performance of existing models
    • Work closely with engineering to integrate models into production
    • Monitor and troubleshoot deployed solutions
    • Communicate insights to product and business teams
    • Mentor junior data scientists
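
    As one hedged illustration of the ad-targeting item above, a tiny epsilon-greedy bandit in plain Python (the vacancy does not name its actual algorithms; the click-through rates below are simulated):

        import random

        counts = {"ad_a": 0, "ad_b": 0}       # impressions per variant
        rewards = {"ad_a": 0.0, "ad_b": 0.0}  # clicks per variant
        EPSILON = 0.1                         # exploration rate

        def choose_ad() -> str:
            untried = [ad for ad, c in counts.items() if c == 0]
            if untried:
                return untried[0]
            if random.random() < EPSILON:     # explore
                return random.choice(list(counts))
            return max(counts, key=lambda ad: rewards[ad] / counts[ad])  # exploit

        for _ in range(10_000):               # simulated traffic
            ad = choose_ad()
            ctr = 0.05 if ad == "ad_a" else 0.08
            counts[ad] += 1
            rewards[ad] += 1.0 if random.random() < ctr else 0.0

        print({ad: round(rewards[ad] / counts[ad], 3) for ad in counts})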


    Requirements

    • 5+ years of experience in Data Science / Machine Learning
    • Strong experience with supervised learning
    • Hands-on reinforcement learning experience
    • Solid understanding of optimization techniques
    • Python
    • SQL and relational databases
    • scikit-learn / TensorFlow / PyTorch
    • pandas / NumPy
    • Production mindset and ownership


    Nice to have

    • OpenAI Gym or other RL frameworks
    • AWS
    • MLOps practices (Docker, CI/CD, versioning, Kubernetes)
    • Experience with large-scale production systems


    Soft skills

    • Strong analytical thinking
    • Clear communication
    • Ability to work independently
    • Prioritization in fast-paced environments
    • Mentorship experience

     

  • · 58 views · 8 applications · 17d

    Computer Vision/Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 1 year of experience · English - B2

    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the role:
    We are looking for a Computer Vision / Machine Learning Engineer to develop offline CV models for industrial visual inspection.


    Your main task will be to design, train, and evaluate models on inspection data in order to:

     

    • Improve discrimination between good vs. defect samples
    • Provide insights into key defect categories (e.g., terminal electrode irregularities, surface chipping)
    • Significantly reduce false-positive rates, optimizing for either precision or recall 
    • Prepare the solution for future deployment, scaling, and maintenance

    Key Responsibilities:
    Data Analysis & Preparation
    - Conduct dataset audits, including class balance checks and sample quality reviews
    - Identify low-frequency defect classes and outliers
    - Design and implement augmentation strategies for rare defects and edge cases
    Model Development & Evaluation
    - Train deep-learning models on inspection images for defect detection
    - Use modern computer vision / deep learning frameworks (e.g., PyTorch, TensorFlow)
    - Evaluate models using confusion matrices, ROC curves, precision–recall curves, F1 scores, and other relevant metrics (a scikit-learn sketch follows this section)
    - Analyze false positives/false negatives and propose thresholds or model improvements
    Reporting & Communication
    - Prepare clear offline performance reports and model evaluation summaries
    - Explain classifier decisions, limitations, and reliability in simple, non-technical language when needed
    - Provide recommendations for scalable deployment in later phases (e.g., edge / on-prem inference, integration patterns)
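
    For the evaluation step above, a minimal scikit-learn sketch (toy labels rather than real inspection data; 1 = defect, 0 = good):

        from sklearn.metrics import (confusion_matrix, precision_score,
                                     recall_score, f1_score, roc_auc_score)

        y_true = [0, 0, 1, 1, 1, 0, 1, 0]                     # toy ground truth
        y_score = [0.1, 0.4, 0.8, 0.35, 0.9, 0.2, 0.6, 0.55]  # model probabilities
        y_pred = [1 if s >= 0.5 else 0 for s in y_score]      # threshold at 0.5

        print(confusion_matrix(y_true, y_pred))
        print("precision:", precision_score(y_true, y_pred))
        print("recall:", recall_score(y_true, y_pred))
        print("F1:", f1_score(y_true, y_pred))
        print("ROC AUC:", roc_auc_score(y_true, y_score))
        # Sweeping the threshold trades precision against recall - the lever
        # behind the false-positive reduction goal described above.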

    Candidate Requirements:
    Must-have:
    - 1-2 years of hands-on experience with computer vision and deep learning (classification, detection, or segmentation)
    - Strong proficiency in Python and at least one major DL framework (PyTorch or TensorFlow/Keras)
    - Solid understanding of:

    • Image preprocessing and augmentation techniques
    • Classification metrics: accuracy, precision, recall, F1, confusion matrix, ROC, PR curves
    • Handling imbalanced datasets and low-frequency classes

    - Experience training and evaluating offline models on real production or near-production datasets
    - Ability to structure and document experiments, compare baselines, and justify design decisions
    - Strong analytical and problem-solving skills; attention to detail in data quality and labelling
    - Good communication skills in English (written and spoken) to interact with internal and client stakeholders

    Nice-to-have:
    - Experience with industrial / manufacturing computer vision (AOI, quality inspection, defect detection, etc.)
    - Familiarity with ML Ops/deployment concepts (ONNX, TensorRT, Docker, REST APIs, edge devices)
    - Experience working with time-critical or high-throughput inspection systems
    - Background in electronics, semiconductors, or similar domains is an advantage
    - Experience preparing client-facing reports and presenting technical results to non-ML audiences

    We offer:
    - Free English classes with a native speaker and external courses compensation;
    - PE support by professional accountants;
    - 40 days of PTO;
    - Medical insurance;
    - Team-building events, conferences, meetups, and other activities;
    - Many other benefits that you’ll find out about at the interview.

  • · 51 views · 9 applications · 17d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - None

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with the organization of the first Data Science UA conference, setting the foundation for our growth. Over the past 9 years, we have diligently fostered the largest Data Science Community in Eastern Europe, boasting a network of over 30,000 top AI engineers.

    About the client:
    We are working with a new-generation data service provider specializing in data consulting and data-driven digital marketing, dedicated to transforming data into business impact across the entire value chain of organizations. The company’s data-driven services are built upon the deep AI expertise it has acquired with a 1000+ client base around the globe. The company has 1000 employees across 20 offices who are focused on accelerating digital transformation.

    About the role:
    We are seeking a Senior Data Engineer (Azure) to design and maintain data pipelines and systems for analytics and AI-driven applications. You will work on building reliable ETL/ELT workflows and ensuring data integrity across the organization.

    Required skills:
    - 6+ years of experience as a Data Engineer, preferably in Azure environments.
    - Proficiency in Python, SQL, NoSQL, and Cypher for data manipulation and querying.
    - Hands-on experience with Airflow and Azure Data Services for pipeline orchestration.
    - Strong understanding of data modeling, ETL/ELT workflows, and data warehousing concepts.
    - Experience in implementing DataOps practices for pipeline automation and monitoring.
    - Knowledge of data governance, data security, and metadata management principles.
    - Ability to work collaboratively with data science and analytics teams.
    - Excellent problem-solving and communication skills.

    Responsibilities:
    - Transform data into formats suitable for analysis by developing and maintaining processes for data transformation, structuring, metadata management, and workload management.
    - Design, implement, and maintain scalable data pipelines on Azure (a minimal Airflow sketch follows this list).
    - Develop and optimize ETL/ELT processes for various data sources.
    - Collaborate with data scientists and analysts to ensure data readiness.
    - Monitor and improve data quality, performance, and governance.
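
    A minimal sketch of the Airflow orchestration named in the requirements (task names are hypothetical; assumes Airflow 2.4+, where schedule replaced schedule_interval):

        from datetime import datetime
        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def extract(): ...    # placeholders for real pipeline steps
        def transform(): ...
        def load(): ...

        with DAG(dag_id="example_elt",            # hypothetical name
                 start_date=datetime(2024, 1, 1),
                 schedule="@daily",
                 catchup=False) as dag:
            t_extract = PythonOperator(task_id="extract", python_callable=extract)
            t_transform = PythonOperator(task_id="transform", python_callable=transform)
            t_load = PythonOperator(task_id="load", python_callable=load)
            t_extract >> t_transform >> t_load    # linear ELT dependency chain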

  • · 71 views · 4 applications · 17d

    Data Engineer

    Full Remote · Ukraine · Product · 3 years of experience · English - None

    About us:
    Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 with uniting top AI talents and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.

    About the client:
    Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.

    About the role:
    We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel the Ukrainian LLM and NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling the Data Scientists and ML Engineers to develop cutting-edge language models.

    You will work at the intersection of data engineering and machine learning, ensuring that the datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context.

    Requirements:
    - Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
    - NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, is an advantage given the project’s focus, as is an understanding of FineWeb2 or a similar processing-pipeline approach.
    - Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
    - Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
    - Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as the NLP applications may require embedding storage and fast similarity search.
    - Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
    - Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
    - Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

    Nice to have:
    - Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
    - Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
    - CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
    - Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
    - Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve the workflows.

    Responsibilities:
    - Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information.
    - Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
    - Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to the language modeling efforts.
    - Implement NLP/LLM-specific data processing: cleaning and normalization of text, such as filtering of toxic content, de-duplication, de-noising, and detection and removal of personal data (a small dedup sketch follows this list).
    - Form specific SFT/RLHF datasets from existing data, including data augmentation/labeling with an LLM as a teacher.
    - Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
    - Automate data processing workflows and ensure their scalability and reliability.
    - Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
    - Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs.
    - Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
    - Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models.
    - Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
    - Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
    - Manage data security, access, and compliance.
    - Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
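
    For the cleaning/de-duplication step above, a minimal sketch (exact dedup only; production pipelines in the FineWeb2 style typically add fuzzy/MinHash dedup on top):

        import hashlib
        import re
        import unicodedata

        def normalize(text: str) -> str:
            """Unicode-normalize, lowercase, and collapse whitespace."""
            text = unicodedata.normalize("NFC", text)
            return re.sub(r"\s+", " ", text).strip().lower()

        def dedupe(docs: list[str]) -> list[str]:
            """Keep the first occurrence of each normalized document."""
            seen: set[str] = set()
            kept = []
            for doc in docs:
                digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
                if digest not in seen:
                    seen.add(digest)
                    kept.append(doc)
            return kept

        print(len(dedupe(["Привіт  світ", "привіт світ", "інший документ"])))  # 2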

    The company offers:
    - Competitive salary.
    - Equity options in a fast-growing AI company.
    - Remote-friendly work culture.
    - Opportunity to shape a product at the intersection of AI and human productivity.
    - Work with a passionate, senior team building cutting-edge tech for real-world business use.

  • · 85 views · 14 applications · 17d

    Data Analyst

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B1

    The role

    We’re looking for a proactive Data Analyst who is passionate about analytics and finding growth opportunities! If you’re skilled at spotting trends, uncovering real insights behind the numbers, and improving user engagement with the product - this role is for you.

     

    Your key responsibilities for the coming months

    • Analyze user behavior to identify trends and growth opportunities.
    • Build reports in Tableau to monitor key metrics.
    • Work on recommendations to improve effectiveness.
    • Automate data collection and processing using SQL queries and scripts.
    • Help to set up and optimize user tracking.

       

    Key requirements

    • 3+ years of experience in data analytics (Data Analyst / Product Analyst role) within web and desktop products.
    • Expertise with subscription-based monetization models.
    • Strong knowledge of SQL and Python for data analysis.
    • A/B testing experience.
    • Ability to calculate predictive LTV (a toy model follows this list).
    • Advanced skills in BI tools (Tableau, Power BI, etc.).
    • Experience in setting up and optimizing user activity tracking.
    • Ability to communicate/interpret complex data analysis to both technical and non-technical colleagues.
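
    One common way to frame the predictive-LTV requirement (a hedged sketch using a simple geometric-retention model; the team’s actual approach may be more sophisticated):

        # If a subscriber renews each period with probability r, the expected
        # number of paid periods is 1 / (1 - r), so pLTV = ARPU / churn.
        def predictive_ltv(arpu: float, retention_rate: float) -> float:
            churn = 1.0 - retention_rate
            if churn <= 0:
                raise ValueError("retention_rate must be below 1.0")
            return arpu / churn

        print(predictive_ltv(arpu=9.99, retention_rate=0.8))  # 49.95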

     

    Nice to have

    • Experience with BigQuery and other Google Cloud services for data analysis.
    • Experience with DBT.

     

    What we offer for your success

    • Very warm and friendly working environment and flexible working schedule.
    • 20 days off + paid national holidays, and 12 sick days paid by the company per year.
    • Medical insurance, and health protection programs (with COVID-19 and dental coverage).
    • Continuous professional development and growth opportunities.
  • · 18 views · 1 application · 17d

    Senior Snowflake Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2

    The project is for one of the world's most famous science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

    We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like DBT, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.

    • Responsibilities:

      • Apply in-depth knowledge of Snowflake's data warehousing capabilities.
      • Understand Snowflake's virtual warehouse architecture and how to optimize performance and cost.
      • Use Snowflake's data sharing and integration features for seamless collaboration.
      • Develop and optimize complex SQL scripts, stored procedures, and data transformations (a connector sketch follows this list).
      • Work closely with data analysts, architects, and business teams to understand requirements and deliver reliable data solutions.
      • Implement and maintain data models and dimensional modeling for data warehouses, data marts, and star/snowflake schemas to support reporting and analytics.
      • Integrate data from various sources, including APIs, flat files, relational databases, and cloud services.
      • Ensure data quality, data governance, and compliance standards are met.
      • Monitor and troubleshoot performance issues, errors, and pipeline failures in Snowflake and associated tools.
      • Participate in code reviews, testing, and deployment of data solutions in development and production environments.
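
    A minimal sketch of running a parameterized query from Python with the official Snowflake connector (all connection values are placeholders, and the orders table is hypothetical):

        import snowflake.connector  # pip install snowflake-connector-python

        # Placeholders only; real credentials belong in a secrets manager.
        conn = snowflake.connector.connect(user="USER", password="***",
                                           account="ACCOUNT", warehouse="WH",
                                           database="DB", schema="PUBLIC")
        try:
            cur = conn.cursor()
            # The connector's default pyformat parameter style.
            cur.execute("SELECT COUNT(*) FROM orders WHERE region = %s", ("EU",))
            print(cur.fetchone()[0])
        finally:
            conn.close()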

    • Mandatory Skills Description:

      • 5+ years of experience.
      • Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
      • Ability to write complex SQL queries, stored procedures, and user-defined functions.
      • Skills in optimizing SQL queries for performance and efficiency.
      • Experience with ETL/ELT tools and techniques, including Snowpipe, AWS Glue, Openflow, Fivetran, or similar tools for real-time and periodic data processing.
      • Proficiency in transforming data within Snowflake using SQL, with Python being a plus.
      • Strong understanding of data security, compliance, and governance.
      • Experience with DBT for database object modeling and provisioning.
      • Experience with version control tools, particularly Azure DevOps.
      • Good documentation and coaching practices.

  • · 92 views · 17 applications · 17d

    Senior AI Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B1 MilTech 🪖

    Are you passionate about safeguarding countries, societies, and businesses from information threats? Look no further! Our rapidly scaling company is passionately committed to this crucial mission, offering you a unique opportunity to collaborate with some of the world’s most influential organizations, including NATO and the EU. As a Ukrainian team, we are determined to deliver substantial and meaningful change.

    If you’re ready to join a dynamic team working toward a safer digital world, we invite you to be part of our journey. Shape the future with us and help defend against online threats with purpose and innovation.

     

    Role Overview

    We’re looking for a Senior AI Engineer with a solid engineering background and hands-on AI experience. In this role, you will help design and implement innovative AI-powered solutions that are core to our platform’s mission. You’ll work across the stack to build and integrate AI capabilities that are both scalable and practical.

    You won’t need to build everything from scratch, but you’ll need the skills to fine-tune, adapt, and creatively apply state-of-the-art models and tools to solve real-world problems. Your responsibilities will range from working on backend services to shaping AI agents and LLM-based pipelines for production use.

     

    Key Responsibilities

    • Understand business challenges and generate innovative ideas to address them using AI and machine learning.
    • Research, evaluate, and adapt existing deep learning and machine learning models for practical applications.
    • Develop and maintain services for data enrichment using LLMs (a skeletal example follows this list).
    • Implement AI-powered summarization and mapping tools.
    • Design and build AI agents for dynamic use cases.
    • Monitor industry advancements and identify opportunities to leverage the latest tools and techniques.
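
    A skeletal example of the enrichment-service bullet above (FastAPI, which appears in the requirements below; the summarize function is a stub standing in for a real LLM call):

        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI()

        class Document(BaseModel):
            text: str

        def summarize(text: str) -> str:
            return text[:80]  # stub: a real service would call an LLM here

        @app.post("/enrich")
        async def enrich(doc: Document) -> dict:
            return {"summary": summarize(doc.text), "chars": len(doc.text)}

        # Run with: uvicorn main:app --reload  (assuming this file is main.py)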

     

    Requirements

    Engineering Skills:

    • Strong Python skills
    • Experience with FastAPI, Docker, and asynchronous programming
    • Good knowledge of DBs, including PostgreSQL and Elasticsearch

    AI Skills:

    • At least 3+ years of hands-on experience in AI/ML (more is a plus)
    • Experience working with LLMs, RAG, and AI agents in production
    • Strong understanding of transformer architectures
    • Practical experience with prompt engineering
    • Knowledge of LLM infrastructure and deployment
    • Experience with deep learning frameworks such as PyTorch or TensorFlow

     

    We Offer

    • Opportunity to contribute to a mission-driven startup and support prestigious clients, including governments worldwide, enterprise clients, and leading NGOs in addressing information threats and tackling security challenges.
    • Autonomy and freedom to drive experiments and bring your own ideas to life.
    • Flexibility of fully remote work.
    • Flexible, unlimited time-off policy.
  • · 86 views · 18 applications · 17d

    AI Engineer

    Full Remote · Worldwide · Product · 1 year of experience · English - B2

    Company Overview:
    Webxloo, LLC is an international software development company founded in 2005, with headquarters in Florida, USA, and a branch in Ukraine. Our products, including Autoxloo and Simulcast, focus on business automation and real-time systems in the automotive industry. We are expanding and seeking a highly skilled AI & PHP Backend Developer to join our team.
     

    Role Overview:
    As an AI & PHP Backend Developer at Webxloo, you will play a dual role: designing and implementing AI-based automation solutions while developing and maintaining robust backend systems using PHP. You will collaborate with cross-functional teams to enhance our product offerings, ensure system scalability, and deliver intelligent automation solutions that provide real business value.
     

    Key Responsibilities:

    • Design, implement, and maintain AI-driven automation workflows to optimize internal and external processes.
    • Build intelligent automation using machine learning, LLMs, and low-code/no-code tools such as n8n, Zapier, or Power Automate.
    • Integrate AI features into enterprise platforms and software products.
    • Develop and maintain backend modules using PHP frameworks such as YII2, Laravel, or Symfony.
    • Work with databases including MySQL/MariaDB, MongoDB, Redis, and ensure efficient data management.
    • Implement clean, maintainable code following OOP principles, SOLID, DRY, KISS, YAGNI, and PSR standards.
    • Collaborate with front-end developers, QA, design, and support teams to deliver high-quality features.
    • Monitor, debug, and enhance existing codebases to ensure system reliability, performance, and security.
       

    Required Skills & Qualifications:

    • 1+ year of experience in AI development, workflow automation, or backend PHP development.
    • Solid experience with PHP frameworks (YII2, Laravel, Symfony) and OOP principles.
    • Proficiency with MySQL/MariaDB, MongoDB, and caching/message queue systems (Redis, RabbitMQ).
    • Experience with RESTful APIs, XML, DOM, CURL, and Regex.
    • Familiarity with AI/ML tools, LLMs, and enterprise automation platforms.
    • Strong analytical skills and attention to detail.
    • English proficiency (intermediate or higher) for technical documentation and collaboration.
    • University degree in Computer Science, Engineering, or a related field.
       

    Preferred/Advantageous Experience:

    • Experience with cloud platforms, Docker, Kubernetes, or serverless architectures.
    • Experience in automotive, healthcare, or digital commerce industries.
    • Knowledge of frontend frameworks (React, Vue.js) is a plus.
       

    What We Offer:

    • Competitive salary with regular reviews.
    • Standard working schedule with no overtime (9:00 - 18:00, lunch break 13:00 - 14:00).
    • Exposure to international projects with various AI and backend development challenges.
    • Career growth, skill expansion, and leadership opportunities.
    • Collaborative team environment with experienced professionals.
       

    Technical Requirements:

    • Computer or laptop with at least 16GB of RAM.
    • High-speed internet (minimum 20 Mbps).
    • Headset and webcam for regular video calls.
       

    If you are interested in this opportunity, we encourage you to apply now.

     

  • · 112 views · 11 applications · 17d

    Trading Operations Analyst (Heavy Python focused)

    Part-time · Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - None

    Location: Remote (EU)
    Preferred: Warsaw/London (regular meetings in the city centre are highly appreciated)
    Type: Freelance / Contract (B2B)
    Compensation: Hourly or per-task (negotiated)

     

    About

    Hi, I’m Bohdan:)

     

    I’m setting up a new multi-asset trading desk and I’m looking for a Trading Operations Analyst to help with analytical and operational tasks around trading data. This role is research/support-focused: you will work with datasets, reports, and post-trade analytics to help decisions be made faster and with higher confidence.

     

    What you’ll do

    • Perform post-trade analysis and prepare concise reports with tables and charts (a pandas sketch follows this list)
    • Break down performance by:
      • instruments/tickers
      • time windows
      • trade reasons/tags (if provided)
      • holding time / entry quality buckets (if applicable)
    • Detect anomalies and data-quality issues (missing fields, outliers, inconsistent logs)
    • Produce clear visualisations and summaries that can be used in decision-making
    • Communicate findings clearly and defend conclusions when challenged
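
    A minimal sketch of the per-ticker/per-window breakdown above (pandas; the columns are made up, and real trade logs will differ):

        import pandas as pd

        # Toy post-trade log standing in for the desk's real fills.
        trades = pd.DataFrame({
            "ticker": ["AAPL", "AAPL", "EURUSD", "EURUSD"],
            "pnl": [120.0, -45.0, 30.5, -10.0],
            "opened": pd.to_datetime(["2024-05-01 09:31", "2024-05-01 14:02",
                                      "2024-05-01 09:45", "2024-05-02 10:10"]),
        })
        trades["hour"] = trades["opened"].dt.hour

        print(trades.groupby("ticker")["pnl"].agg(["sum", "mean", "count"]))
        print(trades.groupby("hour")["pnl"].sum())  # time-window breakdown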

     

    Requirements

    • Strong statistical analysis skills (practical, not academic)
    • Strong Python for data analysis (pandas / numpy; polars is a plus)
    • Strong data visualisation skills (matplotlib / plotly; ability to present insights)
    • Ability to state conclusions under pressure (clear, direct, defendable)
    • High attention to detail and strong ownership of output quality
    • Ukrainian PE (FOP) or equivalent legal ability to provide services (B2B)
    • Must have full legal right to provide services β€” no visa sponsorship

     

    Note: Strong analytical thinking and clarity of reasoning are valued more than years of formal work experience.

     

    Nice to have

    • Prior experience with trading / market data
    • Familiarity with performance metrics (PnL attribution, drawdown, slippage concepts)
    • Availability for regular Warsaw/London meetups

     

    Compensation

    I’m open to negotiating pay depending on task difficulty:

    • hourly rate, or
    • fixed price per deliverable

     

    Selection process

    1. Intro call (fit + expectations + availability) + Light analytical task (quick, practical)
    2. Post-trade analysis test (more realistic, based on sample data)

     

    Disclaimer

    • Contract / freelance only
    • No visa sponsorship
    • Long-term collaboration possible if the fit is strong
  • · 301 views · 73 applications · 17d

    Intern Business Analyst

    Full Remote · Worldwide · Product · English - B1

    At DICEUS, we’re inviting motivated individuals to join us as Intern Business Analysts and start their journey in software delivery and product development.

    This internship is a great opportunity to gain hands-on experience working with enterprise-level solutions in the insurance domain, collaborate with cross-functional teams, and learn Agile practices, requirement analysis, and documentation preparation. You will work closely with experienced BAs and development teams to understand how real digital products are built and delivered.

    What You’ll Be Learning

    • Fundamentals of business analysis
    • How to collect, document, and analyze business and technical requirements
    • Preparing documentation: user stories, specifications, requirements, presentations
    • Working with tools such as Jira, Confluence, and Trello
    • Participation in project meetings: daily stand-ups, planning, demos
    • Understanding and applying SDLC in practice
    • Communication with clients and internal stakeholders

    What We Expect From You

    • Basic understanding of software development processes
    • Strong analytical and communication skills
    • Interest in business analysis, IT consulting, and working with requirements
    • Responsibility, attention to detail, willingness to learn and grow
    • English level β€” at least Intermediate for communication and documentation

    What We Offer

    • Hands-on business analysis experience on real IT projects
    • Opportunity to work with enterprise-level products in the insurance industry
    • Supportive and professional environment with continuous learning culture
    • Potential career growth to Junior Business Analyst after successful internship completion
  • · 54 views · 6 applications · 17d

    Uniface Developer / SQL Engineer (Oracle env) to $5000

    Full Remote · Worldwide · 1 year of experience · English - B2

    Our client is a stable Swiss software company with over 35 years of history. They build reliable IT solutions for the public sector and various industries. 

    You will be working with a team of experienced specialists on a solid, interesting project. We value a healthy process: this means no overtime, no constant "fires" to put out, and a calm working pace.

    Compensation: Salary in EUR or USD.
     

    The Role: 
    We are looking for an experienced developer to join the Customer Solutions team. The main focus is maintaining and developing business applications using Uniface. This is a 100% remote position.

    What you will do:

    • Develop and maintain software (UI, business logic, functions) primarily using Uniface.
    • Work with database structures (SQL).
    • Handle the full development cycle: from specs to release.
    • Fix bugs and help with 3rd level support when needed.
    • Collaborate with the team on technical concepts.

     

    Requirements (Must Have):

    • Strong experience with Uniface. We need someone who already knows this environment well, not just someone willing to learn it.
    • Solid knowledge of SQL, preferably with Oracle databases.
    • Good English (written and spoken) is a must.
    • Ready to start ASAP.

     

    Nice to have (but not critical):

    • Experience with other technologies like Angular, JavaScript, HTML, .NET, or C#.
    • Background in administrative or municipal software solutions.