Jobs

  • 49 views · 4 applications · 14d

    AI Engineer with Data Engineering and Machine Learning Expertise

    Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary
    ΠΠΌΠ΅Ρ€ΠΈΠΊΠ°Π½ΡΡŒΠΊΠ° Π†Π’-компанія ΡˆΡƒΠΊΠ°Ρ” AI Engineer Π·Ρ– знаннями машинного навчання Ρ‚Π° Ρ–Π½ΠΆΠ΅Π½Π΅Ρ€Ρ–Ρ— Π΄Π°Π½ΠΈΡ…. ΠžΠ±ΠΎΠ²β€™ΡΠ·ΠΊΠΈ: Ρ€ΠΎΠ·Ρ€ΠΎΠ±ΠΊΠ° ML-ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ, ΠΏΠΎΠ±ΡƒΠ΄ΠΎΠ²Π° Π΄Π°Ρ‚Π°-ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Ρ–Π², очищСння ΠΉ інтСграція Π΄Π°Π½ΠΈΡ…. Π’ΠΈΠΌΠΎΠ³ΠΈ: Python (Pandas, NumPy, TensorFlow Π°Π±ΠΎ PyTorch), SQL, досвід...

    ΠΠΌΠ΅Ρ€ΠΈΠΊΠ°Π½ΡΡŒΠΊΠ° Π†Π’-компанія ΡˆΡƒΠΊΠ°Ρ” AI Engineer Π·Ρ– знаннями машинного навчання Ρ‚Π° Ρ–Π½ΠΆΠ΅Π½Π΅Ρ€Ρ–Ρ— Π΄Π°Π½ΠΈΡ….

     

    ΠžΠ±ΠΎΠ²β€™ΡΠ·ΠΊΠΈ: Ρ€ΠΎΠ·Ρ€ΠΎΠ±ΠΊΠ° ML-ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ, ΠΏΠΎΠ±ΡƒΠ΄ΠΎΠ²Π° Π΄Π°Ρ‚Π°-ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Ρ–Π², очищСння ΠΉ інтСграція Π΄Π°Π½ΠΈΡ….

     

    Requirements:

    • Python (Pandas, NumPy, TensorFlow or PyTorch), SQL, and experience with ETL processes. Experience with AWS / GCP, Airflow, Spark, or Docker is a plus.
    • Only candidates with a completed relevant university degree are considered (Statistics, Mathematics, Applied Mathematics, Software Engineering, Information Technology, Computer Science, Computer Engineering, Systems Analysis, Cybersecurity, Information Systems, Automation and Computer-Integrated Technologies, Electronics, Telecommunications, Cybernetics);
    • Relevant courses, schools, and self-study are welcome, but candidates without a relevant degree are not considered;
    • Ability to dedicate five days a week to the job;
    • English at A2 or above, for correspondence with American colleagues; fluency is NOT required.

     

    We offer:

    • Remote work from any city in Ukraine;
    • Five days a week, 7-8 hours a day on average; tasks for the day are set at 9:00 and results are reviewed at 19:00; during the day the schedule is flexible;
    • The rate depends on experience; overtime is paid.

     

    ΠΠ°Π΄Ρ–ΡˆΠ»Ρ–Ρ‚ΡŒ Ρ€Π΅Π·ΡŽΠΌΠ΅ Ρ€Π°Π·ΠΎΠΌ Ρ–Π· ΠΊΠΎΡ€ΠΎΡ‚ΠΊΠΈΠΌ супровідним листом: Ρ‡ΠΎΠΌΡƒ Π²Π²Π°ΠΆΠ°Ρ”Ρ‚Π΅, Ρ‰ΠΎ ΡΠ°ΠΌΠ΅ Π²ΠΈ ΠΏΡ–Π΄Ρ…ΠΎΠ΄ΠΈΡ‚Π΅ для Ρ†Ρ–Ρ”Ρ— Ρ€ΠΎΠ»Ρ–. ΠΠ½Π³Π»Ρ–ΠΉΡΡŒΠΊΠΎΡŽ. Π”ΠΎΠ΄Π°ΠΉΡ‚Π΅, Π±ΡƒΠ΄ΡŒ ласка, посилання Π½Π° ΠΏΠΎΡ€Ρ‚Ρ„ΠΎΠ»Ρ–ΠΎ Π°Π±ΠΎ GitHub Ρ–Π· ΠΏΡ€ΠΈΠΊΠ»Π°Π΄Π°ΠΌΠΈ ΠΏΡ€ΠΎΡ”ΠΊΡ‚Ρ–Π² Ρƒ ΡΡ„Π΅Ρ€Ρ– machine learning / data engineering.

     

    Job Title: AI Engineer with Data Engineering & Machine Learning Expertise

     

    About Us:

    We're building the next generation of intelligent systems that power key decision-making processes for businesses across various industries. Our work combines advanced AI, data engineering, and cutting-edge business intelligence tools to create solutions that unlock data-driven insights.

    We are looking for an AI Engineer who is not only proficient in machine learning techniques but also has a solid understanding of data engineering principles. The ideal candidate will have hands-on experience in data pipeline creation, ETL processes, and data cleansing & harmonization. You will play a key role in developing intelligent models that enable our team to make strategic, data-backed decisions.

     

    What You'll Do:

    Design and implement AI models and algorithms using machine learning techniques (e.g., decision trees, neural networks, and nonparametric models).

    Collaborate with the data engineering team to design, build, and maintain data pipelines for real-time and batch processing.

    Develop and optimize ETL processes for data integration and transformation from multiple sources.

    Cleanse and harmonize data from disparate sources to ensure accuracy and consistency.

    Build automated workflows for data cleansing, feature engineering, and model deployment (see the sketch below).

    Partner with business intelligence teams to create data-driven solutions that provide actionable insights for decision-makers.

    Continuously evaluate and improve model performance by iterating based on feedback and new data.
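
    The items above (ETL, cleansing, feature engineering, training) compose naturally into one scikit-learn pipeline. Below is a minimal, hypothetical sketch of that idea; the toy columns (monthly_spend, tenure_months, plan, churned) are invented for illustration and are not part of this posting:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Toy data standing in for a real SQL/ETL extract.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "monthly_spend": rng.normal(100, 30, n),
        "tenure_months": rng.integers(1, 60, n),
        "plan": rng.choice(["basic", "pro", "enterprise"], n),
        "churned": rng.integers(0, 2, n),
    }).drop_duplicates()
    df.loc[df.sample(frac=0.05, random_state=0).index, "plan"] = np.nan  # simulate dirty data

    numeric, categorical = ["monthly_spend", "tenure_months"], ["plan"]
    preprocess = ColumnTransformer([
        ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), numeric),
        ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                          ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical),
    ])
    model = Pipeline([("prep", preprocess),
                      ("clf", RandomForestClassifier(n_estimators=200, random_state=0))])

    X_train, X_test, y_train, y_test = train_test_split(
        df[numeric + categorical], df["churned"], test_size=0.2, random_state=0)
    model.fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))
    ```

    Keeping cleansing and feature steps inside the pipeline means the same transformations run at training and inference time, which is what makes such workflows reproducible and deployable.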

     

    Key Skills & Requirements:

    Machine Learning: Strong experience with supervised and unsupervised learning algorithms (decision trees, neural networks, random forests, k-nearest neighbors, etc.).

    Data Engineering: Solid understanding of ETL pipelines, data integration, transformation, and cleansing using tools like Apache Airflow, dbt, Talend, or similar.

    Programming Languages: Proficiency in Python (with libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch).

    Data Structures & Algorithms: Familiarity with data structures, model evaluation techniques, and optimization strategies.

    Data Analysis: Ability to perform exploratory data analysis (EDA) and feature selection using statistical methods.

    Business Intelligence: Familiarity with BI tools such as Power BI, Tableau, or Looker, and ability to transform analytical results into meaningful visualizations.

    Cloud Platforms: Experience with cloud technologies such as AWS, GCP, or Azure for model deployment and data pipeline management.

    SQL: Strong knowledge of relational databases and SQL for querying and manipulating large datasets.

    Problem Solving & Communication: Ability to analyze complex data issues, communicate technical concepts to non-technical stakeholders, and work well in a collaborative environment.

     

    Nice to Have:

    Experience with nonparametric models (e.g., kernel methods, k-NN) and model selection.

    Familiarity with big data frameworks like Hadoop or Spark.

    Exposure to DevOps practices for model deployment (Docker, Kubernetes, CI/CD).

     

    Why Join Us?

    Innovative Culture: Be a part of a forward-thinking company focused on cutting-edge AI and data-driven solutions.

    Growth Opportunities: Continuous learning with access to new technologies, training, and career development resources.

    Collaborative Environment: Work alongside talented engineers, data scientists, and business analysts in a team-oriented culture.

    Flexible Work Arrangements: Fully remote with a distributed team to support work-life balance.

     

    How to Apply:

    If you're passionate about AI and data engineering, we want to hear from you! Submit your resume, along with a brief cover letter explaining why you're a great fit for the role. Please include any relevant portfolio or GitHub links showcasing your machine learning or data engineering projects.

  • 19 views · 3 applications · 14d

    Data Architect - GCP

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B1 - Intermediate

    About JUTEQ

    JUTEQ is an AI-native and cloud-native consulting firm helping enterprises in financial services, telecom, and automotive retail build intelligent, production-grade platforms. We combine the power of GenAI, scalable cloud architecture, and automation to deliver next-generation business tools. Our platform supports multi-tenant AI agent workflows, real-time lead processing, and deep analytics pipelines.

    We are seeking an experienced Data Architect with deep Google Cloud Platform (GCP) experience to lead our data lake, ingestion, observability, and compliance infrastructure. This role is critical to building a production-grade, metadata-aware data stack aligned with SOC2 requirements.

    What You'll Do

    Data Architecture & Lakehouse Design

    • Architect and implement a scalable GCP-based data lake across landing, transformation, and presentation zones.
    • Use native GCP services such as GCS, Pub/Sub, Apache Beam, Cloud Composer, and BigQuery for high-volume ingestion and transformation (see the sketch after this list).
    • Design and implement infrastructure landing zones using Terraform with strong IAM boundaries, secrets management, and PII protection.
    • Build ingestion pipelines using Apache NiFi (or equivalent) to support batch, streaming, and semi-structured data from external and internal systems.
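
    A hedged, locally runnable stand-in for the Pub/Sub-to-BigQuery ingestion mentioned in the list above: beam.Create replaces beam.io.ReadFromPubSub and a printing step replaces beam.io.WriteToBigQuery so the flow runs on the DirectRunner, and the event fields are invented:

    ```python
    import json
    import apache_beam as beam

    # Invented raw events standing in for Pub/Sub messages.
    raw_events = [b'{"lead_id": "L1", "score": 0.9}', b'{"lead_id": "L2", "score": 0.4}']

    with beam.Pipeline() as p:  # DirectRunner by default
        (
            p
            | "Ingest" >> beam.Create(raw_events)  # in production: beam.io.ReadFromPubSub(...)
            | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
            | "Filter" >> beam.Filter(lambda row: row["score"] >= 0.5)
            | "Sink" >> beam.Map(print)  # in production: beam.io.WriteToBigQuery(...)
        )
    ```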

    Data Ingestion & Integration

    • Develop robust ingestion patterns for CRM, CDP, and third-party sources via APIs, file drops, or scraping.
    • Build real-time and batch ingestion flows with schema-aware validation, parsing, and metadata handling.
    • Implement transformation logic and ensure staging → curated flow adheres to quality, performance, and lineage standards.

    Metadata & Lineage Management

    • Define and enforce metadata templates across all sources.
    • Establish data lineage tracking from ingestion to analytics using standardized tools or custom solutions.
    • Drive schema mapping, MDM support, and data quality governance across ingestion flows.

    SRE & Observability for Data Pipelines

    • Implement alerting, logging, and monitoring for all ingestion and transformation services using Cloud Logging, Cloud Monitoring, OpenTelemetry, and custom dashboards.
    • Ensure platform SLAs/SLOs are tracked and incidents are routed to lightweight response workflows.
    • Support observability for cloud functions, GKE workloads, and Cloud Run-based apps interacting with the data platform.

    Security & Compliance

    • Enforce SOC2 and PII compliance controls: IAM policies, short-lived credentials, encrypted storage, and access logging.
    • Collaborate with security teams (internal/external) to maintain audit readiness.
    • Design scalable permissioning and role-based access for production datasets.

    What We're Looking For

    Core Experience

    • 5+ years in data engineering or architecture roles with strong GCP experience.
    • Deep familiarity with GCP services: BigQuery, Pub/Sub, Cloud Storage, Cloud Functions, Dataflow/Apache Beam, Composer, IAM, and Logging.
    • Expertise in Apache NiFi or similar ingestion/orchestration platforms.
    • Experience with building multi-environment infrastructure using Terraform, including custom module development.
    • Strong SQL and schema design skills for analytics and operational reporting.

    Preferred Skills

    • Experience in metadata management, MDM, and schema evolution workflows.
    • Familiarity with SOC2, GDPR, or other data compliance frameworks.
    • Working knowledge of incident response systems, alert routing, and lightweight ITSM integration (JIRA, PagerDuty, etc.).
    • Experience with data lineage frameworks (open-source or commercial) is a plus.
    • Exposure to graph databases or knowledge graphs is a plus but not required.

    Why Join Us

    • Help design a full-stack, production-grade data infrastructure from the ground up.
    • Work in a fast-paced AI-driven environment with real product impact.
    • Contribute to a platform used by automotive dealerships across North America.
    • Be part of a high-trust, hands-on team that values autonomy and impact.
  • 50 views · 6 applications · 13d

    Data Scientist to $6000

    Full Remote · Ukraine · 3 years of experience · B2 - Upper Intermediate

    The Business Data Science team at Noom (www.noom.com) is seeking a mid-level to senior Data Scientist Contractor to build and maintain high-impact models that drive business understanding, forecast long-term value, and evaluate product and marketing initiatives.

    Noom is more than just a health-tech company: it's a mission-driven organization that has helped over 50 million users worldwide build healthier habits through science-backed behavioral psychology. Backed by $650M+ in funding and recognized by Forbes as one of America's Best Startup Employers, Noom is scaling rapidly. If you're looking to work on a product that genuinely changes lives, this is the place.
     

    "Product" at Noom covers all of their in-app programs and features (including Noom Med!), as well as their Growth efforts (i.e., how they position and price their different product offerings).
     

    Key Responsibilities:

    • LTV/CRM Modeling & Data Infrastructure

      - Build, develop, and maintain robust LTV and CRM models, data pipelines, and dashboards (see the sketch after this list)

      - Improve model accuracy and scalability as the business evolves

      - Debug and optimize data flows to ensure reliability and reproducibility

    • Strategic Analysis & Insights

      - Proactively identify and investigate LTV/CRM trends and key performance drivers

      - Evaluate unit economics and the impact of product or marketing changes

      - Conduct deep-dive analyses to uncover root causes of anomalies and provide actionable explanations

    • Stakeholder Enablement

      - Partner with Growth, Marketing, and Finance to deliver insights that guide investment and product decisions

      - Communicate findings and model outputs clearly to non-technical audiences, both in writing and live discussions

      - Craft compelling data narratives that explain business performance, risks, and opportunities
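
    For flavor, a deliberately simple, hypothetical sketch of the retention-based arithmetic behind LTV models like those described above; the ARPU and retention numbers are invented and are not Noom's:

    ```python
    import numpy as np

    arpu = 30.0               # assumed average revenue per user per month
    monthly_retention = 0.85  # assumed flat monthly retention rate
    horizon_months = 24

    # Expected cumulative revenue per user: sum of arpu * retention^t, t = 0..horizon-1.
    survival = monthly_retention ** np.arange(horizon_months)
    ltv = float(arpu * survival.sum())
    print(f"Projected {horizon_months}-month LTV per user: ${ltv:.2f}")
    ```

    Production LTV models replace the flat-retention assumption with cohort-level curves or probabilistic models, but the survival-weighted revenue structure stays the same.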
       

    Requirements:

    • 3+ years of experience in a Data Science or Quantitative Analytics role
    • Advanced proficiency in SQL and Python (pandas, NumPy, scikit-learn, statsmodels)
    • Experience in statistical modeling, predictive analytics, or machine learning
    • Strong ability to translate technical work into clear business implications
    • Strong English level: ability to collaborate with both technical and non-technical stakeholders
    • Ability to perform business modeling and conduct robust statistical analysis

       

    What We Offer:

    • Strong goal-oriented team, and a research mindset
    • Opportunity to leverage your engineering skills for fellow engineers and shape the future of AI
    • Working with the newest technical equipment
    • 20 working days of annual vacation leave
    • English courses, Educational Events & Conferences
    • Medical insurance
  • 36 views · 18 applications · 12d

    Data Scientist

    Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    Project
    Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS and Freight Spend Analytics.

    Overview
    We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is on outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.
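
    A minimal sketch of the unsupervised outlier-flagging step described above, using scikit-learn's IsolationForest on synthetic invoice features (the column names and injected anomalies are invented for illustration):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # Synthetic stand-in for invoice data that would normally be pulled from SQL.
    rng = np.random.default_rng(0)
    invoices = pd.DataFrame({
        "total_charge": rng.normal(500, 50, 1000),
        "weight_kg": rng.normal(200, 20, 1000),
    })
    invoices.loc[::200, "total_charge"] *= 5  # inject a few high-value exceptions

    detector = IsolationForest(contamination=0.01, random_state=0)
    invoices["is_outlier"] = detector.fit_predict(invoices[["total_charge", "weight_kg"]]) == -1
    print(invoices[invoices["is_outlier"]])
    ```

    Flagged rows would then be routed into the business-defined exception groups (e.g., High Value Exceptions) for analyst review.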

    Key Responsibilities

    • Apply statistical methods and machine learning techniques for outlier and anomaly detection.
    • Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
    • Extract, preprocess, and transform large datasets directly from SQL databases.
    • Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
    • Collaborate with business analysts to align analytical approaches with domain requirements.
    • Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
    • Ensure models are interpretable, scalable, and deliver actionable insights.

    Requirements

    • Strong foundation in statistics and probability theory.
    • Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
    • Proven experience with outlier/anomaly detection techniques.
    • Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
    • Strong SQL skills for working with large datasets.
    • Ability to communicate findings effectively to both technical and non-technical stakeholders.

    Nice to Have

    • Experience with ML frameworks (TensorFlow, PyTorch).
    • Familiarity with MLOps practices and model deployment.
    • Exposure to logistics, supply chain, or financial data.
    • Knowledge of cloud platforms (AWS, GCP, Azure).
  • 40 views · 10 applications · 12d

    Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B1 - Intermediate

    We're looking for a Machine Learning Engineer with a strong background in Computer Vision and Generative AI to join our R&D team. You'll build and optimize pipelines for virtual try-on, pose-guided image generation, and garment transfer systems using cutting-edge diffusion and vision models.
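
    As a rough sketch of the diffusion side of such a pipeline (not this team's actual stack), here is a garment-region inpainting pass with Hugging Face diffusers; the model id is a public checkpoint, and the images are blank stand-ins for a real photo and a human-parsing mask:

    ```python
    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    person = Image.new("RGB", (512, 512), "gray")   # stand-in for a person photo
    garment_mask = Image.new("L", (512, 512), 255)  # stand-in for a segmentation mask

    result = pipe(
        prompt="a red knit sweater, photorealistic",
        image=person,
        mask_image=garment_mask,
        num_inference_steps=30,
    ).images[0]
    result.save("try_on.png")
    ```

    A real virtual try-on system would add pose/depth conditioning (e.g., ControlNet) and garment warping before inpainting, as the skill list below suggests.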

     

    Must-Have Skills

    Core ML & Engineering

    • Proficiency in Python and PyTorch (or JAX, but PyTorch preferred)
    • Strong understanding of CUDA and GPU optimization
    • Ability to build exportable, production-ready pipelines (TorchScript, ONNX)
    • Experience deploying REST inference services, managing batching, VRAM, and timeouts

    Computer Vision

    • Hands-on experience with image preprocessing, keypoint detection, segmentation, optical flow, and depth/normal estimation
    • Experience with human parsing & pose estimation using frameworks such as HRNet, SegFormer, Mask2Former, MMPose, or OpenPifPaf
    • Bonus: familiarity with DensePose or UV-space mapping

    Generative Models

    • Strong practical experience with diffusion models (e.g., Stable Diffusion, SDXL, Flux, ControlNet, IP-Adapter)
    • Skilled in inpainting, conditioning on pose, segmentation, or depth maps
    • Understanding of prompt engineering, negative prompts, and fine-tuning for control

    Garment Transfer Pipelines

    • Ability to align source garments to target bodies via pose-guided warping (TPS/thin-plate, flow-based) or DensePose mapping
    • Must ensure preservation of body, skin, hair, and facial integrity

    Data & Experimentation

    • Experience in dataset creation and curation, augmentation, and experiment reproducibility
    • Competence in using W&B or MLflow for experiment tracking and DVC for data versioning

     

    Nice-to-Have

    • Understanding of SMPL rigging/retargeting and cloth simulation (blendshapes, drape heuristics)
    • Experience fine-tuning diffusion models via LoRA or Textual Inversion for brand or style consistency
    • Familiarity with NeRF or Gaussian Splatting (3D try-on and rendering)
    • Experience with model optimization for mobile/edge deployment (TensorRT, xFormers, half-precision, 8-bit quantization)
    • Awareness of privacy, consent, and face-handling best practices

     

    Tools & Frameworks

    • PyTorch, diffusers, xFormers
    • OpenCV, MMDetection, MMSeg, MMPose, or Detectron2
    • DensePose / SMPL toolchains
    • Weights & Biases, MLflow, DVC

     

    We Offer

    • Opportunity to work on cutting-edge generative AI applications in computer vision
    • R&D-focused environment with freedom to explore, test, and innovate
    • Competitive compensation and flexible work structure
    • Collaboration with a team of ML engineers, researchers, and designers pushing boundaries in human-centered AI
  • 15 views · 0 applications · 12d

    Data Science Consultant

    Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate

    This role is well-suited to leaders with a strong mathematics background and a desire to work hands-on with Data Science, AI, and ML; it could be the perfect opportunity to join EPAM as a Senior Manager - Data Science Consultant!
     

    Kindly note that this role supports remote work, but only from within Ukraine.



    Technologies

    • Python, Databricks, Azure ML, Big Data (Hadoop, Spark, Hive, etc.), AWS, Docker, Kubernetes, DB (PL/SQL, HQL, Mongo), Google (Vertex AI) or similar

    Responsibilities

    • Discover, envision and land Data Science, AI and Machine Learning opportunities alongside EPAM teams & clients
    • Lead cross-functional EPAM and/or EPAM clients' teams through the journey of understanding business challenges and defining solutions leveraging AI, Data Science, Machine Learning and MLOps
    • Work with clients to deliver AI Products which provide value to end-users
    • Participate in and drive EPAM competency development, work on new EPAM offerings in the AI, Data Science, ML and MLE space, as well as refine existing offerings
    • Bring your creative engineering mind to deliver real-life practical applications of Machine Learning
    • Work closely with the DevOps practice on infrastructure and release planning

    Requirements

    • Consulting: Experience in exploring business problems and converging them into applied AI technical solutions; expertise in pre-sales and solution definition activities
    • Data Science: 3+ years of hands-on experience with core Data Science, as well as knowledge of one of the advanced Data Science and AI domains (Computer Vision, NLP, Advanced Analytics, etc.)
    • Engineering: Experience delivering applied AI from concept to production; familiarity and experience with MLOps, data, design of data analytics platforms, data engineering, and technical leadership
    • Leadership: Track record of delivering complex AI-empowered and/or AI-empowering programs to clients in a leadership position. Experience managing and growing a team to scale up Data Science, AI & ML capability is a big plus
    • Excellent communication skills (active listening, writing and presentation), drive for problem solving and creative solutions, high EQ

    Nice to have

    • Expertise in one or more business domains (e.g., CPG, Retail, Financial Services, Insurance, Healthcare / Life Sciences)
  • 46 views · 6 applications · 12d

    Senior AI / Machine Learning Engineer

    Part-time · Full Remote · EU · 5 years of experience · B2 - Upper Intermediate

    About the Project
    We are collaborating with a leading Healthcare company developing advanced AI-powered solutions for medical data processing, diagnostics support, and automation. The project focuses on scalable AI deployments with strong compliance and data security standards within the EU.

    The team is seeking a Senior AI / Machine Learning Engineer with solid hands-on experience in Large Language Models (LLMs), model fine-tuning, and cloud-based AI infrastructure. This is a part-time, long-term engagement with flexible working hours and potential for extension.
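
    As a hedged sketch of the FastAPI-plus-LLM serving pattern this role centers on (route, model name, and schema are invented for illustration, not the project's actual API):

    ```python
    from fastapi import FastAPI
    from openai import OpenAI
    from pydantic import BaseModel

    app = FastAPI()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    class Query(BaseModel):
        question: str

    @app.post("/ask")
    def ask(query: Query) -> dict:
        # In a healthcare setting, PII redaction and audit logging would wrap this call.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": query.question}],
        )
        return {"answer": response.choices[0].message.content}
    ```

    Served with, e.g., uvicorn; LangChain or LangGraph components would slot in where the raw client call sits.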

    Key Responsibilities

    • Lead the development, optimization, and maintenance of AI/ML systems.
    • Design, fine-tune, and deploy Large Language Model (LLM) solutions adapted to healthcare use cases.
    • Build and optimize APIs and pipelines using FastAPI, LangChain, and LangGraph.
    • Collaborate closely with cross-functional teams to define and implement AI-driven features.
    • Provide architectural guidance on cloud infrastructure (Azure / AWS) for scalable AI deployments.
    • Stay updated on the latest trends in Generative AI, LLM research, and best engineering practices.
    • Mentor and guide AI/ML engineers on technical and strategic initiatives.

    Requirements

    • 5+ years of experience in AI/ML engineering (preferably 7–10 years).
    • Proven hands-on experience with Large Language Models (LLMs) and model fine-tuning.
    • Strong experience with Python, FastAPI, LangChain, and/or LangGraph.
    • Practical experience with Azure OpenAI or OpenAI APIs.
    • Deep understanding of cloud environments (Azure, AWS) and scalable AI architecture.
    • Experience leading AI/ML projects from prototype to production.
    • Excellent communication skills in English (B2+ level).
    • Ability to work independently and mentor junior team members.
    • Experience in Healthcare projects is a plus (but not mandatory).

    Working Conditions

    • Part-time: approximately 16 hours per week (2–3 working days).
    • Duration: 12 months, with potential extension.
    • Remote work, flexible schedule based on mutual availability.
    • Location: Only candidates based in the European Union or relocated from Ukraine, CIS, Balkans, Asia, or Africa to EU countries (required due to data protection policies).

    Additional Information

    Please include the following details along with your CV:

    • Years of experience in AI/ML:
    • Experience with LLMs and fine-tuning:
    • English level:
    • Current country of residence:
    • Citizenship:
    • Availability to start:
    • Confirmation of part-time (16h/week) availability:
  • 61 views · 2 applications · 11d

    Data Scientist to $3250

    Full Remote · Ukraine · 2 years of experience · B2 - Upper Intermediate

    About Opal Data Consulting: 

    At Opal Data we combine business and technical expertise to create solutions for businesses. Traditional management consultants offer strategic advice without the technical skills to implement their proposed solutions. Software consultants often build tools that aren't truly optimized for a business or organizational need. We combine the best of both worlds.

     

    We do several kinds of projects: building tools to help our clients understand their organizations in real time, building predictive models to improve our clients' operations, and building custom applications. Our clients are typically small to medium-sized companies across industries (typically $2M–$100M revenue) or government agencies with similarly sized annual budgets.

     

    Building a real-time understanding of an organization often involves creating and populating a data warehouse by using APIs, scrapers, or prebuilt connectors to integrate all of a client's systems (ERP, CMS, marketing platforms, accounting systems, etc.), writing ETL scripts in Python or SQL to shape and combine that data (often necessitating the creation of a cloud environment to host serverless functions), and then building visualizations that allow the client to see what is happening in real time (in Tableau, Power BI, Looker, etc.). We often do a significant amount of related analytical work: looking for patterns, identifying areas of improvement, and creating software tools to reinforce those learnings (e.g., building notification systems for operations teams to follow the best practices we identify in our analysis, automating tasks, etc.).

     

    Building predictive models to improve our clients' operations involves using machine learning to solve particular organizational challenges or take advantage of opportunities. For instance, we have built models to predict in advance which customers will churn (unsubscribe), both to identify the causal factors leading to churn and to prioritize customers for outreach from customer retention teams. In other cases, we have built models to predict the performance of individual stores within a network, to identify and spread best practices from outperforming stores as well as to identify ideal locations for new store expansion.
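
    A toy version of the churn setup described above (all features and the label-generating rule are synthetic, purely to show the shape of the work):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 2000
    data = pd.DataFrame({
        "months_subscribed": rng.integers(1, 48, n),
        "support_tickets": rng.poisson(1.5, n),
        "monthly_usage_hours": rng.exponential(10.0, n),
    })
    # Synthetic label: churn risk rises with tickets, falls with tenure and usage.
    logit = (0.4 * data["support_tickets"]
             - 0.05 * data["months_subscribed"]
             - 0.02 * data["monthly_usage_hours"])
    data["churned"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, data.drop(columns="churned"), data["churned"], cv=5)
    print("mean CV accuracy:", scores.mean())
    ```

    In client work, the interesting part is less the model than the follow-through: fitted coefficients point to candidate causal factors, and predicted probabilities rank customers for retention outreach.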

    We are a small but nimble team looking to bring on a self-starter who excels at data science. 

     

    You can read more about us at: www.opal-data.com. 

     

    Job Summary:

    The Data Scientist will report directly to Opal's founder / technical lead. As a core member of a small team, the position provides an opportunity for growth across a wide range of technical skillsets, as well as experience working in a wide range of industries across our client projects.

     

    Because of the broad range of work we do, candidates are not expected to be experts in everything. The ideal candidate should have experience in many of the areas listed below under Major Responsibilities, and a strong interest in learning the tools and techniques in which they do not already have expertise. Raw intelligence, curiosity, and excitement about experimentation and learning are some of the most important determinants of success in this position. We believe strongly in developing our team members and promoting from within, and are looking for candidates who are interested in continuing to learn and grow within the organization.

     

    In addition to generous base compensation commensurate with experience, this position also earns profit sharing. Each month, total compensation is the greater of base compensation or profit sharing for that month. In good months, our staff typically earn 30-60% more than their base compensation.

     

    Major Responsibilities:

    • Use APIs and build scrapers to ingest data
    • Set up and work within cloud environments (Azure, AWS, GCP) to create and populate data warehouses and deploy ETL code to serverless functions
    • Create ETL scripts / data pipelines in Python / SQL to shape data for visualization and automation tasks
    • Visualize data and create dashboards in Tableau, Power BI, Looker, etc
    • Conduct one-off analyses to look for patterns and insights, make suggestions on future improvements to data collection, etc.
    • Create machine learning models, including variable creation from available data to turn hypotheses we want to test into variables
    • Work on application backends
    • Write clean, well-documented code
    • Create agents via prompt engineering, fine-tuning, and decision pooling with open-source LLMs

     

    Qualifications:

    • Bachelor's degree in a computational field and a minimum of 2 years of full-time work experience using Python as a Data Scientist, Data Engineer, or backend Software Developer; or, in lieu of formal education, a minimum of 4 years of technical work experience in those fields
    • Extremely proficient in Python
    • Proficient in SQL
    • Fluency in English
    • Please mention:
      • Experience with JavaScript, particularly for scrapers
      • Any other programming languages used
      • Experience with Tableau, Power BI, DOMO, Looker, or other dashboarding platforms
      • Experience building or expanding data warehouses
      • Experience with DevOps - setting up cloud environments and deploying containerized (Docker) or serverless functions
      • Machine learning / predictive modeling experience
      • Prompt engineering / fine-tuning of LLMs for business workflows
  • 26 views · 4 applications · 11d

    Game Mathematician

    Full Remote · EU · Product · 2 years of experience

    Ixilix is a technology-driven company that builds high-quality solutions and long-term partnerships. Our team is growing, and we are looking for a Game Mathematician.

    Responsibilities:

    • Writing rules for slots, describing the game mechanics, and documenting them in the Wiki;
    • Creating slot math that meets the RTP requirements imposed by the rules (main spins, free games, bonus games, features, etc.) - see the sketch after this list;
    • Analyzing and optimizing game volatility and payout curves;
    • Reviewing slot rules against the statistical data collected by bots during the server-side implementation of the game;
    • Participating in general meetings that affect slot mechanics;
    • Actively participating in game planning sessions and proposing innovative mechanics.
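
    For context (not part of the posting itself): a paytable is usually checked against a target RTP analytically from symbol frequencies, then verified by simulation. A toy Monte Carlo check with an invented reel strip and paytable:

    ```python
    import random

    REEL = ["A", "A", "A", "K", "K", "Q", "Q", "Q", "Q", "7"]  # one invented strip, reused for 3 reels
    PAYTABLE = {("7", "7", "7"): 100.0, ("A", "A", "A"): 10.0,
                ("K", "K", "K"): 5.0, ("Q", "Q", "Q"): 2.0}
    BET = 1.0

    def spin() -> float:
        symbols = tuple(random.choice(REEL) for _ in range(3))
        return PAYTABLE.get(symbols, 0.0) * BET

    n = 1_000_000
    total_won = sum(spin() for _ in range(n))
    print(f"Simulated RTP: {total_won / (n * BET):.4%}")
    ```

    The same idea scales to free games and bonus features: compute the exact expectation where feasible, and use bot-driven simulation (as in the review duty above) to confirm the server implementation matches it.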

    Required Skills:

    • 3+ years of experience in Mathematician positions in gambling is a must;
    • Degree in Mathematics, Statistics, or related fields (Bachelor's, Master's, or Ph.D.);
    • Understanding of applied mathematics: probability theory and statistics, numerical methods, linear algebra, optimization;
    • Ability to describe processes in detail and clearly articulate thoughts in writing;
    • Analytical approach when working on problems;
    • Attention to detail and idea generation;
    • English B2+;
    • Ukrainian C1+.

    Preferred Skills:

    • Knowledge of algorithms and data structures, interpolation methods, regression models, theory of stochastic processes, cryptography;

    What we offer:

    Rewards & Celebrations 

    • Quarterly Bonus System
    • Team Buildings Compensations
    • Memorable Days Financial Benefit

    Learning & Development

    • Annual fixed budget for personal learning 
    • English Language Courses Compensation

    Time Off & Leave

    • Paid Annual Leave (Vacation) - 24 working days
    • Sick leave - unlimited number of days, fully covered

    Wellbeing Support

    • Mental Health Support (Therapy Compensation)
    • Holiday Helper Service

    Workplace Tools & Assistance

    • Laptop provided by Company (after probation)

    Work conditions:

    • Remote work from EU
    • Flexible 8-hour workday, typically between 9:00 - 18:00 CET
    • Five working days, Monday to Friday
    • Public holidays observed according to Ukrainian legislation
    • Business trips to Bratislava every 3-6 months (expenses compensated by the company)


    At Ixilix, we value transparency, trust, and ownership. We believe that great results come from people who care - about their work, their team, and the impact they create. 

    Sounds like you? Let's connect! We're just one click away.

     

  • 51 views · 9 applications · 11d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B1 - Intermediate

    Trinetix is looking for a skilled Data Scientist.

    We are looking for a Data Scientist with strong expertise in Machine Learning and Generative AI. You will design, train, and fine-tune models using Python, TensorFlow, PyTorch, and Hugging Face, applying advanced statistical analysis and prompt engineering techniques. The role involves working with Azure ML, Databricks, and OpenAI APIs to deliver scalable, responsible AI solutions that drive data-informed decision-making. 

     

    Requirements 

    • Proven experience in building and fine-tuning Machine Learning (ML) and Generative AI models 
    • Strong proficiency in Python and familiarity with R for statistical modeling and data analysis 
    • Hands-on experience with TensorFlow, PyTorch, and Hugging Face frameworks 
    • Solid understanding of statistical analysis, feature engineering, and experimental design 
    • Practical experience using OpenAI API or similar LLM-based platforms 
    • Experience working with Azure Machine Learning or Databricks for model training and deployment 
    • Commitment to responsible AI practices, including model transparency, fairness, and bias mitigation 

     

    Nice-to-haves 

    • Experience with Generative AI prompt design and optimization techniques 
    • Familiarity with data visualization and storytelling tools (e.g., Power BI, Tableau) 
    • Understanding of MLOps and CI/CD workflows for ML models 
    • Experience collaborating in cross-functional AI teams or research-driven environments 
    • Background in cloud-based model orchestration or multi-modal AI systems 

     

    Core Responsibilities 

    • Design, train, and fine-tune ML and Generative AI models to solve complex business and analytical problems 
    • Conduct data preprocessing, feature engineering, and exploratory analysis to ensure model readiness 
    • Apply statistical and experimental methods to evaluate model performance and reliability 
    • Develop and optimize prompts for LLM-based solutions to improve accuracy and contextual relevance 
    • Collaborate with engineers and data teams to deploy models within Azure ML, Databricks, or other cloud environments 
    • Ensure adherence to responsible AI and ethical data use principles across all modeling stages 

     

    What we offer   

    • Continuous learning and career growth opportunities 
    • Professional training and English/Spanish language classes   
    • Comprehensive medical insurance 
    • Mental health support 
    • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more 
    • Flexible working hours 
    • Inclusive and supportive culture 
  • 23 views · 1 application · 11d

    Data Architect

    Full Remote · Poland, Romania, Spain, Portugal · 5 years of experience · B2 - Upper Intermediate

    Project tech stack: Snowflake, AWS, Python/dbt, DWH design & implementation of medallion architecture, strong integration experience, data modelling for analytical solutions, CI/CD

    We are looking for a hands-on Data Architect to build and scale a Snowflake-based data platform supporting Credit Asset Management and Wealth Solutions. The role involves ingesting data from SaaS investment platforms via data shares and custom ETL, establishing a medallion architecture, and modeling data into appropriate data marts to expose it for analytical consumption.

     

     

     

    Our client is a global real estate services company specializing in the management and development of commercial properties. Over the past several years, the organization has made significant strides in systematizing and standardizing its reporting infrastructure and capabilities. Due to the increased demand for reporting, the organization is seeking a dedicated team to expand capacity and free up existing resources.

     

    Location

    Remote: Europe (Poland, Romania, Spain, Portugal)

     

    Skills & Experience

    • Bachelor's degree in Computer Science, Engineering, or related field;
    • 7+ years of experience in data engineering roles;
    • Database management and SQL proficiency
    • Strong experience in Snowflake, with proven experience building scalable data pipelines into Snowflake, data shares, and custom connectors.
    • Proficiency with AWS platforms for scalable solutions; Azure is a plus;
    • Expertise in streaming pipeline design and complex data transformation: hands-on ETL/ELT experience (Workato strongly preferred) and proficiency in Python and/or dbt for transformations and testing
    • Proven implementation of medallion architecture and data quality frameworks.
    • Data modeling and design for analytical solutions
    • Experience with data governance, data lifecycle management, cataloging, lineage, and access control design.
    • Experience setting up IAM/role-based access, cost optimization, and CI/CD for data pipelines.
    • Ability to analyze system requirements and translate them into effective technical designs;
    • Experience with performance optimization for large-scale databases;
    • Problem-solving mindset to address technical challenges in dynamic environments;
    • Collaboration skills to work effectively with cross-functional teams;
    • Expertise in using and/or introducing AI-based coding practices to the projects.
       

    Nice to Have

    • Domain exposure to credit/investments and insurance data
    • Familiarity with schemas and data models from:BlackRock Aladdin, Clearwater, WSO, SSNC PLM
    • Experience with Databricks, Airflow, or similar orchestration tools
    • Prior vendor/staff augmentation experience in fast-moving environments

     

     

     

    Responsibilities

    • Design, build, and maintain scalable data pipelines into Snowflake using Workato and native Snowflake capabilities 
    • Design pipelines and complex data transformations; integrate heterogeneous vendor data via data shares and custom ETL
    • Define, implement and enforce medallion architecture (bronze/silver/gold) and data quality checks.
    • Collaborate with tech lead and business partners to define logical data marts for analytics and reporting.
    • Set standards and best practices for CDC, data lineage, metadata management, and master data management (MDM);
    • Define enterprise-level policies for data governance, security, privacy, and compliance, working closely with risk and security teams;
    • Contribute to non-functional setup: IAM/role-based access, data cataloging, lineage, access provisioning, monitoring, and cost optimization.
    • Operate effectively in a less-structured environment; proactively clarify priorities and drive outcomes.
    • Collaborate closely with the team members and other stakeholders;
    • Provide technical leadership across teams: guiding engineers, analysts, and scientists in adopting architecture standards;
    • Document data pipelines, processes, and best practices;
    • Evaluate and recommend new data technologies.
  • 62 views · 21 applications · 10d

    Data Scientist / Developer (PT to FT)

    Part-time · Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate

    * Part-time (80 hours per month) transitioning to Full-time (160 hours per month) within 4–6 months.

     

    Company Description

    The Client is an American nonprofit organization providing essential services and support for individuals with disabilities. Their programs focus on adult disability services, mental health, and employment support, empowering people to live, learn, work, and thrive in their communities. They are dedicated to advancing full equity, inclusion, and access for all through life-changing disability and community services.

     

    Project Description

    We are seeking an experienced Part-Time Data Scientist / Developer to support our enterprise data and artificial intelligence initiatives. This role will be responsible for developing and maintaining robust data pipelines within Microsoft Fabric, enabling advanced analytics through Power BI, and supporting the development and maintenance of large language model (LLM)-based solutions. The ideal candidate will demonstrate expertise in both data engineering and AI/ML, with the ability to deliver high-quality, scalable solutions that meet organizational needs.

     

    Requirements

    • BA or MS in Statistics, Mathematics, Computer Science, or other quantitative field.
    • 3+ years of industry experience delivering and scaling ML products both in scope and on time.
    • Strong skills in Python and querying languages (e.g. SQL).
    • Experience in production-level coding with broad knowledge of healthy code practices.
    • Tech Stack: Microsoft Fabric, Power BI, DAX, Python, MS SQL Server.
    • Business acumen with proven ability to understand, analyze and problem solve business goals in detail.
    • Fluent English (both written and spoken).

     

    Duties and responsibilities

    Data Engineering and Integration

    • Design, develop, and maintain data pipelines in Microsoft Fabric.
    • Ingest, transform, and model data from multiple sources into the data lakehouse/warehouse.
    • Ensure data integrity, scalability, and performance optimization.

    Analytics & Reporting

    • Develop and optimize complex SQL queries to support analytics and reporting.
    • Build and maintain datasets and semantic models for Power BI.
    • Design, implement, and modify advanced Power BI visualizations and dashboards.
    • Partner with stakeholders to deliver accurate and actionable insights.

    AI & LLM Development

    • Develop, fine-tune, and maintain large language models for enterprise use cases.
    • Integrate LLM solutions with data assets in Microsoft Fabric and reporting environments.
    • Monitor and retrain models as required to maintain performance.

       

    Working conditions

    Mon-Fri 9-5 (EST), with at least 4 hours of overlap with the team.

  • 33 views · 3 applications · 10d

    Senior Data Scientist

    Full Remote · Worldwide · 7 years of experience · C1 - Advanced

    Location: Remote (Europe preferred)
    Contract Type: B2B
    Experience: 7+ years
    English: C1 (Advanced)
    Compensation: Gross (TBD)
    Holidays: 10 public holidays (vacation & sick days unpaid)

    Role Overview

    We're looking for a Senior Data Scientist to design, develop, and deploy advanced ML models solving core business challenges. You'll work across research, data strategy, and productionization, from experimentation to scalable delivery.

    Key Responsibilities

    • Build and optimize ML/statistical models (NLP, recommender systems, time-series, deep learning).
    • Analyze structured and unstructured data; define data pipelines with engineers.
    • Ensure reproducibility, scalability, and model explainability.
    • Deploy and monitor models in production; handle retraining and performance tracking.
    • Contribute to MLOps standards (CI/CD, automated retraining).

    Requirements

    • 7+ years of experience in Data Science or Applied ML.
    • Strong Python, SQL, and ML frameworks (scikit-learn, TensorFlow, PyTorch).
    • Experience with model deployment, versioning, and monitoring in production.
    • Solid understanding of statistics, experimentation, and data processing.
  • 26 views · 0 applications · 10d

    Data Scientist GenAI Engineer

    Office Work · Poland · Product · 3 years of experience · B1 - Intermediate

    A product company is looking for a GenAI Engineer (on-site) in Warsaw.

     

    ✅ It's a Silicon Valley company, a successful market leader.

    The B2C platform is based on best-quality global video technology.

     

    ✅ They offer: a stock options grant, medical insurance (for you + 75% coverage for relatives), free lunches and parking, and a Multisport card.

     

    ✅ Requirements: experience with Python, LLMs, and Generative AI.
    Experience with RAG is a plus.

  • 36 views · 0 applications · 10d

    Data Scientist AI Lead Engineer

    Office Work · Poland · Product · 5 years of experience · B1 - Intermediate

    A product company is looking for an AI Lead Engineer (on-site) in Warsaw.

     

    ✅ It's a Silicon Valley company, a successful market leader.

    The B2C platform is based on best-quality global video technology.

     

    ✅ They offer: a stock options grant, medical insurance (for you + 75% coverage for your relatives), free lunches and parking, and a Multisport card.

     

    ✅ Requirements: experience with Python, LLMs, and RAG. Strong leadership experience.
