Jobs

  • 42 views · 5 applications · 13d

    Machine Learning Team Lead

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · B2 - Upper Intermediate

    About the Role

    We’re seeking a Team Lead – Generative AI Engineer to lead a team of engineers in designing, developing, and deploying production-grade AI systems. This role is ideal for someone who is both a strong technical contributor and an experienced people leader, capable of driving architecture decisions, mentoring team members, and delivering innovative AI solutions at scale.

    You will be responsible for guiding the end-to-end lifecycle of Generative AI applications, ensuring technical excellence, scalability, and alignment with business needs. This includes working across backend systems, RAG pipelines, prompt engineering, and cloud-native deployments.

    Key Responsibilities

    • Leadership & Team Management
      • Lead and mentor a team of AI/ML engineers, fostering growth and technical excellence.
      • Define and enforce coding standards, architectural patterns, and best practices.
      • Collaborate with stakeholders to translate business needs into AI-driven solutions.
      • Manage project timelines, technical risks, and team deliverables.
    • Generative AI & Prompt Engineering
      • Integrate and optimize applications using LLM provider APIs (OpenAI, Anthropic, etc.).
      • Design prompts with advanced techniques (few-shot, chain-of-thought, chaining, context crafting).
      • Implement safeguards (guardrails, structured output validation, injection protection); a validation sketch follows this list.
    • Architecture & Backend Development
      • Build scalable backend services in .NET (C#) and Python, working with SQL and APIs.
      • Develop and manage RAG pipelines, conversational AI systems, and summarization tools.
      • Drive observability: tracing, logging, monitoring for LLM-powered systems.
    • Evaluation & Optimization
      • Benchmark and evaluate LLMs using custom datasets and automated testing.
      • Oversee system reliability, performance tuning, caching, and optimization.
      • Ensure solutions meet enterprise-grade standards for security and scalability.
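
    A minimal sketch of the structured-output safeguard named above, using pydantic to reject malformed or schema-violating LLM responses; the TicketSummary schema and field names are hypothetical:

    ```python
    import json

    from pydantic import BaseModel, ValidationError, field_validator

    class TicketSummary(BaseModel):
        # Hypothetical schema for an LLM that summarizes support tickets.
        title: str
        severity: str
        action_items: list[str]

        @field_validator("severity")
        @classmethod
        def severity_allowed(cls, v: str) -> str:
            if v not in {"low", "medium", "high"}:
                raise ValueError("severity out of range")
            return v

    def parse_llm_output(raw: str) -> TicketSummary | None:
        """Validate model output instead of trusting it; None lets the caller retry."""
        try:
            return TicketSummary(**json.loads(raw))
        except (json.JSONDecodeError, ValidationError):
            return None

    print(parse_llm_output('{"title": "DB outage", "severity": "high", "action_items": ["page on-call"]}'))
    ```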

         

    Required Skills & Qualifications

    • 5+ years of professional experience in Machine Learning / AI engineering.
    • 1–2+ years of hands-on experience in Generative AI application development.
    • Proven leadership or team lead experience (mentoring, managing, or leading AI/ML engineers).
    • Strong backend engineering skills in .NET (C#), SQL, and Python.
    • Solid knowledge of LLM providers (OpenAI, Anthropic, etc.) and prompt engineering techniques.
    • Experience with RAG pipelines, AI workflows, and productionizing LLM systems.
    • Hands-on with Docker, Kubernetes, REST APIs, and Azure (AKS, ACR, containerized deployments).
    • Excellent communication skills (English, written and spoken).

    Preferred / Nice-to-Have

    • Azure AI ecosystem: OpenAI, PromptFlow, Azure ML, AI Services.
    • Familiarity with CosmosDB, KQL, Azure Log Analytics, App Insights.
    • Experience with multiple LLM providers (Anthropic, Mistral, Cohere, etc.).
    • Prompt caching, compression, and output validation strategies.
    • Redis caching for performance optimization.
    • Frontend experience with React and Next.js.
  • 45 views · 2 applications · 26d

    Data Scientist

    Full Remote · Ukraine · Product · 3 years of experience · B1 - Intermediate

    We’re looking for a proactive Data Scientist to join our growing team and help us redefine the future of eSports betting! You will develop and implement advanced models for esports betting, focusing on games such as Dota 2, League of Legends, etc. Oversee the entire model development lifecycle, including design, testing, and performance optimization.

     

    What you’ll be doing:

     

    • Develop and implement betting models using Python and Golang.
    • Design and optimize mathematical models, including statistical models, classical machine learning techniques, and neural networks; a toy win-probability example follows this list.
    • Analyze the performance and impact of models, ensuring operational efficiency.
    • Prototype and assess models to evaluate their effectiveness and user acceptance.
    • Integrate models with existing systems and optimize performance.
    • Collaborate with backend and frontend teams for seamless implementation and decision-making.
    • Conduct in-depth statistical analysis and apply machine learning methods to enhance forecasting accuracy.
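
    For illustration only (this is not the company's model): a toy Elo-style win-probability estimate turned into decimal odds with a bookmaker margin; the ratings and margin below are made up.

    ```python
    # Toy sketch: logistic (Elo-style) win probability and offered decimal odds.

    def win_probability(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
        """P(team A beats team B) under an Elo-style logistic model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

    def offered_decimal_odds(p: float, margin: float = 0.05) -> float:
        """Fair odds are 1/p; a bookmaker margin shortens them."""
        return 1.0 / (p * (1.0 + margin))

    p = win_probability(1650, 1540)
    print(f"P(win) = {p:.3f}, offered odds = {offered_decimal_odds(p):.2f}")
    ```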

     

    What we’re looking for:

     

    • 3+ years of commercial experience with Python.
    • Strong knowledge of statistics and practical experience with ML techniques.
    • Proven expertise in mathematical modeling, including statistical methods, classical ML approaches, and neural networks.
    • Strong data processing skills (validation, parsing, visualization).
    • Understanding of the business logic behind decisions and strong analytical thinking.
    • Excellent communication skills and the ability to work in a team.
    • Ability to assess the business value of tasks.
    • English – Intermediate level.
    • Nice to have: higher education in computer science, mathematics, statistics, or a related discipline.
    • Nice to have: experience in the betting industry or related fields.

       

    What We Offer:

     

    Your wellbeing and a comfortable work environment are our top priorities:

    • Flexible schedule & work format (office/hybrid): work where and when you feel most productive.
    • 20 paid + 15 unpaid vacation days: take time off whenever you need to reset.
    • An extra day off on your birthday: celebrate it your way!
    • Medical insurance: take care of your health with extended coverage (available in Ukraine only).
    • 22 sick days: 8 days without a doctor’s note (for sick leave or mental health), 10 with a note, plus 4 Personal Days per year for personal matters, when needed.
    • Gifts and bonuses for life’s big moments: weddings, new babies, kindergarten support (available in Ukraine only).

     

     

    Who We Are:

     

    We’re DATA.BET, a product-driven IT company transforming the world of sports, esports, and virtual betting with our innovative sportsbook solution.

    Since 2017, we’ve been building tech that directly shapes the industry.
    Our team of experts blends hands-on experience with AI technologies to deliver cutting-edge solutions that set new standards in betting.

     

  • 38 views · 12 applications · 24d

    Data Scientist

    Full Remote · Ukraine · 3 years of experience · B1 - Intermediate

    We are looking for you!

    We are seeking a Senior Data Scientist to drive the next generation of data-driven solutions. This role calls for deep expertise in data architecture, advanced analytics, and pipeline design. If you are a seasoned professional ready to lead initiatives, innovate with cutting-edge techniques, and deliver impactful data solutions, we’d be excited to have you join our journey.

    Contract type: Gig contract
     

    Skills and experience you can bring to this role

    Qualifications & experience:

    • At least 3 years of commercial experience with Python, the data stack (NumPy, Pandas, scikit-learn) and a web stack (FastAPI / Flask / Django);
    • Familiarity with one or more machine learning frameworks (XGBoost, TensorFlow, PyTorch);
    • Strong mathematical and statistical skills;
    • Good understanding of SQL/RDBMS and familiarity with data warehouses (BigQuery, Snowflake, Redshift, etc.);
    • Experience building ETL data pipelines (Airflow, Prefect, Dagster, etc.);
    • Knowledge of Amazon Web Services (AWS) ecosystem (S3, Glue, Athena);
    • Experience with at least one MMM or marketing analytics framework (e.g., Robyn, PyMC, Meridian, or similar); a sketch of the core MMM transforms follows this list;
    • Strong communication skills to explain technical insights to non-technical stakeholders.
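
    As referenced in the list above, a rough sketch of the two transforms at the heart of most MMM frameworks (geometric adstock and Hill saturation); parameter values are illustrative, not fitted:

    ```python
    import numpy as np

    def geometric_adstock(spend: np.ndarray, decay: float = 0.6) -> np.ndarray:
        """Carry past spend forward: a_t = x_t + decay * a_{t-1}."""
        out = np.zeros_like(spend, dtype=float)
        carry = 0.0
        for t, x in enumerate(spend):
            carry = x + decay * carry
            out[t] = carry
        return out

    def hill_saturation(x: np.ndarray, half_sat: float = 100.0, shape: float = 1.5) -> np.ndarray:
        """Diminishing returns: the response flattens as (adstocked) spend grows."""
        return x**shape / (x**shape + half_sat**shape)

    weekly_spend = np.array([120, 80, 0, 0, 150, 90], dtype=float)
    print(hill_saturation(geometric_adstock(weekly_spend)).round(3))
    ```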

    Nice to have:

    • Knowledge of digital advertising platforms (Google Ads, DV360, Meta, Amazon, etc.) and campaign performance metrics;
    • Exposure to clean rooms (Google Ads Data Hub, Amazon Marketing Cloud);
    • Familiarity with industry and syndicated data sources (Nielsen, Kantar, etc.);
    • Experience with optimisation techniques (budget allocation, constrained optimisation);
    • Familiarity with gen AI (ChatGPT APIs/agents, prompt engineering, RAG, vector databases). 

    Educational requirements:

    • Bachelor’s degree in Computer Science, Information Systems, or a related discipline is preferred. A Master's degree or higher is a distinct advantage.

     

    What impact you’ll make 

    • Build and validate marketing measurement models (e.g., MMM, attribution) to understand the impact of media spend on business outcomes;
    • Develop and maintain data pipelines and transformations to prepare campaign, performance, and contextual data for modelling;
    • Run exploratory analyses to uncover trends, correlations, and drivers of campaign performance;
    • Support the design of budget optimisation and scenario planning tools;
    • Collaborate with engineers, analysts, and planners to operationalise models into workflows and dashboards;
    • Translate model outputs into clear, actionable recommendations for client and internal teams.

     

    What you’ll get 

    Regardless of your position or role, we have a wide array of benefits in place, including flexible working (hybrid/remote models) and generous time off policies (unlimited vacations, sick and parental leaves) to make it easier for all people to thrive and succeed at Star. On top of that, we offer an extensive reward and compensation package, intellectually and creatively stimulating space, health insurance and unique travel opportunities.

    Your holistic well-being is central at Star. You'll join a warm and vibrant multinational environment filled with impactful projects, career development opportunities, mentorship and training programs, fun sports activities, workshops, networking and outdoor meet-ups.
     

  • 83 views · 6 applications · 6d

    ML Researcher / Mathematician

    Full Remote · Ukraine · Product · 2 years of experience · B1 - Intermediate

    Remote – Full-time – Flexible hours – Schedule shifted to the EST (New York) timezone

     

    About TenViz

    TenViz Predictive Analytics builds AI-powered solutions that help Fortune 100 investors anticipate market shifts before they become consensus. We combine financial expertise, mathematical rigor, and AI research to deliver predictive analytics for institutional investors.

     

    Your Expertise

    • Strong interest in quantitative and algorithmic approaches
    • BSc/MSc in Math, Physics or related field with solid background in statistics, linear algebra and probability theory
    • 2+ years in data research, quantitative modeling, ML prototyping, and backtesting
    • Skilled in Python, SQL, Git
    • Experience with experiment tracking, pipeline orchestration, and workflow automation

     

    Nice to have:

    • Bayesian inference, reinforcement learning, or ensemble methods
    • Financial econometrics, time-series forecasting, or regime modeling

     

    What You’ll Do

    • Design predictive models using probabilistic, ML, and hybrid approaches
    • Analyze large-scale financial datasets to detect hidden patterns and early regime shifts
    • Prototype and evaluate algorithms, from deep learning to Bayesian models
    • Write production-grade research code in Python/SQL that integrates into pipelines
    • Translate results into insights and themes investors can act on, not just outputs

     

    Example projects:

    • Forecasting inflation via commodity price signals
    • Constructing probabilistic forecasts of currency moves around macroeconomic announcements
    • Designing a Bayesian model to estimate recession probabilities from yield curve dynamics (a toy sketch follows this list)
    • Designing a regime-aware ML pipeline that switches models depending on market conditions
    • Creating hybrid ensembles that outperform traditional benchmarks
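
    A toy version of the yield-curve project above: a probit link from the 10y-2y spread to a 12-month recession probability, with parameter uncertainty propagated by Monte Carlo rather than a full MCMC fit; the posterior moments below are assumed, not estimated.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Assumed Gaussian approximation of the posterior over (intercept, slope).
    alpha = rng.normal(-1.2, 0.15, size=10_000)
    beta = rng.normal(-0.9, 0.10, size=10_000)

    spread = -0.4  # current 10y-2y spread in percentage points (inverted curve)
    p_draws = norm.cdf(alpha + beta * spread)  # probit link, one value per posterior draw

    print(f"P(recession in 12m): mean = {p_draws.mean():.2f}, "
          f"90% interval = ({np.quantile(p_draws, 0.05):.2f}, {np.quantile(p_draws, 0.95):.2f})")
    ```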

     

    What’s in It for You

    • Remote-first, flexible hours
    • Paid vacation & holidays (after 8 months)
    • Continuous learning: advanced ML/DL workshops, probabilistic methods, optimization
    • Supportive, collaborative team: no micromanagement, focus on trust and creativity
    • Work on real financial problems where your models influence billion-dollar decisions
  • 25 views · 3 applications · 20d

    Senior Data Scientist

    Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    We are looking for a senior data scientist to lead the development and implementation of next-generation AI solutions for life-cycle marketing.

    As a senior member of our team, you will shape methodology, manage product direction in uncertainty and represent the Company in conversations with high-stakes customers. You'll combine deep experience in causal inference, reinforcement learning, and experiments with strong communication skills to impact the entire company and our customers.

    About Us

    The company was founded in 2020 on the premise that market fundamentals are moving companies around the world from growth-at-all-costs strategies to efficient, responsible growth practices focused on improving unit economics. With a bold mission to use AI to rethink the whole growth process, optimize this transition, and make it sustainable, the company removes the guesswork from creating customer value: it gives leaders actionable strategies and tactics to acquire, nurture, and retain the high-value customers their businesses really need, with the actions and timing that are most effective in achieving their goals.

    Leading companies such as Miro, Rappi, and Moneylion rely on us to put these predictions into action. They use our product to attract high-value customers on platforms such as Google and Meta, optimize incentives with Salesforce and Braze, and precisely time engagement, leading to a 20%-40% increase in ROI.

    The company is well supported by leading venture capitalists such as Target Global and SquarePeg. The company has tripled annually over the past two years and now boasts a team of 70 people with offices in California, New York and Tel Aviv.

    Requirements

    • M.Sc. or Ph.D. in Computer Science, Statistics, Mathematics, or related field.
    • 6+ years of hands-on data science experience, including direct work with lifecycle marketing, personalization, or customer analytics.
    • Advanced expertise in statistics, causal inference, and experimental design.
    • Proven track record of implementing reinforcement learning / multi-armed bandit models in production (a minimal bandit sketch follows this list).
    • Strong software engineering skills in Python, SQL, and ML frameworks
    • Exceptional communication skills β€” able to influence internal teams and present to executive-level customers.
    • Entrepreneurial mindset: thrives in uncertainty, challenges assumptions, and pushes for impactful solutions.
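
    The bandit sketch referenced above: a minimal Beta-Bernoulli Thompson sampler choosing among three message variants; the conversion rates are simulated, not customer data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_rates = [0.04, 0.055, 0.05]   # unknown conversion rate per variant
    successes = np.ones(3)             # Beta(1, 1) priors
    failures = np.ones(3)

    for _ in range(5_000):
        samples = rng.beta(successes, failures)  # one posterior draw per arm
        arm = int(np.argmax(samples))            # play the arm that looks best
        reward = rng.random() < true_rates[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward

    print("plays per arm:", (successes + failures - 2).astype(int))
    print("posterior means:", (successes / (successes + failures)).round(4))
    ```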
  • 55 views · 4 applications · 20d

    AI engineer with Data Engineering and Machine Learning Expertise

    Full Remote · Ukraine · Product · 2 years of experience · A2 - Elementary
    ΠΠΌΠ΅Ρ€ΠΈΠΊΠ°Π½ΡΡŒΠΊΠ° Π†Π’-компанія ΡˆΡƒΠΊΠ°Ρ” AI Engineer Π·Ρ– знаннями машинного навчання Ρ‚Π° Ρ–Π½ΠΆΠ΅Π½Π΅Ρ€Ρ–Ρ— Π΄Π°Π½ΠΈΡ…. ΠžΠ±ΠΎΠ²β€™ΡΠ·ΠΊΠΈ: Ρ€ΠΎΠ·Ρ€ΠΎΠ±ΠΊΠ° ML-ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ, ΠΏΠΎΠ±ΡƒΠ΄ΠΎΠ²Π° Π΄Π°Ρ‚Π°-ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Ρ–Π², очищСння ΠΉ інтСграція Π΄Π°Π½ΠΈΡ…. Π’ΠΈΠΌΠΎΠ³ΠΈ: Python (Pandas, NumPy, TensorFlow Π°Π±ΠΎ PyTorch), SQL, досвід...

    ΠΠΌΠ΅Ρ€ΠΈΠΊΠ°Π½ΡΡŒΠΊΠ° Π†Π’-компанія ΡˆΡƒΠΊΠ°Ρ” AI Engineer Π·Ρ– знаннями машинного навчання Ρ‚Π° Ρ–Π½ΠΆΠ΅Π½Π΅Ρ€Ρ–Ρ— Π΄Π°Π½ΠΈΡ….

     

    ΠžΠ±ΠΎΠ²β€™ΡΠ·ΠΊΠΈ: Ρ€ΠΎΠ·Ρ€ΠΎΠ±ΠΊΠ° ML-ΠΌΠΎΠ΄Π΅Π»Π΅ΠΉ, ΠΏΠΎΠ±ΡƒΠ΄ΠΎΠ²Π° Π΄Π°Ρ‚Π°-ΠΏΠ°ΠΉΠΏΠ»Π°ΠΉΠ½Ρ–Π², очищСння ΠΉ інтСграція Π΄Π°Π½ΠΈΡ….

     

    Requirements:

    • Python (Pandas, NumPy, TensorFlow or PyTorch), SQL, experience with ETL processes. Experience with AWS / GCP, Airflow, Spark, or Docker is a plus.
    • We consider only candidates with a completed degree in a relevant field (Statistics, Mathematics, Applied Mathematics, Software Engineering, Information Technology, Computer Science, Computer Engineering, Systems Analysis, Cybersecurity, Information Systems, Automation and Computer-Integrated Technologies, Electronics, Telecommunications, Cybernetics);
    • Relevant courses, schools, and self-study are welcome, but we do not consider candidates without a relevant degree;
    • Ability to dedicate 5 days a week to the job;
    • English at A2 or above, for correspondence with American colleagues; fluency is NOT required;

     

    We offer:

    • Remote work from any city in Ukraine;
    • 5 days a week, on average 7-8 hours per day; task assignment at 9:00, wrap-up at 19:00; during the day the schedule is flexible;
    • The rate depends on experience; overtime is paid.

     

    ΠΠ°Π΄Ρ–ΡˆΠ»Ρ–Ρ‚ΡŒ Ρ€Π΅Π·ΡŽΠΌΠ΅ Ρ€Π°Π·ΠΎΠΌ Ρ–Π· ΠΊΠΎΡ€ΠΎΡ‚ΠΊΠΈΠΌ супровідним листом: Ρ‡ΠΎΠΌΡƒ Π²Π²Π°ΠΆΠ°Ρ”Ρ‚Π΅, Ρ‰ΠΎ ΡΠ°ΠΌΠ΅ Π²ΠΈ ΠΏΡ–Π΄Ρ…ΠΎΠ΄ΠΈΡ‚Π΅ для Ρ†Ρ–Ρ”Ρ— Ρ€ΠΎΠ»Ρ–. ΠΠ½Π³Π»Ρ–ΠΉΡΡŒΠΊΠΎΡŽ. Π”ΠΎΠ΄Π°ΠΉΡ‚Π΅, Π±ΡƒΠ΄ΡŒ ласка, посилання Π½Π° ΠΏΠΎΡ€Ρ‚Ρ„ΠΎΠ»Ρ–ΠΎ Π°Π±ΠΎ GitHub Ρ–Π· ΠΏΡ€ΠΈΠΊΠ»Π°Π΄Π°ΠΌΠΈ ΠΏΡ€ΠΎΡ”ΠΊΡ‚Ρ–Π² Ρƒ ΡΡ„Π΅Ρ€Ρ– machine learning / data engineering.

     

    Job Title: AI Engineer with Data Engineering & Machine Learning Expertise

     

    About Us:

    We’re building the next generation of intelligent systems that power key decision-making processes for businesses across various industries. Our work combines advanced AI, data engineering, and cutting-edge business intelligence tools to create solutions that unlock data-driven insights.

    We are looking for an AI Engineer who is not only proficient in machine learning techniques but also has a solid understanding of data engineering principles. The ideal candidate will have hands-on experience in data pipeline creation, ETL processes, and data cleansing & harmonization. You will play a key role in developing intelligent models that enable our team to make strategic, data-backed decisions.

     

    What You’ll Do:

    • Design and implement AI models and algorithms using machine learning techniques (e.g., decision trees, neural networks, and nonparametric models).
    • Collaborate with the data engineering team to design, build, and maintain data pipelines for real-time and batch processing.
    • Develop and optimize ETL processes for data integration and transformation from multiple sources.
    • Cleanse and harmonize data from disparate sources to ensure accuracy and consistency.
    • Build automated workflows for data cleansing, feature engineering, and model deployment (a short pipeline sketch follows this list).
    • Partner with business intelligence teams to create data-driven solutions that provide actionable insights for decision-makers.
    • Continuously evaluate and improve model performance by iterating based on feedback and new data.
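
    A hedged sketch of one such automated workflow: imputation, encoding, and a classifier wrapped in a single scikit-learn Pipeline; the column names and data are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    numeric = ["order_value", "days_since_last_order"]
    categorical = ["region"]

    # Cleansing + feature engineering as one deployable object.
    preprocess = ColumnTransformer([
        ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), numeric),
        ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                          ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical),
    ])
    model = Pipeline([("prep", preprocess),
                      ("clf", RandomForestClassifier(n_estimators=200, random_state=0))])

    df = pd.DataFrame({"order_value": [120.0, None, 40.0, 300.0],
                       "days_since_last_order": [3, 40, None, 7],
                       "region": ["eu", "us", "eu", np.nan],
                       "churned": [0, 1, 0, 0]})
    model.fit(df.drop(columns="churned"), df["churned"])
    print(model.predict(df.drop(columns="churned")))
    ```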

     

    Key Skills & Requirements:

    • Machine Learning: Strong experience with supervised and unsupervised learning algorithms (decision trees, neural networks, random forests, k-nearest neighbors, etc.).
    • Data Engineering: Solid understanding of ETL pipelines, data integration, transformation, and cleansing using tools like Apache Airflow, dbt, Talend, or similar.
    • Programming Languages: Proficiency in Python (with libraries like Pandas, NumPy, scikit-learn, TensorFlow, or PyTorch).
    • Data Structures & Algorithms: Familiarity with data structures, model evaluation techniques, and optimization strategies.
    • Data Analysis: Ability to perform exploratory data analysis (EDA) and feature selection using statistical methods.
    • Business Intelligence: Familiarity with BI tools such as Power BI, Tableau, or Looker, and the ability to transform analytical results into meaningful visualizations.
    • Cloud Platforms: Experience with cloud technologies such as AWS, GCP, or Azure for model deployment and data pipeline management.
    • SQL: Strong knowledge of relational databases and SQL for querying and manipulating large datasets.
    • Problem Solving & Communication: Ability to analyze complex data issues, communicate technical concepts to non-technical stakeholders, and work well in a collaborative environment.

     

    Nice to Have:

    • Experience with nonparametric models (e.g., kernel methods, k-NN) and model selection.
    • Familiarity with big data frameworks like Hadoop or Spark.
    • Exposure to DevOps practices for model deployment (Docker, Kubernetes, CI/CD).

     

    Why Join Us?

    • Innovative Culture: Be part of a forward-thinking company focused on cutting-edge AI and data-driven solutions.
    • Growth Opportunities: Continuous learning with access to new technologies, training, and career development resources.
    • Collaborative Environment: Work alongside talented engineers, data scientists, and business analysts in a team-oriented culture.
    • Flexible Work Arrangements: Fully remote with a distributed team to support work-life balance.

     

    How to Apply:

    If you’re passionate about AI and data engineering, we want to hear from you! Submit your resume, along with a brief cover letter explaining why you’re a great fit for the role. Please include any relevant portfolio or GitHub links showcasing your machine learning or data engineering projects.

  • 20 views · 3 applications · 20d

    Data Architect - GCP

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B1 - Intermediate

    About JUTEQ

    JUTEQ is an AI-native and cloud-native consulting firm helping enterprises in financial services, telecom, and automotive retail build intelligent, production-grade platforms. We combine the power of GenAI, scalable cloud architecture, and automation to deliver next-generation business tools. Our platform supports multi-tenant AI agent workflows, real-time lead processing, and deep analytics pipelines.

    We are seeking an experienced Data Architect with deep Google Cloud Platform (GCP) experience to lead our data lake, ingestion, observability, and compliance infrastructure. This role is critical to building a production-grade, metadata-aware data stack aligned with SOC2 requirements.

    What You'll Do

    Data Architecture & Lakehouse Design

    • Architect and implement a scalable GCP-based data lake across landing, transformation, and presentation zones.
    • Use native GCP services such as GCS, Pub/Sub, Apache Beam, Cloud Composer, and BigQuery for high-volume ingestion and transformation.
    • Design and implement infrastructure landing zones using Terraform with strong IAM boundaries, secrets management, and PII protection.
    • Build ingestion pipelines using Apache NiFi (or equivalent) to support batch, streaming, and semi-structured data from external and internal systems.

    Data Ingestion & Integration

    • Develop robust ingestion patterns for CRM, CDP, and third-party sources via APIs, file drops, or scraping.
    • Build real-time and batch ingestion flows with schema-aware validation, parsing, and metadata handling (a minimal streaming sketch follows this list).
    • Implement transformation logic and ensure the staging → curated flow adheres to quality, performance, and lineage standards.
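
    A minimal streaming-flow skeleton in the Apache Beam Python SDK, as referenced above; the subscription, table, and schema are placeholders, and a production flow would add dead-lettering and lineage metadata.

    ```python
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_event(raw: bytes) -> dict:
        """Schema-aware parse step; bad records would be routed to a dead-letter sink."""
        event = json.loads(raw.decode("utf-8"))
        return {"lead_id": event["lead_id"],
                "source": event.get("source", "unknown"),
                "created_at": event["created_at"]}

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromPubSub(subscription="projects/demo/subscriptions/leads")
         | "Parse" >> beam.Map(parse_event)
         | "Write" >> beam.io.WriteToBigQuery(
               "demo:landing.leads",
               schema="lead_id:STRING,source:STRING,created_at:TIMESTAMP"))
    ```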

    Metadata & Lineage Management

    • Define and enforce metadata templates across all sources.
    • Establish data lineage tracking from ingestion to analytics using standardized tools or custom solutions.
    • Drive schema mapping, MDM support, and data quality governance across ingestion flows.

    SRE & Observability for Data Pipelines

    • Implement alerting, logging, and monitoring for all ingestion and transformation services using Cloud Logging, Cloud Monitoring, OpenTelemetry, and custom dashboards.
    • Ensure platform SLAs/SLOs are tracked and incidents are routed to lightweight response workflows.
    • Support observability for cloud functions, GKE workloads, and Cloud Run-based apps interacting with the data platform.

    Security & Compliance

    • Enforce SOC2 and PII compliance controls: IAM policies, short-lived credentials, encrypted storage, and access logging.
    • Collaborate with security teams (internal/external) to maintain audit readiness.
    • Design scalable permissioning and role-based access for production datasets.

    What We're Looking For

    Core Experience

    • 5+ years in data engineering or architecture roles with strong GCP experience.
    • Deep familiarity with GCP services: BigQuery, Pub/Sub, Cloud Storage, Cloud Functions, Dataflow/Apache Beam, Composer, IAM, and Logging.
    • Expertise in Apache NiFi or similar ingestion/orchestration platforms.
    • Experience with building multi-environment infrastructure using Terraform, including custom module development.
    • Strong SQL and schema design skills for analytics and operational reporting.

    Preferred Skills

    • Experience in metadata management, MDM, and schema evolution workflows.
    • Familiarity with SOC2, GDPR, or other data compliance frameworks.
    • Working knowledge of incident response systems, alert routing, and lightweight ITSM integration (JIRA, PagerDuty, etc.).
    • Experience with data lineage frameworks (open-source or commercial) is a plus.
    • Exposure to graph databases or knowledge graphs is a plus but not required.

    Why Join Us

    • Help design a full-stack, production-grade data infrastructure from the ground up.
    • Work in a fast-paced AI-driven environment with real product impact.
    • Contribute to a platform used by automotive dealerships across North America.
    • Be part of a high-trust, hands-on team that values autonomy and impact.
  • 46 views · 21 applications · 5d

    Data Scientist

    Countries of Europe or Ukraine · 3 years of experience · B2 - Upper Intermediate

    Project
    Global freight management solutions and services, specializing in Freight Audit & Payment, Order Management, Supplier Management, Visibility, TMS and Freight Spend Analytics.

    Overview
    We are looking for a Data Scientist with a strong background in statistics and probability theory to help us build intelligent analytical solutions. The current focus is on outlier detection in freight management data, with further development toward anomaly detection and forecasting models for logistics and freight spend. The role requires both deep analytical thinking and practical hands-on work with data, from SQL extraction to model deployment.

    Key Responsibilities

    • Apply statistical methods and machine learning techniques for outlier and anomaly detection (a first-pass sketch follows this list).
    • Design and develop forecasting models to predict freight costs, shipment volumes, and logistics trends.
    • Extract, preprocess, and transform large datasets directly from SQL databases.
    • Categorize exceptions into business-defined groups (e.g., High Value Exceptions, Accessorial Charge Exceptions, Unexpected Origin/Destination).
    • Collaborate with business analysts to align analytical approaches with domain requirements.
    • Use dashboards (e.g., nSight) for validation, visualization, and reporting of results.
    • Ensure models are interpretable, scalable, and deliver actionable insights.
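
    A first-pass sketch of the outlier screen named above: a robust (median/MAD) z-score alongside an IsolationForest on a toy series of freight charges; the threshold and contamination rate are illustrative.

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest

    charges = np.array([410, 395, 402, 388, 1250, 405, 399, 2990, 401], dtype=float)

    # Robust z-score: median/MAD, so it is not skewed by the outliers it hunts.
    mad = np.median(np.abs(charges - np.median(charges)))
    robust_z = 0.6745 * (charges - np.median(charges)) / mad
    flag_z = np.abs(robust_z) > 3.5

    # Model-based screen on the same one-dimensional series.
    iso = IsolationForest(contamination=0.2, random_state=0)
    flag_iso = iso.fit_predict(charges.reshape(-1, 1)) == -1

    print("robust z flags:", np.where(flag_z)[0])
    print("isolation forest flags:", np.where(flag_iso)[0])
    ```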

    Requirements

    • Strong foundation in statistics and probability theory.
    • Proficiency in Python with libraries such as pandas, numpy, matplotlib, scikit-learn.
    • Proven experience with outlier/anomaly detection techniques.
    • Hands-on experience in forecasting models (time-series, regression, or advanced ML methods).
    • Strong SQL skills for working with large datasets.
    • Ability to communicate findings effectively to both technical and non-technical stakeholders.

    Nice to Have

    • Experience with ML frameworks (TensorFlow, PyTorch).
    • Familiarity with MLOps practices and model deployment.
    • Exposure to logistics, supply chain, or financial data.
    • Knowledge of cloud platforms (AWS, GCP, Azure).
  • 44 views · 12 applications · 18d

    Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · B1 - Intermediate

    We’re looking for a Machine Learning Engineer with a strong background in Computer Vision and Generative AI to join our R&D team. You’ll build and optimize pipelines for virtual try-on, pose-guided image generation, and garment transfer systems using cutting-edge diffusion and vision models.

     

    Must-Have Skills

    Core ML & Engineering

    • Proficiency in Python and PyTorch (or JAX, but PyTorch preferred)
    • Strong understanding of CUDA and GPU optimization
    • Ability to build exportable, production-ready pipelines (TorchScript, ONNX)
    • Experience deploying REST inference services, managing batching, VRAM, and timeouts

    Computer Vision

    • Hands-on experience with image preprocessing, keypoint detection, segmentation, optical flow, and depth/normal estimation
    • Experience with human parsing & pose estimation using frameworks such as HRNet, SegFormer, Mask2Former, MMPose, or OpenPifPaf
    • Bonus: familiarity with DensePose or UV-space mapping

    Generative Models

    • Strong practical experience with diffusion models (e.g., Stable Diffusion, SDXL, Flux, ControlNet, IP-Adapter)
    • Skilled in inpainting, conditioning on pose, segmentation, or depth maps (a minimal inpainting sketch follows the Must-Have list)
    • Understanding of prompt engineering, negative prompts, and fine-tuning for control

    Garment Transfer Pipelines

    • Ability to align source garments to target bodies via pose-guided warping (thin-plate spline (TPS) or flow-based) or DensePose mapping
    • Must ensure preservation of body, skin, hair, and facial integrity

    Data & Experimentation

    • Experience in dataset creation and curation, augmentation, and experiment reproducibility
    • Competence in using W&B or MLflow for experiment tracking and DVC for data versioning
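
    A minimal sketch of the inpainting step from the Generative Models items above, using diffusers' StableDiffusionInpaintPipeline confined to a garment mask so face, skin, and hair stay untouched; the mask is assumed to come from an upstream human-parsing model, and the file names are placeholders.

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # One public inpainting checkpoint; a try-on system would likely add
    # pose/segmentation conditioning (e.g., ControlNet) on top.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    person = Image.open("person.png").convert("RGB").resize((512, 512))
    garment_mask = Image.open("garment_mask.png").convert("L").resize((512, 512))  # white = repaint

    result = pipe(
        prompt="a red wool sweater, photorealistic, studio lighting",
        negative_prompt="distorted hands, extra limbs, blurry",
        image=person,
        mask_image=garment_mask,
        num_inference_steps=30,
    ).images[0]
    result.save("tryon_draft.png")
    ```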

     

    Nice-to-Have

    • Understanding of SMPL rigging/retargeting and cloth simulation (blendshapes, drape heuristics)
    • Experience fine-tuning diffusion models via LoRA or Textual Inversion for brand or style consistency
    • Familiarity with NeRF or Gaussian Splatting (3D try-on and rendering)
    • Experience with model optimization for mobile/edge deployment (TensorRT, xFormers, half-precision, 8-bit quantization)
    • Awareness of privacy, consent, and face-handling best practices

     

    Tools & Frameworks

    • PyTorch, diffusers, xFormers
    • OpenCV, MMDetection, MMSeg, MMPose, or Detectron2
    • DensePose / SMPL toolchains
    • Weights & Biases, MLflow, DVC

     

    We Offer

    • Opportunity to work on cutting-edge generative AI applications in computer vision
    • R&D-focused environment with freedom to explore, test, and innovate
    • Competitive compensation and flexible work structure
    • Collaboration with a team of ML engineers, researchers, and designers pushing boundaries in human-centered AI
  • 17 views · 0 applications · 18d

    Data Science Consultant

    Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate

    This role is a great fit for leaders with a strong mathematics background and a desire to work hands-on with Data Science, AI, and ML, and could be the perfect opportunity to join EPAM as a Senior Manager - Data Science Consultant!
     

    Kindly note that this role supports remote work, but only from within Ukraine.



    Technologies

    • Python, Databricks, Azure ML, Big Data (Hadoop, Spark, Hive, etc.), AWS, Docker, Kubernetes, DB (PL/SQL, HQL, Mongo), Google (Vertex AI) or similar

    Responsibilities

    • Discover, envision and land Data Science, AI and Machine Learning opportunities alongside EPAM teams & clients
    • Lead cross-functional EPAM and/or EPAM clients' teams through the journey of understanding business challenges and defining solutions leveraging AI, Data Science, Machine Learning and MLOps
    • Work with clients to deliver AI Products which provide value to end-users
    • Participate in and drive EPAM competencies development, work on new EPAM offerings in the AI, Data Science, ML and MLE space, as well as work on refining existing offerings
    • Bring your creative engineering mind to deliver real-life practical applications of Machine Learning
    • Work closely with the DevOps practice on infrastructure and release planning

    Requirements

    • Consulting: Experience in exploring the business problem and converging it to applied AI technical solutions; expertise in pre-sales and solution definition activities
    • Data Science: 3+ years of hands-on experience with core Data Science, as well as knowledge of one of the advanced Data Science and AI domains (Computer Vision, NLP, Advanced Analytics, etc.)
    • Engineering: Experience delivering applied AI from concept to production, familiarity, and experience with MLOps, Data, design of Data Analytics platforms, data engineering, and technical leadership
    • Leadership: Track record of delivering complex AI-empowered and/or AI-empowering programs to clients in a leadership position. Experience managing and growing a team to scale up Data Science, AI & ML capability is a big plus
    • Excellent communication skills (active listening, writing and presentation), drive for problem solving and creative solutions, high EQ

    Nice to have

    • One or more business domains expertise (e.g. CPG, Retail, Financial Services, Insurance, Healthcare/ Life Science)
  • 48 views · 7 applications · 18d

    Senior AI / Machine Learning Engineer

    Part-time · Full Remote · EU · 5 years of experience · B2 - Upper Intermediate

    About the Project
    We are collaborating with a leading Healthcare company developing advanced AI-powered solutions for medical data processing, diagnostics support, and automation. The project focuses on scalable AI deployments with strong compliance and data security standards within the EU.

    The team is seeking a Senior AI / Machine Learning Engineer with solid hands-on experience in Large Language Models (LLMs), model fine-tuning, and cloud-based AI infrastructure. This is a part-time, long-term engagement with flexible working hours and potential for extension.

    Key Responsibilities

    • Lead the development, optimization, and maintenance of AI/ML systems.
    • Design, fine-tune, and deploy Large Language Model (LLM) solutions adapted to healthcare use cases.
    • Build and optimize APIs and pipelines using FastAPI, LangChain, and LangGraph.
    • Collaborate closely with cross-functional teams to define and implement AI-driven features.
    • Provide architectural guidance on cloud infrastructure (Azure / AWS) for scalable AI deployments.
    • Stay updated on the latest trends in Generative AI, LLM research, and best engineering practices.
    • Mentor and guide AI/ML engineers on technical and strategic initiatives.

    Requirements

    • 5+ years of experience in AI/ML engineering (preferably 7–10 years).
    • Proven hands-on experience with Large Language Models (LLMs) and model fine-tuning.
    • Strong experience with Python, FastAPI, LangChain, and/or LangGraph.
    • Practical experience with Azure OpenAI or OpenAI APIs.
    • Deep understanding of cloud environments (Azure, AWS) and scalable AI architecture.
    • Experience leading AI/ML projects from prototype to production.
    • Excellent communication skills in English (B2+ level).
    • Ability to work independently and mentor junior team members.
    • Experience in Healthcare projects is a plus (but not mandatory).

    Working Conditions

    • Part-time: approximately 16 hours per week (2–3 working days).
    • Duration: 12 months, with potential extension.
    • Remote work, flexible schedule based on mutual availability.
    • Location: Only candidates based in the European Union or relocated from Ukraine, CIS, Balkans, Asia, or Africa to EU countries (required due to data protection policies).

    Additional Information

    Please include the following details along with your CV:

    • Years of experience in AI/ML:
    • Experience with LLMs and fine-tuning:
    • English level:
    • Current country of residence:
    • Citizenship:
    • Availability to start:
    • Confirmation of part-time (16h/week) availability:
  • 66 views · 2 applications · 17d

    Data Scientist to $3250

    Full Remote · Ukraine · 2 years of experience · B2 - Upper Intermediate

    About Opal Data Consulting: 

    At Opal Data we combine business and technical expertise to create solutions for businesses. Traditional management consultants offer strategic advice without the technical skills to implement their proposed solutions. Software consultants often build tools that aren’t truly optimized for a business or organizational need. We combine the best of both worlds.

     

    We do several kinds of projects: building tools to help our clients understand their organizations in real time, building predictive models to improve our clients’ operations, and building custom applications. Our clients are typically small to medium-sized companies across industries ($2M–$100M revenue) or government agencies with similarly sized annual budgets.

     

    Building a real-time understanding of an organization often involves creating and populating a data warehouse by using APIs, scrapers, or prebuilt connectors to integrate all of a client's systems (ERP, CMS, marketing platforms, accounting systems, etc.), then writing ETL scripts in Python or SQL to shape and combine that data (often necessitating the creation of a cloud environment to host serverless functions), and finally building visualizations that allow the client to see what is happening in real time (in Tableau, Power BI, Looker, etc.). We often do a significant amount of related analytical work: looking for patterns, identifying areas of improvement, and creating software tools to reinforce those learnings (e.g., building notification systems so operations teams follow the best practices we identify in our analysis, automating tasks, etc.).

     

    Building predictive models to improve our clients’ operations involves using machine learning to solve particular organizational challenges or take advantage of opportunities. For instance, we have built models to predict which customers will churn (unsubscribe) in advance in order to identify the causal factors leading to churn as well as prioritize customers for outreach from customer retention teams. In other cases, we have built models to predict the performance of individual stores within a network to identify and spread best practices from outperforming stores as well as identify ideal locations for new store expansion.

    We are a small but nimble team looking to bring on a self-starter who excels at data science. 

     

    You can read more about us at: www.opal-data.com. 

     

    Job Summary:

    The Data Scientist will report directly to Opal’s founder / technical lead. As a core member of a small team, the position provides an opportunity for growth across a wide range of technical skillsets, as well as experience working in a wide range of industries across our client projects. 

     

    Because of the broad range of work we do, candidates are not expected to be experts in everything. The ideal candidate should have experience in many of the areas listed below in Major Responsibilities, and have strong interest in learning the tools and techniques in which they do not already have expertise. Raw intelligence, curiosity, and excitement about experimentation and learning are some of the most important determinants of success in this position. We believe strongly in developing our team members and promoting from within, and are looking for candidates who are interested in continuing to learn and grow within the organization.

     

    In addition to generous base compensation commensurate with experience, this position will also earn profit sharing. Each month, total compensation will be the greater of base compensation or profit sharing for that month. In good months, our staff typically earn 30-60% more than their base compensation.

     

    Major Responsibilities:

    • Use APIs and build scrapers to ingest data (a minimal ingest sketch follows this list)
    • Set up and work within cloud environments (Azure, AWS, GCP) to create and populate data warehouses and deploy ETL code to serverless functions
    • Create ETL scripts / data pipelines in Python / SQL to shape data for visualization and automation tasks
    • Visualize data and create dashboards in Tableau, Power BI, Looker, etc
    • Conduct one-off analyses to look for patterns and insights, make suggestions on future improvements to data collection, etc.
    • Create machine learning models, including variable creation from available data to turn hypotheses we want to test into variables
    • Work on application backends
    • Write clean, well-documented code
    • Create agents via prompt engineering, fine-tuning, and decision pooling with open-source LLMs
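
    A minimal sketch of the API-ingest pattern above, paging through a hypothetical REST endpoint and loading rows into a staging table, with SQLite standing in for a cloud warehouse.

    ```python
    import sqlite3

    import requests

    def fetch_all(url: str) -> list[dict]:
        """Walk a page-numbered endpoint until it returns an empty batch."""
        rows, page = [], 1
        while True:
            resp = requests.get(url, params={"page": page}, timeout=30)
            resp.raise_for_status()
            batch = resp.json()["results"]  # hypothetical response shape
            if not batch:
                return rows
            rows.extend(batch)
            page += 1

    con = sqlite3.connect("staging.db")  # stand-in for a warehouse staging schema
    con.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)")
    for r in fetch_all("https://api.example.com/v1/orders"):
        con.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)", (r["id"], r["total"]))
    con.commit()
    ```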

     

    Qualifications:

    • Bachelor’s degree in a computational field and a minimum of 2 years of full time work experience using Python as a Data Scientist, Data Engineer, or backend Software Developer; or, in lieu of formal education, a minimum of 4 years of technical work experience in those fields
    • Extremely proficient in Python
    • Proficient in SQL
    • Fluency in English
    • Please mention:
      • Experience with Javascript, particularly for scrapers
      • Any other programming languages used
      • Experience with Tableau, Power BI, DOMO, Looker, or other dashboarding platforms
      • Experience building or expanding data warehouses
      • Experience with DevOps: setting up cloud environments and deploying containerized (Docker) or serverless functions
      • Machine learning / predictive modeling experience
      • Prompt engineering / fine-tuning of LLMs for business workflows
  • 28 views · 4 applications · 17d

    Game Mathematician

    Full Remote · EU · Product · 2 years of experience

    Ixilix is a technology-driven company that builds high-quality solutions and long-term partnerships. Our team is growing, and we are looking for a Game Mathematician.

    Responsibilities:

    • Writing rules for slots, showing the mechanics of the game, and documenting them in the Wiki;
    • Creating slots that meet the RTP requirements imposed by the rules (main spins, free games, bonus games, features, etc.); a toy RTP simulation follows this list;
    • Analyzing and optimizing game volatility and payout curves;
    • Reviewing slot rules by checking them against the statistical data collected by bots during the server-side implementation of the game;
    • Participating in general meetings that affect slot mechanics;
    • Actively participate in game planning sessions and propose innovative mechanics.
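
    The toy RTP simulation referenced above: Monte Carlo spins against a made-up single-reel paytable, comparing the observed return to player with the design target.

    ```python
    import random

    random.seed(42)
    REEL = ["A"] * 5 + ["K"] * 4 + ["Q"] * 3 + ["7"] * 1  # one 13-stop strip
    PAYTABLE = {"A": 2, "K": 5, "Q": 10, "7": 200}        # payout for three of a kind

    def spin() -> float:
        symbols = [random.choice(REEL) for _ in range(3)]
        return PAYTABLE[symbols[0]] if len(set(symbols)) == 1 else 0.0

    n = 1_000_000
    rtp = sum(spin() for _ in range(n)) / n
    print(f"simulated RTP per 1-unit bet: {rtp:.4f}")
    ```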

    Required Skills:

    • 3+ years of experience in a Mathematician role in the gambling industry is a must;
    • Degree in Mathematics, Statistics, or related fields (Bachelor’s, Master’s, or Ph.D.);
    • Understanding of applied mathematics: probability theory and statistics, numerical methods, linear algebra, optimization;
    • Ability to describe processes in detail and clearly articulate thoughts in writing;
    • An analytical approach to working on problems;
    • Attention to detail and idea generation.
    • English B2+.
    • Ukrainian C1+.

    Preferred Skills:

    • Knowledge of algorithms and data structures, interpolation methods, regression models, theory of stochastic processes, cryptography;

    What we offer:

    Rewards & Celebrations 

    • Quarterly Bonus System
    • Team Buildings Compensations
    • Memorable Days Financial Benefit

    Learning & Development

    • Annual fixed budget for personal learning 
    • English Language Courses Compensation

    Time Off & Leave

    • Paid Annual Leave (Vacation) - 24 working days
    • Sick leave - unlimited number of days, fully covered

    Wellbeing Support

    • Mental Health Support (Therapy Compensation)
    • Holiday Helper Service

    Workplace Tools & Assistance

    • Laptop provided by Company (after probation)

    Work conditions:

    • Remote work from the EU
    • Flexible 8-hour workday, typically between 9:00 and 18:00 CET
    • Five working days, Monday to Friday
    • Public holidays observed according to Ukrainian legislation
    • Business trips to Bratislava every 3-6 months (expenses compensated by the company)


    At Ixilix, we value transparency, trust, and ownership. We believe that great results come from people who care - about their work, their team, and the impact they create. 

    Sounds like you? Let’s connect! We’re just one click away.

     

  • 58 views · 9 applications · 17d

    Data Scientist

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · B1 - Intermediate

    Trinetix is looking for a skilled Data Scientist.

    We are looking for a Data Scientist with strong expertise in Machine Learning and Generative AI. You will design, train, and fine-tune models using Python, TensorFlow, PyTorch, and Hugging Face, applying advanced statistical analysis and prompt engineering techniques. The role involves working with Azure ML, Databricks, and OpenAI APIs to deliver scalable, responsible AI solutions that drive data-informed decision-making. 

     

    Requirements 

    • Proven experience in building and fine-tuning Machine Learning (ML) and Generative AI models 
    • Strong proficiency in Python and familiarity with R for statistical modeling and data analysis 
    • Hands-on experience with TensorFlow, PyTorch, and Hugging Face frameworks 
    • Solid understanding of statistical analysis, feature engineering, and experimental design 
    • Practical experience using OpenAI API or similar LLM-based platforms 
    • Experience working with Azure Machine Learning or Databricks for model training and deployment 
    • Commitment to responsible AI practices, including model transparency, fairness, and bias mitigation 

     

    Nice-to-haves 

    • Experience with Generative AI prompt design and optimization techniques 
    • Familiarity with data visualization and storytelling tools (e.g., Power BI, Tableau) 
    • Understanding of MLOps and CI/CD workflows for ML models 
    • Experience collaborating in cross-functional AI teams or research-driven environments 
    • Background in cloud-based model orchestration or multi-modal AI systems 

     

    Core Responsibilities 

    • Design, train, and fine-tune ML and Generative AI models to solve complex business and analytical problems 
    • Conduct data preprocessing, feature engineering, and exploratory analysis to ensure model readiness 
    • Apply statistical and experimental methods to evaluate model performance and reliability 
    • Develop and optimize prompts for LLM-based solutions to improve accuracy and contextual relevance 
    • Collaborate with engineers and data teams to deploy models within Azure ML, Databricks, or other cloud environments 
    • Ensure adherence to responsible AI and ethical data use principles across all modeling stages 

     

    What we offer   

    • Continuous learning and career growth opportunities 
    • Professional training and English/Spanish language classes   
    • Comprehensive medical insurance 
    • Mental health support 
    • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more 
    • Flexible working hours 
    • Inclusive and supportive culture 
  • 71 views · 24 applications · 16d

    Data Scientist / Developer (PT to FT)

    Part-time · Full Remote · Worldwide · 3 years of experience · B2 - Upper Intermediate

    * Part-time (80 hours per month) transitioning to Full-time (160 hours per month) within 4–6 months.

     

    Company Description

    The Client is an American nonprofit organization providing essential services and support for individuals with disabilities. Their programs focus on adult disability services, mental health, and employment support, empowering people to live, learn, work, and thrive in their communities. They are dedicated to advancing full equity, inclusion, and access for all through life-changing disability and community services.

     

    Project Description

    We are seeking an experienced Part-Time Data Scientist / Developer to support our enterprise data and artificial intelligence initiatives. This role will be responsible for developing and maintaining robust data pipelines within Microsoft Fabric, enabling advanced analytics through Power BI, and supporting the development and maintenance of large language model (LLM)-based solutions. The ideal candidate will demonstrate expertise in both data engineering and AI/ML, with the ability to deliver high-quality, scalable solutions that meet organizational needs.

     

    Requirements

    • BA or MS in Statistics, Mathematics, Computer Science, or other quantitative field.
    • 3+ years of industry experience delivering and scaling ML products, on scope and on time.
    • Strong skills in Python and querying languages (e.g. SQL).
    • Experience in production-level coding with broad knowledge of healthy code practices.
    • Tech Stack: Microsoft Fabric, Power BI, DAX, Python, MS SQL Server.
    • Business acumen with proven ability to understand, analyze and problem solve business goals in detail.
    • Fluent English (both written and spoken).

     

    Duties and responsibilities

    Data Engineering and Integration

    • Design, develop, and maintain data pipelines in Microsoft Fabric.
    • Ingest, transform, and model data from multiple sources into the data lakehouse/warehouse.
    • Ensure data integrity, scalability, and performance optimization.

    Analytics & Reporting

    • Develop and optimize complex SQL queries to support analytics and reporting.
    • Build and maintain datasets and semantic models for Power BI.
    • Design, implement, and modify advanced Power BI visualizations and dashboards.
    • Partner with stakeholders to deliver accurate and actionable insights.

    AI & LLM Development

    • Develop, fine-tune, and maintain large language models for enterprise use cases.
    • Integrate LLM solutions with data assets in Microsoft Fabric and reporting environments.
    • Monitor and retrain models as required to maintain performance.

       

    Working conditions

    Mon–Fri, 9–5 (EST), with at least 4 hours of overlap with the team.
