Jobs Data & Analytics

  • 20 views · 0 applications · 19d

    Website Network Manager

    Full Remote · Worldwide · 3 years of experience · English - C1

    Who we are:
    Selesa offers IT outsourcing, specialist out-staffing, and project management services to enhance business operations. We focus on providing skilled professionals for IT needs, including development, security consulting, and business development. Selesa also manages sales, account management, and human resources to support company growth. Our services are known for efficiency, quality, and strong communication, making us a trusted partner for businesses looking to streamline and secure their IT infrastructure. Founded in Vilnius, Lithuania, we cater to global clients.


    Who we are looking for:

     

    We're looking for a Webmaster to own and scale our multi-site web network, from build and uptime to content operations and cross-team coordination.

     

    Responsibilities

     

    • Launch and manage multiple sites (CMS of choice); themes, plugins, hosting, domains, DNS, SSL, CDN.
    • Implement new pages/sections, templates, and components; create wireframes/mockups when needed.
    • Content operations: publish/format articles, graphics, landing pages; enforce brand and UX standards.
    • Maintain performance, security, and uptime (backups, updates, monitoring, caching).
    • Set up analytics & tracking, goals, and dashboards; run basic A/B tests.
    • SEO hygiene: technical checks (sitemaps, robots, schema, redirects, Core Web Vitals), on-page basics.
    • Coordinate change requests with developers; write clear tickets/specs and follow through to release.
    • Localization workflow across languages; manage translation assets and content parity.
    • QA before/after releases; document processes and maintain a site inventory.

     

    Requirements

    • 3–5+ years managing production websites (multi-site experience preferred).
    • Strong CMS skills, HTML/CSS, comfort with no-code/low-code tools.
    • Working knowledge: GA4, GTM, Search Console, PageSpeed/Lighthouse, CDN (Cloudflare/Akamai).
    • Solid English and communication; proven stakeholder coordination.
    • Organized, deadline-driven, high attention to detail.

    Nice to Have

    • Basic JS/PHP, REST/GraphQL familiarity; CI/CD awareness; accessibility (WCAG) and GDPR/CCPA basics.

     

    What we offer:

    • Fully remote position with a flexible schedule
    • Long-term opportunity with potential for financial and career advancement
    • Supportive and positive work culture, collaborating with like-minded teammates

       

    When submitting your application, please make sure to include your responses to the following screening questions in your COVER LETTER:

    1. Please explain to us your level of spoken/written English. Just rank it from 1 to 10, where 10 means Native Speaker, 8-9 Near Native Speaker, 6-7 Fluent Speaker, and below 6 any lower level.
    2. Can you describe your experience managing multiple content websites or digital products simultaneously?
    3. How do you approach building and maintaining a website roadmap?
    4. What are your monthly salary expectations for a long-term, full-time position (if we consider 40 hours a week)?
  • 80 views · 12 applications · 20d

    Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B1

    WHO WE ARE

    At Bennett Data Science, we've been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We're at the top of our field because we focus on actionable technology that helps people around the world. Our deep experience and product-first attitude set us apart from other groups and it's why people who work with us tend to stay with us long term.

     

    WHY YOU SHOULD WORK WITH US

    You'll work on an important problem that improves the lives of a lot of people. You'll be at the cutting edge of innovation and get to work on fascinating problems, supporting real products, with real data. Your perks include: expert mentorship from senior staff, competitive compensation, paid leave, flexible work schedule and ability to travel internationally.

    Essential Requirements for Data Platform Engineer:

    • Architecture & Improvement: Continuously review the current architecture and implement incremental improvements, facilitating a gradual transition of production operations from Data Science to Engineering.
    • AWS Service Ownership: Own the full lifecycle (development, deployment, support, and monitoring) of client-facing AWS services (including SageMaker endpoints, Lambdas, and OpenSearch). Maintain high uptime and adherence to Service Level Agreements (SLAs).
    • ETL Operations Management: Manage all ETL processes, including the operation and maintenance of Step Functions and Batch jobs (scheduling, scaling, retry/timeout logic, failure handling, logging, and metrics).
    • Redshift Operations & Maintenance: Oversee all Redshift operations, focusing on performance optimization, access control, backup/restore readiness, cost management, and general housekeeping.
    • Performance Optimization: Post-stabilization of core monitoring and pipelines, collaborate with the Data Science team on targeted code optimizations to enhance reliability, reduce latency, and lower operational costs.
    • Security & Compliance: Implement and manage the vulnerability monitoring and remediation workflow (Snyk).
    • CI/CD Implementation: Establish and maintain robust Continuous Integration/Continuous Deployment (CI/CD) systems.
    • Infrastructure as Code (Optional): Utilize IaC principles where necessary to ensure repeatable and streamlined release processes.
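
    The ETL operations bullet above mentions retry/timeout logic and failure handling for Batch jobs; a minimal Python sketch of that pattern with boto3 follows, where the job queue and job definition names are invented placeholders, not details from this posting:

      import boto3

      batch = boto3.client("batch", region_name="us-east-1")

      # Submit an ETL job with an explicit retry strategy and timeout.
      # "etl-queue" and "etl-job-def" are hypothetical names used for illustration only.
      response = batch.submit_job(
          jobName="nightly-etl-load",
          jobQueue="etl-queue",
          jobDefinition="etl-job-def",
          retryStrategy={"attempts": 3},             # retry a failed attempt up to 3 times
          timeout={"attemptDurationSeconds": 3600},  # terminate attempts that exceed 1 hour
          containerOverrides={"environment": [{"name": "RUN_DATE", "value": "2024-01-01"}]},
      )
      print("Submitted job:", response["jobId"])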


    Mandatory Hard Skills:

    • AWS Core Services: Proven experience with production fundamentals (IAM, CloudWatch, and VPC networking concepts).
    • AWS Deployment: Proficiency in deploying and operating AWS SageMaker and Lambda services.
    • ETL Orchestration: Expertise in using AWS Step Functions and Batch for ETL and job orchestration.
    • Programming & Debugging: Strong command of Python for automation and troubleshooting.
    • Containerization: Competence with Docker/containers (build, run, debug).
    • Version Control & CI/CD: Experience with CI/CD practices and Git (GitHub Actions preferred).
    • Data Platform Tools: Experience with Databricks, or a demonstrated aptitude and willingness to quickly learn.

    Essential Soft Skills:

    • Accountability: Demonstrate complete autonomy and ownership over all assigned systems ("you run it, you fix it, you improve it").
    • Communication: Fluent in English; capable of clear, direct communication, especially during incidents.
    • Prioritization: A focus on delivering a minimally-supportable, deployable solution to meet deadlines, followed by optimization and cleanup.
    • Incident Management: Maintain composure under pressure and possess strong debugging and incident handling abilities.
    • Collaboration: Work effectively with the Data Science team while communicating technical trade-offs clearly and maintaining momentum.
  • 135 views · 15 applications · 20d

    Data Analyst

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    WHO WE ARE

    At Bennett Data Science, we've been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We're at the top of our field because we focus on making our clients' products better. Our deep experience and product-first attitude set us apart from other groups and gets us the business results our clients want.

     

    WHY YOU SHOULD WORK WITH US

    You'll be exposed to a wide range of clients who are at the cutting edge of innovation in their field and get to work on fascinating problems, supporting real products, with real data. We help lots of companies, from some of the largest companies in the world to small startups in Silicon Valley who are building the next big thing. Your perks include: expert mentorship from senior staff, competitive compensation, paid leave, flexible work schedule and ability to travel internationally.

     

    Data Analyst / Scientist - Specific Requirements:

    As a junior data scientist or data analyst, you will be primarily involved in analyzing data and reporting on data distribution and quality issues. You will utilize your analytic skills to gain deeper insights into data and communicate your findings to support various data science initiatives. You will receive direct support from senior data scientists and strengthen your expertise.

     

    A successful candidate has 3-6 years of experience; a working understanding of statistics, data visualization, and communication; and some machine learning experience. They should also exhibit the following skills:

     

    - Experience with at least one visualization tool such as Tableau or Power BI

    - Knowledge and experience describing data distributions using statistical methods

    - Experience with metrics such as LTV, CAC, customer analysis, product analysis

    - Experience with initial and some exploratory data analysis (IDA/EDA)

    - Proven ability to identify and propose solutions to various data quality issues

    - Experience defining the data required and querying databases to support defined objectives, including combining data from multiple sources via grouping/aggregation to produce the desired datasets

    - Experience visualizing/presenting data for stakeholders

    - Experience with Python

    - Experience writing SQL queries
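
    The items above call for describing data distributions with statistical methods; a minimal pandas/SciPy sketch on a synthetic numeric column follows (the data and the 2x-median check are illustrative assumptions, not requirements from this posting):

      import numpy as np
      import pandas as pd
      from scipy import stats

      rng = np.random.default_rng(0)
      df = pd.DataFrame({"order_value": rng.lognormal(mean=3.0, sigma=0.6, size=5000)})  # synthetic data

      col = df["order_value"]
      print(col.describe())                           # count, mean, std, min, quartiles, max
      print("skewness:", stats.skew(col))             # asymmetry of the distribution
      print("excess kurtosis:", stats.kurtosis(col))  # tail heaviness relative to a normal
      print("share above 2x median:", (col > 2 * col.median()).mean())  # simple outlier/quality check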

  • 40 views · 1 application · 20d

    Middle/Senior ML/MLOps Engineer

    Office Work · Ukraine (Kyiv) · 3 years of experience · English - B2

    N-iX is looking for a Senior Machine Learning Engineer in Kyiv. You will be responsible for designing, developing, and deploying machine learning models at scale within the Palantir Foundry ecosystem. You will collaborate closely with data scientists, MLOps engineers, and data engineers to build robust, production-grade ML workflows, from data preparation and feature engineering to model training, evaluation, deployment, and monitoring.


    The company provides military reservation (after successfully passing the trial period).
     

    Key Responsibilities

    • Design and implement scalable ML models for use in predictive analytics, forecasting, and classification tasks.
    • Work with Palantir Foundry to build end-to-end ML pipelines, including custom Python code, Foundry Functions, and Ontology-aware feature generation.
    • Collaborate with Data Engineers to ensure high-quality, model-ready data flows from ingestion to inference.
    • Operationalize models using industry best practices for versioning, reproducibility, and monitoring (e.g., via MLflow or native Foundry tools).
    • Contribute to MLOps automation, including CI/CD for ML, drift detection, retraining pipelines, and evaluation dashboards.
    • Partner with business stakeholders and domain experts to translate scientific or commercial hypotheses into model-based solutions.
    • Apply rigorous experimentation and statistical validation to ensure models are explainable, generalizable, and regulatory-compliant.
    • Stay informed on the latest developments in ML/AI and proactively introduce innovative techniques and frameworks.
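
    The responsibilities above mention operationalizing models with versioning, reproducibility, and monitoring via MLflow; a minimal sketch of that kind of run tracking follows (the experiment name, toy data, and metrics are illustrative assumptions, not part of the project description):

      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=1000, n_features=20, random_state=42)  # toy data
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

      mlflow.set_experiment("forecasting-poc")  # hypothetical experiment name
      with mlflow.start_run():
          model = RandomForestClassifier(n_estimators=100, random_state=42)
          model.fit(X_tr, y_tr)
          acc = accuracy_score(y_te, model.predict(X_te))
          mlflow.log_param("n_estimators", 100)     # record hyperparameters for reproducibility
          mlflow.log_metric("accuracy", acc)        # record evaluation metrics per run
          mlflow.sklearn.log_model(model, "model")  # version the trained artifact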

     

    Must-Have Skills & Experience

    • 3+ years of experience in machine learning or applied data science, ideally in a production or enterprise setting.
    • Strong programming skills in Python, with deep experience in machine learning libraries such as scikit-learn, XGBoost, TensorFlow, or PyTorch.
    • Experience designing and deploying ML workflows at scale, preferably with experience in Foundry, KubeFlow, SageMaker, or similar platforms.
    • Familiarity with feature engineering, data imputation, sampling strategies, and evaluation techniques 
    • Hands-on experience with model deployment and monitoring, including logging metrics, detecting drift, and managing model lifecycles.
    • Comfort working with structured and unstructured data: tabular, time series, text, etc.
    • Solid understanding of data security, privacy, and compliance.
    • Strong communication and stakeholder engagement skills; capable of explaining complex models in simple terms.
    • English level at least Upper-Intermediate. 
       

    Will be a plus: 

    • Experience with Palantir Foundry.
       

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 110 views · 16 applications · 20d

    Senior Data Engineer

    Full Remote · Ukraine · Product · 5 years of experience · English - B2

    Join a Company That Invests in You

    Seeking Alpha is the world's leading community of engaged investors. We're the go-to destination for investors looking for actionable stock market opinions, real-time market analysis, and unique financial insights. At the same time, we're also dedicated to creating a workplace where our team thrives. We're passionate about fostering a flexible, balanced environment with remote work options and an array of perks that make a real difference.

    Here, your growth matters. We prioritize your development through ongoing learning and career advancement opportunities, helping you reach new milestones. Join Seeking Alpha to be part of a company that values your unique journey, supports your success, and champions both your personal well-being and professional goals.

     

    What We're Looking For

    Seeking Alpha is looking for a Senior Data Engineer responsible for designing, building, and maintaining the infrastructure necessary for analyzing large data sets. This individual should be an expert in data management, ETL (extract, transform, load) processes, and data warehousing and should have experience working with various big data technologies, such as Hadoop, Spark, and NoSQL databases. In addition to technical skills, a Senior Data Engineer should have strong communication and collaboration abilities, as they will be working closely with other members of the data and analytics team, as well as other stakeholders, to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with the overall business goals and objectives.

     

    What You'll Do

    • Work closely with data scientists/analytics and other stakeholders to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with business goals and objectives
    • Design, build and maintain optimal data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources, including external APIs, data streams, and data stores. 
    • Continuously monitor and optimize the performance and reliability of the data infrastructure, and identify and implement solutions to improve scalability, efficiency, and security
    • Stay up-to-date with the latest trends and developments in the field of data engineering, and leverage this knowledge to identify opportunities for improvement and innovation within the organization
    • Solve challenging problems in a fast-paced and evolving environment while maintaining uncompromising quality.
    • Implement data privacy and security requirements to ensure solutions comply with security standards and frameworks.
    • Enhance the team's dev-ops capabilities.

     

    Requirements

    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
    • 2+ years of proven experience developing large-scale software using an object-oriented or functional language.
    • 5+ years of professional experience in data engineering, focusing on building and maintaining data pipelines and data warehouses
    • Strong experience with Spark, Scala, and Python, including the ability to write high-performance, maintainable code
    • Experience with AWS services, including EC2, S3, Athena, Kinesis/Firehose, Lambda, and EMR
    • Familiarity with data warehousing concepts and technologies, such as columnar storage, data lakes, and SQL
    • Experience with data pipeline orchestration and scheduling using tools such as Airflow
    • Strong problem-solving skills and the ability to work independently as well as part of a team
    • High-level English - a must. 
    • A team player with excellent collaboration skills.
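
    As an illustration of the Airflow-based orchestration listed above, here is a minimal DAG sketch (Airflow 2.x assumed; the DAG id, schedule, and task bodies are placeholders, not details from this posting):

      from datetime import datetime

      from airflow import DAG
      from airflow.operators.python import PythonOperator

      def extract():
          print("pull data from source APIs")                # placeholder extract step

      def load():
          print("write transformed data to the warehouse")   # placeholder load step

      with DAG(
          dag_id="example_daily_etl",      # hypothetical pipeline name
          start_date=datetime(2024, 1, 1),
          schedule="@daily",               # Airflow 2.4+ spelling of the schedule argument
          catchup=False,
      ) as dag:
          extract_task = PythonOperator(task_id="extract", python_callable=extract)
          load_task = PythonOperator(task_id="load", python_callable=load)
          extract_task >> load_task        # run load only after extract succeeds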

      

    Nice to Have:

    • Expertise with Vertica or Redshift, including experience with query optimization and performance tuning
    • Experience with machine learning and/or data science projects
    • Knowledge of data governance and security best practices, including data privacy regulations such as GDPR and CCPA.
    • Knowledge of Spark internals (tuning, query optimization)
  • 31 views · 5 applications · 22d

    Game Mathematician

    Countries of Europe or Ukraine · Product · 3 years of experience · English - None

    Hello, future colleague!
     

    At DreamPlay, we create pixel-perfect slot games powered by our own engine. We are reinventing the gambling experience by delivering unique, high-quality games to the market.
    We are a team of professionals who value quality, ownership, transparency, and collaboration. We believe in a results-driven environment where everyone has the space to grow, contribute, and make an impact.

    We're currently looking for a Game Mathematician to join our team and help shape the core mechanics behind our games.

     

    Requirements:

    • Experience in developing mathematics for casino slots.
    • Strong analytical and problem-solving skills with a high level of attention to detail
    • Solid background in Combinatorics, Probability Theory, and Statistics
    • Advanced proficiency in MS Excel, including building and adapting large, complex spreadsheets
    • Strong critical thinking skills and the ability to manage multiple tasks simultaneously
       

    Key Responsibilities:

    • Test and validate mathematical outcomes to ensure accuracy and quality (using MS Excel, programming, and proprietary tools).
    • Design and maintain high-quality mathematical documentation, including math models, game logic, PAR sheets, and customer-facing materials.
    • Analyze and balance game mechanics to ensure fairness, performance, and regulatory compliance.
    • Run simulations and optimize mathematical algorithms to improve game performance and player engagement.
    • Maintain clear technical documentation to support collaboration across teams and meet compliance requirements.
    • Stay up to date with industry trends, emerging technologies, and competitor practices to continuously improve game design strategies.
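
    To illustrate the simulation work described above, here is a minimal Monte Carlo RTP estimate for a toy one-symbol pay model; the symbol weights and paytable are invented for illustration and do not describe any real game:

      import numpy as np

      rng = np.random.default_rng(seed=42)

      # Toy model: one weighted symbol draw per spin, fixed 1-credit bet.
      symbols = np.array(["A", "K", "Q", "J"])
      weights = np.array([0.02, 0.10, 0.28, 0.60])          # hypothetical symbol weights
      paytable = {"A": 20.0, "K": 3.0, "Q": 1.0, "J": 0.0}  # hypothetical payouts per 1-credit bet

      spins = 1_000_000
      draws = rng.choice(symbols, size=spins, p=weights)
      payout = np.vectorize(paytable.get)(draws).astype(float)

      rtp = payout.mean()             # return to player: expected payout per unit bet
      hit_rate = (payout > 0).mean()  # share of spins with any win
      print(f"Estimated RTP: {rtp:.4f}, hit rate: {hit_rate:.4f}")  # theoretical RTP here is 0.98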
       

    We Offer:

    • Opportunity to work remotely or from our Kyiv office.
    • Flexible working hours - you choose when to start your day.
    • Modern Mac equipment.
    • Career growth within a team of iGaming professionals.
    • Supportive, transparent team culture with minimal bureaucracy.
    • Time-off policy that fits real life (paid vacation, sick leave, public holidays).
    • Benefits for employees.
  • 66 views · 9 applications · 22d

    Senior Data Engineer (Healthcare domain)

    Full Remote · EU · 5 years of experience · English - None

    Are you passionate about building large-scale cloud data infrastructure that makes a real difference? We are looking for a Senior Data Engineer to join our team and work on an impactful healthcare technology project. This role offers a remote work format with the flexibility to collaborate across international teams.

    At Sigma Software, we deliver innovative IT solutions to global clients in multiple industries, and we take pride in projects that improve lives. Joining us means working with cutting-edge technologies, contributing to meaningful initiatives, and growing in a supportive environment.


    CUSTOMER
    Our client is a leading medical technology company. Its portfolio of products, services, and solutions is at the center of clinical decision-making and treatment pathways. Patient-centered innovation has always been, and will always be, at the core of the company. The client is committed to improving patient outcomes and experiences, regardless of where patients live or what they face. The Customer is innovating sustainably to provide healthcare for everyone, everywhere. 


    PROJECT
    The project focuses on building and maintaining large-scale cloud-based data infrastructure for healthcare applications. It involves designing efficient data pipelines, creating self-service tools, and implementing microservices to simplify complex processes. The work will directly impact how healthcare providers access, process, and analyze critical medical data, ultimately improving patient care.

     

    Responsibilities:

    • Collaborate with the Product Owner and team leads to define and design efficient pipelines and data schemas
    • Build and maintain infrastructure using Terraform for cloud platforms
    • Design and implement large-scale cloud data infrastructure, self-service tooling, and microservices
    • Work with large datasets to optimize performance and ensure seamless data integration
    • Develop and maintain squad-specific data architectures and pipelines following ETL and Data Lake principles
    • Discover, analyze, and organize disparate data sources into clean, understandable schemas

     

    Requirements:

    • Hands-on experience with cloud computing services in data and analytics
    • Experience with data modeling, reporting tools, data governance, and data warehousing
    • Proficiency in Python and PySpark for distributed data processing
    • Experience with Azure, Snowflake, and Databricks
    • Experience with Docker and Kubernetes
    • Knowledge of infrastructure as code (Terraform)
    • Advanced SQL skills and familiarity with big data databases such as Snowflake, Redshift, etc.
    • Experience with stream processing technologies such as Kafka, Spark Structured Streaming
    • At least an Upper-Intermediate level of English 
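
    The streaming requirement above names Kafka and Spark Structured Streaming; a minimal PySpark sketch of reading a topic and writing the parsed stream out follows (broker, topic, and checkpoint path are placeholders, and the spark-sql-kafka connector package is assumed to be on the classpath):

      from pyspark.sql import SparkSession
      from pyspark.sql.functions import col

      spark = SparkSession.builder.appName("events-stream-demo").getOrCreate()

      # Read a Kafka topic as a streaming DataFrame (placeholder broker/topic names).
      events = (
          spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "patient-events")
          .load()
      )

      # Kafka delivers key/value as binary; cast the payload to string for downstream parsing.
      parsed = events.select(col("key").cast("string"), col("value").cast("string"))

      # Console sink for demonstration; a real pipeline would target a Data Lake table instead.
      query = (
          parsed.writeStream.format("console")
          .option("checkpointLocation", "/tmp/checkpoints/events-demo")
          .outputMode("append")
          .start()
      )
      query.awaitTermination()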

     

  • 111 views · 33 applications · 22d

    Business Analyst

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B2

    Help industries go greener while shaping the future of sustainable technology!

     

    We're looking for a Business Analyst to join a global initiative that improves data accuracy, boosts efficiency, and ensures compliance with international environmental standards.

    As a Business Analyst, you will play a crucial role in bridging the gap between technical teams and project stakeholders to ensure that system enhancements meet both technical specifications and business requirements.

    You will also be responsible for gathering requirements, defining project deliverables, and documenting processes. If you're ready to use your expertise to deliver meaningful change on a global scale, this is your chance!

     

    Customer

    Our client is working on upgrading their environmental data management system, which is essential for providing transparent and reliable impact assessments across various product lines.
    Aligning the system with current best practices reinforces the client’s commitment to sustainability and regulatory compliance.

     

    Project
    This initiative is a pivotal step in helping industries to make informed decisions that prioritize environmental responsibility. The project focuses on integrating the latest industry standards and guidelines to ensure compliance with international environmental regulations. The enhancements aim to refine data accuracy and elevate system efficiency.

     

    Requirements

    • At least 4 years of commercial experience as a Business Analyst
    • Proven experience in eliciting requirements
    • Meticulous approach to creating documentation
    • Fair knowledge and understanding of technology concepts
    • Ability to document test scenarios and acceptance criteria
    • Understanding of basic mathematics and coding, as well as knowledge of Front-end, Back-end, and databases to communicate with the development team
    • Advanced level of English

       

    Personal Profile

    • Excellent written and verbal communication skills, including presentation skills
    • Analytical skills and attention to detail
    • Ability to adapt quickly

       

       

    Responsibilities

    • Act as a liaison between business stakeholders and the technical team
    • Understand business needs; elicit, validate, prioritize, and manage requirements; and ensure that the delivered solution aligns with those requirements
    • Translate business needs into clear, detailed, and unambiguous requirements
    • Document, analyze, and visualize business processes, workflows, and systems
    • Document requirements with sound acceptance criteria
  • 64 views · 6 applications · 22d

    Senior Business Analyst (Data Team)

    Ukraine · 5 years of experience · English - B2

    N-iX is looking for a Senior Business Analyst for one of our clients.
    Our US-based customer focuses on innovative online education and simulation platforms, primarily in healthcare and allied health.

    We are seeking a Senior Business Analyst to join our core Data Team. This position is critical to the successful delivery of complex data projects. You will work closely with Product Owners, Project Managers, Data Engineers, and Data Analysts to flesh out project requirements, drive comprehensive documentation and testing, and support both project and change management processes.

     

    Key Responsibilities:

    • Collaborate with stakeholders to elicit, analyze, and refine complex data and business requirements
    • Translate requirements into detailed documentation, including functional specs, user stories, and data/process flows
    • Develop and maintain test cases, assist with UAT, and ensure traceability from requirements through delivery
    • Support project and change management activities, identifying dependencies and managing change requests
    • Facilitate workshops, requirements sessions, and walkthrough meetings with cross-functional teams
    • Proactively address project risks, issues, and blockers related to requirements or delivery
    • Create and maintain clear data dictionaries, process and data mapping, and documentation shared with the Data Team
    • Coach and mentor junior analysts; foster knowledge-sharing within the team
    • Contribute to the ongoing standardization of documentation and requirements management best practices

     

    Requirements:

    • Proven 5+ years' experience as a Business Analyst, with at least 2 years in data-centric or analytics projects
    • Immediate availability or short notice period (able to start ASAP)
    • Demonstrated expertise in requirements engineering, documentation, and test case development
    • Experience with business process and data modeling (BPMN, UML, or similar)
    • Excellent writing and communication skills in English
    • Strong experience supporting project and change management activities
    • Familiarity with Jira, Confluence, and relevant BA and process modeling tools
    • Experience working as part of a cross-functional data team (collaborating with Data Engineers, Analysts, PMs, Product Owners)
    • Analytical mindset, strong attention to detail, and a data-driven approach
    • Ability to deliver reliably in a fast-paced, agile environment

     

    Nice to Have:

    • Hands-on experience writing SQL queries (for data validation or reporting)
    • Practical experience with Tableau (creating reports, supporting analytics delivery)
    • Experience in cloud data environments (AWS, GCP, Azure) is a plus
    • Previous experience in consultancy, finance, or the Customer's industry verticals
    • Relevant certifications (e.g., CBAP, PMI-PBA)
    • Experience supporting digital transformation, data governance, or MDM initiatives

     

    We offer*:

    • Flexible working format - remote, office-based or flexible
    • A competitive salary and good compensation package
    • Personalized career growth
    • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
    • Active tech communities with regular knowledge sharing
    • Education reimbursement
    • Memorable anniversary presents
    • Corporate events and team buildings
    • Other location-specific benefits

    *not applicable for freelancers

  • 100 views · 20 applications · 22d

    Monetization Analyst

    Worldwide · 3 years of experience · English - B2 · Ukrainian Product 🇺🇦

    Hi! We're Leaply, a fast-growing ecosystem of digital and physical products designed to help people build healthier habits and lead better lives. We launched in 2024 and have already seen explosive growth (10x revenue in the past six months) with strong traction across the US, Europe, and beyond.

    At Leaply, we challenge traditional career paths: we value ownership over job titles, and impact over process. As we scale, we're looking for a sharp, data-driven Analyst to refine our ops team, enhance team performance, and implement scalable systems.

     

    Role Overview:

    As an Analyst, you will be the strategic bridge between raw data and operational excellence. You will join our Operations team to drive efficiency across three key pillars: Monetization, Support, and Online reputation. Your role goes beyond just building reports; you will be expected to uncover hidden patterns, propose data-backed solutions, and actively participate in their implementation, including our upcoming AI-driven automation initiatives.

     

    Technical Stack you will work with:

    • Data Warehouse: Google BigQuery (Advanced SQL is a must).
    • Visualization: Tableau (Creating intuitive, actionable executive dashboards).
    • Product Analytics: Amplitude (Analyzing user behavior and funnel performance).
    • Innovation: Implementation and optimization of AI/LLM tools for operational workflows.
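
    To illustrate the BigQuery and advanced SQL side of this stack, here is a minimal sketch of running a CTE plus window-function query from Python (the project, dataset, and table names are invented placeholders):

      from google.cloud import bigquery

      client = bigquery.Client()  # uses default credentials and project

      # Hypothetical table; the CTE + window function pattern is the point of the example.
      sql = """
      WITH payments AS (
        SELECT user_id, amount, created_at
        FROM `my-project.ops.payments`   -- placeholder dataset and table
        WHERE status = 'settled'
      )
      SELECT
        user_id,
        amount,
        SUM(amount) OVER (PARTITION BY user_id ORDER BY created_at) AS running_total
      FROM payments
      """

      for row in client.query(sql).result():
          print(row.user_id, row.running_total)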

     

    Key Responsibilities:

    • Foundation Building & Data Architecture: As the first analyst in this direction, you will own the end-to-end data flow, from querying and structuring raw data in BigQuery to ensuring a "single source of truth" for the entire Ops department.
    • Advanced Analytics & Modeling: Move beyond basic reporting. You will develop antifraud models to mitigate risks and design complex behavioral segmentations to understand user lifetime value and churn triggers.
    • Performance Monitoring: Architecting the automated KPI ecosystem and health-check dashboards for Payments, Support, and FirstLook.
    • Strategic Insight Generation: Conducting high-level deep-dives into payment failure trends, transaction routing optimization, and operational bottlenecks.
    • AI & Automation Integration: Leading the charge in implementing AI/LLM solutions, from automated support ticket classification to predictive routing and fraud detection.
    • Actionable Implementation: Working as a strategic partner to Ops Leads to ensure your data models translate into direct business growth and process overhauls.

     

    Potential Candidate Profile:

    • Experience: 2+ years of experience in Data or Operations Analytics (preferably in Fintech, SaaS, or high-growth startups).
    • SQL Mastery: Proficiency in writing complex, optimized queries (CTEs, Window Functions, etc.).
    • Visualization Skills: Proven ability to tell a story with data in Tableau, making complex metrics easy for stakeholders to digest.
    • Analytical Mindset: Strong understanding of product funnels (Amplitude) and business unit economics.
    • Adaptability: A keen interest in AI/Machine Learning and a willingness to experiment with new automation tools.
    • Communication: Fluent in translating data findings into clear business recommendations.

     

    Focus Areas (Your Impact)

    • Payments: Minimizing cancellation rates, optimizing transaction flows, user segmentation, etc.
    • Support: Analyzing CSAT and other metrics, and identifying high-impact areas for AI chatbot implementation.
    • Online reputation: Integrating analytical tools for sentiment tracking across different platforms.

       

    Why This Role?

    • Build the Function from Scratch: You aren't just joining a team; you are establishing its analytical culture. You will have the autonomy to define the tools, the methodologies, and the standards for how Operations uses data.
    • Path to Data Science: This role is designed for growth. With our focus on AI, anti-fraud modeling, and complex segmentations, you will have the perfect sandbox to transition into Data Science or Advanced Machine Learning roles if you choose to.
    • High Visibility & Ownership: As the pioneer analyst in this direction, your insights will directly influence executive-level decisions. You will see the immediate impact of every model you build.
    • AI-First Environment: You won't be stuck with legacy processes. You will be at the forefront of implementing AI-driven operations, giving you a massive competitive edge in the future market.

     

    What you’ll get:

    • Competitive compensation and a high-impact role.
    • Full Ownership: Freedom to optimize and evolve the support function as you see fit.
    • Support from the best: Access to internal professional communities (Marketing, Product, Operations) within the SKELAR network.
    • A meaningful mission: Help millions of people live healthier lives every day.

    Leaply is backed by SKELAR, a venture builder with over 10 successful B2C businesses. Beyond business, we support the SKELAR Foundation, a charitable initiative created by employees to support the Ukrainian Armed Forces.

    Join us in building the next big thing!

  • 56 views · 15 applications · 22d

    MLOps Architect

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · English - B2

    About the Role

    We are looking for a Senior MLOps Architect to lead high-stakes AI and Data projects for our enterprise customers. In this role, you will act as the technical authority, helping clients bridge the gap between experimental data science and production-grade operations, primarily on Google Cloud Platform. You will lead projects that involve building end-to-end MLOps pipelines from scratch, migrating workloads to Vertex AI, and standardizing model deployment. You will usually act as the "trusted advisor," owning both the architecture and the delivery.

    Key Responsibilities 

    ● Customer Leadership: Lead technical kickoffs, discovery workshops, and architecture reviews directly with client CTOs, VP R&D, and Data Science leads. 

    ● Architecture & Design: Design robust, scalable MLOps architectures using Google Cloud Platform services (Vertex AI, GKE, BigQuery, Cloud Build, Cloud Storage). 

    ● Implementation & Automation: Build "Golden Paths" for model deployment. Implement CI/CD pipelines for ML, automated retraining workflows, and model monitoring systems to allow Data Scientists to deploy self-sufficiently. 

    ● Production Engineering: Operationalize ML models in high-scale environments. Troubleshoot complex infrastructure issues (e.g., GPU provisioning, container orchestration, scaling strategies). 

    ● Strategic Advisory: Advise customers on best practices for MLOps maturity, cost optimization (FinOps for AI), and data governance.

    Requirements (Must Have)

    ● MLOps Experience: At least 3 years specializing in MLOps and building production ML pipelines.

    ● Google Cloud Expert: Deep, hands-on experience with GCP core services (Compute Engine, GKE, IAM, Networking) and specifically Vertex AI (Pipelines, Feature Store, Model Registry).

    ● Customer-Facing Skills: Proven ability to lead projects, manage stakeholders, and explain complex technical concepts to clients. 

    ● Containerization & Orchestration: Strong proficiency with Docker and Kubernetes (GKE). 

    ● Coding: Strong proficiency in Python and SQL. 

    ● CI/CD for ML: Experience implementing pipelines using tools like Cloud Build, GitHub Actions, or Jenkins.

    Big Advantage (Nice to Have)

    ● Databricks Expertise: Experience with the Databricks Lakehouse platform, Unity Catalog, and MLflow is a major plus. Many of our clients use Databricks alongside GCP, so this skill will be highly valued.

    ● Certifications: Google Cloud Professional Machine Learning Engineer or Professional Cloud Architect. 

    ● GenAI Experience: Experience deploying Large Language Models (LLMs) or working with Gemini/Claude APIs in production.
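
    As a minimal illustration of the Vertex AI Pipelines and CI/CD-for-ML items above, here is a sketch of submitting an already compiled pipeline with the Python SDK (the project, region, bucket, template path, and parameters are placeholders, not client specifics):

      from google.cloud import aiplatform

      # Placeholder project, region, and bucket for illustration only.
      aiplatform.init(
          project="my-gcp-project",
          location="europe-west4",
          staging_bucket="gs://my-mlops-artifacts",
      )

      # Assumes the pipeline was already compiled (e.g. with the KFP SDK) to train_pipeline.json.
      job = aiplatform.PipelineJob(
          display_name="churn-train-pipeline",
          template_path="train_pipeline.json",
          pipeline_root="gs://my-mlops-artifacts/pipeline-root",
          parameter_values={"train_data_uri": "gs://my-mlops-artifacts/data/train.csv"},
      )
      job.run(sync=False)  # submit and return; progress is then monitored in the Vertex AI console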

  • 20 views · 3 applications · 22d

    Senior Snowflake Data Engineer

    Full Remote · Ukraine · 5 years of experience · English - B2
    • The project is for one of the world's best-known science and technology companies in the pharmaceutical industry, supporting initiatives in AWS, AI, and data engineering, with plans to launch over 20 additional initiatives in the future. Modernizing the data infrastructure through the transition to Snowflake is a priority, as it will enhance capabilities for implementing advanced AI solutions and unlock numerous opportunities for innovation and growth.

      We are seeking a highly skilled Snowflake Data Engineer to design, build, and optimize scalable data pipelines and cloud-based solutions across AWS, Azure, and GCP. The ideal candidate will have strong expertise in Snowflake, ETL tools like DBT, Python, visualization tools like Tableau, and modern CI/CD practices, with a deep understanding of data governance, security, and role-based access control (RBAC). Knowledge of data modeling methodologies (OLTP, OLAP, Data Vault 2.0), data quality frameworks, Streamlit application development, SAP integration, and infrastructure-as-code with Terraform is essential. Experience working with different file formats such as JSON, Parquet, CSV, and XML is highly valued.

     

     

    • Responsibilities:

      • Design and develop data pipelines using Snowflake and Snowpipe for real-time and batch ingestion.
      • Implement CI/CD pipelines in Azure DevOps for seamless deployment of data solutions.
      • Automate DBT jobs to streamline transformations and ensure reliable data workflows.
      • Apply data modeling techniques including OLTP, OLAP, and Data Vault 2.0 methodologies to design scalable architectures.
      • Document data models, processes, and workflows clearly for future reference and knowledge sharing.
      • Build data tests, unit tests, and mock data frameworks to validate and maintain the reliability of data solutions.
      • Develop Streamlit applications integrated with Snowflake to deliver interactive dashboards and self-service analytics.
      • Integrate SAP data sources into Snowflake pipelines for enterprise reporting and analytics.
      • Leverage SQL expertise for complex queries, transformations, and performance optimization.
      • Integrate cloud services across AWS, Azure, and GCP to support multi-cloud data strategies.
      • Develop Python scripts for ETL/ELT processes, automation, and data quality checks.
      • Implement infrastructure-as-code solutions using Terraform for scalable and automated cloud deployments.
      • Manage RBAC and enforce data governance policies to ensure compliance and secure data access.
      • Collaborate with cross-functional teams, including business analysts and business stakeholders, to deliver reliable data solutions.

     

     

    • Mandatory Skills Description:

      • Strong proficiency in Snowflake (Snowpipe, RBAC, performance tuning).
      • Hands-on experience with Python, SQL, Jinja, and JavaScript for data engineering tasks.
      • CI/CD expertise using Azure DevOps (build, release, version control).
      • Experience automating DBT jobs for data transformations.
      • Experience building Streamlit applications with Snowflake integration.
      • Cloud services knowledge across AWS (S3, Lambda, Glue), Azure (Data Factory, Synapse), and GCP (BigQuery, Pub/Sub).

     

    • Nice-to-Have Skills Description:

      • Cloud certifications are a plus

     

     

    • Languages:
      • English: B2 Upper Intermediate
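
    One responsibility above is building Streamlit applications integrated with Snowflake; here is a minimal sketch using the Snowflake Python connector (credentials come from Streamlit secrets, and the warehouse, database, and table names are placeholders, not project details):

      import snowflake.connector
      import streamlit as st

      st.title("Sales overview")  # minimal self-service dashboard

      # Connection parameters are read from .streamlit/secrets.toml (placeholder keys).
      conn = snowflake.connector.connect(
          account=st.secrets["sf_account"],
          user=st.secrets["sf_user"],
          password=st.secrets["sf_password"],
          warehouse="ANALYTICS_WH",
          database="ANALYTICS",
          schema="PUBLIC",
      )

      cur = conn.cursor()
      cur.execute("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")  # placeholder table
      df = cur.fetch_pandas_all()  # requires the pandas extra of the connector

      st.dataframe(df)                               # raw numbers
      st.bar_chart(df.set_index("REGION")["TOTAL"])  # Snowflake uppercases unquoted column names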
  • 39 views · 3 applications · 23d

    Machine Learning Engineer (Real-Time Inference Systems)

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - C2

    Our client is a leading mobile marketing and audience platform empowering the global app ecosystem with advanced solutions in mobile marketing, audience building, and monetization.

    With direct integrations into 500,000+ mobile apps worldwide, they process massive volumes of first-party data to deliver intelligent, real-time, and scalable advertising decisions. Their platform operates at extreme scale, serving billions of requests per day under strict latency and performance constraints.

    About the Role

    We are looking for a highly skilled, independent, and driven Machine Learning Engineer to own and lead the design and development of our next-generation real-time inference services.

    This is a rare opportunity to take ownership of mission-critical systems on a massive scale, working at the intersection of machine learning, large-scale backend engineering, and business logic.

    You will build robust, low-latency services that seamlessly combine predictive models with dynamic decision logic while meeting extreme requirements for performance, reliability, and scalability.

    Responsibilities

    • Own and lead the design and development of low-latency inference services handling billions of requests per day
    • Build and scale real-time decision-making engines, integrating ML models with business logic under strict SLAs
    • Collaborate closely with Data Science teams to deploy models reliably into production
    • Design and operate systems for model versioning, shadowing, and A/B testing in runtime
    • Ensure high availability, scalability, and observability of production services
    • Continuously optimize latency, throughput, and cost efficiency
    • Work independently while collaborating with stakeholders across Algo, Infra, Product, Engineering, Business Analytics, and Business teams

    Requirements

    • B.Sc. or M.Sc. in Computer Science, Software Engineering, or a related technical field
    • 5+ years of experience building high-performance backend or ML inference systems
    • Strong expertise in Python
    • Hands-on experience with low-latency APIs and real-time serving frameworks
      (FastAPI, Triton Inference Server, TorchServe, BentoML)
    • Experience designing scalable service architectures
    • Strong knowledge of async processing, message queues, and streaming systems
      (Kafka, Pub/Sub, SQS, RabbitMQ, Kinesis)
    • Solid understanding of model deployment, online/offline feature parity, and real-time monitoring
    • Experience with cloud platforms (AWS, GCP, or OCI)
    • Strong hands-on experience with Kubernetes
    • Experience with in-memory / NoSQL databases
      (Aerospike, Redis, Bigtable)
    • Familiarity with observability stacks: Prometheus, Grafana, OpenTelemetry
    • Strong sense of ownership and ability to drive solutions end-to-end
    • Passion for performance, clean architecture, and impactful systems
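
    FastAPI is one of the serving frameworks listed above; here is a minimal sketch of a real-time inference endpoint (the model is a stub standing in for a real artifact, and the feature schema is invented for illustration):

      from typing import List

      from fastapi import FastAPI
      from pydantic import BaseModel

      app = FastAPI()

      class BidRequest(BaseModel):  # hypothetical feature schema
          user_id: str
          app_id: str
          features: List[float]

      def score(features: List[float]) -> float:
          # Stub for a real model loaded once at startup (e.g. from a model registry).
          return sum(features) / (len(features) or 1)

      @app.post("/predict")
      async def predict(req: BidRequest) -> dict:
          value = score(req.features)
          # Business logic (thresholds, fallbacks, shadow traffic) would wrap the raw score here.
          return {"user_id": req.user_id, "score": value, "decision": value > 0.5}

      # Run locally with: uvicorn main:app --host 0.0.0.0 --port 8080
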
  • 64 views · 27 applications · 23d

    ML/AI/Data Engineer

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    About the Company

    Our client is an early-stage B2B SaaS company building intelligent automation software for finance teams. The platform focuses on real-time financial visibility, automated data ingestion, and advanced cost modeling using modern AI techniques.

    The company operates in a large and fast-growing market, with strong early customer validation and active design partners. The team is small, product-driven, and focused on building high-quality software for customers who expect robust, enterprise-grade solutions. This role offers meaningful ownership over core systems and the opportunity to shape foundational technology from the ground up.

     

    About the Role

    The company is seeking an AI & Data Engineer to develop the intelligence layer of its platform. In this role, you will design and implement systems that transform messy, heterogeneous business data - such as emails, documents, and spreadsheets - into structured financial models.

    You'll work largely from first principles, deploying services to AWS and operating with a strong MVP mindset: prioritizing simple, effective solutions that can be shipped quickly and iterated on. You'll collaborate closely with a SaaS engineering team to surface extracted insights through a client-facing dashboard.

    Strong engineering fundamentals are expected, including version control, testing, CI/CD, and the ability to break complex problems into small, testable increments while communicating clearly with the team.

     

    Responsibilities:

    • Build and maintain ETL pipelines for collecting, cleaning, and structuring customer data;
    • Implement document ingestion and vectorization workflows
    • Apply NLP and LLM-based approaches to extract structured insights from unstructured data;
    • Develop unsupervised models to infer financial structures, cost drivers, and relationships;
    • Design custom algorithms to align extracted data with organizational hierarchies and financial models;
    • Collaborate with frontend and product engineers to present insights in a clear, intuitive way;
    • Maintain strong engineering practices around testing, version control, automation, and documentation;
    • Optionally contribute to AWS deployments, infrastructure orchestration, and service integration.
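
    The vectorization responsibility above can be illustrated with a minimal embeddings-plus-cosine-search sketch using sentence-transformers; the model name and documents are illustrative choices, not a prescribed stack:

      import numpy as np
      from sentence_transformers import SentenceTransformer

      # Illustrative open model; any embedding model or API could fill this role.
      model = SentenceTransformer("all-MiniLM-L6-v2")

      documents = [
          "Invoice #1042: cloud hosting, $12,300, due March 31",
          "Payroll summary for February: 14 employees, $96,000 total",
          "Office lease renewal, monthly rent $4,500 starting May",
      ]
      doc_vecs = model.encode(documents, normalize_embeddings=True)

      query = "How much do we spend on infrastructure?"
      query_vec = model.encode([query], normalize_embeddings=True)[0]

      # With normalized vectors, cosine similarity reduces to a dot product.
      scores = doc_vecs @ query_vec
      best = int(np.argmax(scores))
      print(f"Best match ({scores[best]:.2f}): {documents[best]}")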

       

    Required Skills & Experience

    • Hands-on experience with embeddings and vector databases;
    • Strong background working with NLP models and large language models (local inference and/or APIs);
    • Proven experience building data pipelines and data processing workflows;
    • Research-oriented mindset with the ability to design custom solutions beyond off-the-shelf tools;
    • Experience deploying and operating systems on AWS;
    • Familiarity with automation or data acquisition tools (e.g., workflow automation, scraping, integrations);
    • Ability to work independently, iterate quickly, and manage ambiguity;
    • Clear communicator who can reason through technical trade-offs;
    • Flexibility in working hours when needed.

       

    Nice to Have

    • Experience applying AI to finance, analytics, or enterprise data problems;
    • Broader cloud or infrastructure experience;
    • Familiarity with event-driven systems or microservice architectures;
    • Background in unsupervised learning on large, messy, real-world datasets.

       

      We Offer: 

    • Competitive market salary. 
    • Fully remote work. 
    • Convenient and somewhat flexible working hours. 
    • 28 days of paid time off per calendar year. 
    • The chance to work on meaningful, socially valuable products alongside a highly professional, US-based international team.
    • Interesting technical challenges with opportunities to grow and learn.