Jobs Data & Analytics

  • · 71 views · 9 applications · 15d

    ML Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B1

    We are helping one of our product partners build a core ML function and are looking for an ML Engineer to join their internal team. The partner is building an autonomous advertising system that already creates and runs campaigns through agents. The next critical step is adding an intelligence layer that learns from outcomes and optimizes decisions autonomously.

     

    Mission
    Build machine learning and reinforcement learning systems that maximize ROI across hundreds of concurrent advertising campaigns through autonomous optimization.

     

    Responsibilities

    • Design and build reinforcement learning models for autonomous campaign optimization
    • Develop prediction models to forecast campaign profitability before spend
    • Implement automated bid adjustment, campaign kill-switches, and budget reallocation logic
    • Analyze around two years of historical campaign data in BigQuery to identify performance patterns
    • Design reward functions and feedback loops for AI agents
    • Run A/B tests comparing ML-driven optimization with rule-based approaches
    • Collaborate with an AI Creative Strategist to validate which creative patterns drive the highest ROI
    • Monitor models in production, retrain them, and continuously iterate
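
    For illustration, a minimal sketch of the budget-reallocation logic described above, using Thompson sampling over Beta posteriors; the campaign names and numbers are hypothetical:

    # Illustrative only: Thompson-sampling budget reallocation across campaigns.
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical per-campaign conversion history: (successes, failures).
    history = {"camp_a": (120, 880), "camp_b": (45, 455), "camp_c": (200, 1800)}

    def allocate_budget(history, total_budget=1000.0, n_samples=10_000):
        """Split budget in proportion to how often each campaign wins
        under Beta-posterior Thompson sampling."""
        names = list(history)
        draws = np.column_stack([
            rng.beta(s + 1, f + 1, size=n_samples) for s, f in history.values()
        ])
        win_rate = np.bincount(draws.argmax(axis=1), minlength=len(names)) / n_samples
        return {name: total_budget * w for name, w in zip(names, win_rate)}

    print(allocate_budget(history))

    The same posterior machinery extends naturally to kill-switches: a campaign whose probability of clearing a profitability threshold drops below some cutoff gets paused.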


    Must-have requirements

    • Degree in Mathematics, Physics, Computer Science, or another quantitative field
    • At least 2 years of hands-on experience with machine learning models in production
    • Strong Python skills with the ability to write clean, testable, deployable code
    • Practical experience with at least one of the following: reinforcement learning, optimization algorithms, time-series forecasting, recommendation systems
    • Confident SQL skills (BigQuery or equivalent)
    • AI-native working style with daily use of modern AI coding tools
    • Ability to take solutions from research to production independently
    • Self-directed mindset with the ability to define and execute a work plan based on business goals
       

    Nice to have

    • Experience with Google Vertex AI, GCP, or similar cloud ML platforms
    • Background in AdTech, programmatic advertising, or real-time bidding
    • Published work or personal projects related to reinforcement learning or autonomous systems
    • Experience building systems that operate on real money
    • Experience with LLM-based agent orchestration


    Language requirements
    English at Upper-Intermediate level or higher. Code and data are the primary working languages. No client-facing communication is required.
     

    Work format
    Full-time, fully remote.

     

  • · 42 views · 4 applications · 15d

    Senior Data Engineer (Data Competency Center)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Are you a Senior Data Engineer passionate about building scalable, secure, and high-performance data solutions? Join our Data Engineering Center of Excellence at Sigma Software and work on diverse projects that challenge your skills and inspire innovation.

     

    At Sigma Software, we value expertise, continuous learning, and a supportive environment where your career path is shaped around your strengths. You'll be part of a collaborative team, gain exposure to cutting-edge technologies, and work in an inclusive culture that fosters growth and innovation.

    Project

    Our Data Engineering Center of Excellence (CoE) is a specialized unit focused on designing, building, and optimizing data platforms, pipelines, and architectures. We work across diverse industries, leveraging modern data stacks to deliver scalable, secure, and cost-efficient solutions.

    Job Description

    • Collaborate with clients and internal teams to clarify technical requirements and expectations
    • Implement architectures using Azure or AWS cloud platforms
    • Design, develop, optimize, and maintain squad-specific data architectures and pipelines
    • Discover, analyze, and organize disparate data sources into clean, understandable data models
    • Evaluate new tools for analytical data engineering or data science
    • Suggest and contribute to training and improvement plans for analytical data engineering skills, standards, and processes

    Qualifications

    • 5+ years of experience with Python and SQL
    • Hands-on experience with AWS services (API Gateway, Kinesis, Athena, RDS, Aurora)
    • Proven experience building ETL pipelines for analytics/internal operations
    • Experience developing and integrating APIs
    • Solid understanding of Linux OS
    • Familiarity with distributed applications and DevOps tools
    • Strong troubleshooting/debugging skills
    • English level: Upper-Intermediate

    Will be a plus:

    • 2+ years with Hadoop, Spark, or Airflow
    • Experience with DAGs/orchestration tools
    • Experience with Snowflake-based data warehouses
    • Experience developing event-driven data pipelines
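
    As a flavor of the orchestration experience listed above, a minimal Airflow DAG sketch; the DAG id, dataset, and task bodies are placeholders:

    # Minimal Airflow 2.x DAG; names and task logic are hypothetical.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull rows from the source API")   # placeholder

    def transform():
        print("clean and join the staged data")  # placeholder

    with DAG(
        dag_id="daily_sales_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_extract >> t_transform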

    Personal Profile:

    • Passion for data processing and continuous learning
    • Strong problem-solving skills and analytical thinking
    • Ability to mentor and guide team members
    • Effective communication and collaboration skills
  • · 43 views · 1 application · 15d

    Senior Data Scientist

    Full Remote · Ukraine · 4 years of experience · English - B2

    WE ARE

    SoftServe is a global digital solutions company, headquartered in Austin, Texas, and founded in 1993.

    With 2,000+ active projects across the USA, Europe, APAC, and LATAM, we deliver meaningful outcomes through bold thinking and deep expertise. Our people create impactful solutions, drive innovation, and genuinely enjoy what they do.

    The AI and Data Science Center of Excellence (CoE) is SoftServe's premier AI/ML hub, primarily based in Europe. With 130+ experts, including data scientists, research analysts, MLOps engineers, and ML and LLM architects, we cover the full AI lifecycle, from problem framing to deployment.

    In 2024, we delivered 150+ AI projects, including over 100 focused on Generative AI, combining scale with measurable impact.

    We are a 2024 NVIDIA Service Delivery Partner and maintain strong collaborations with Google Cloud, Amazon, and Microsoft, ensuring our teams always work with cutting-edge tools and technologies.

    We also lead the Gen AI Lab, our internal innovation engine focused on applied research and cross-functional collaboration in Generative AI.

    In 2025, a key area of innovation is Agentic AI: designing and deploying autonomous, collaborative agent systems capable of addressing complex, real-world challenges at scale, both for our clients and internally.


    IF YOU ARE

    • Experienced in Generative AI and natural language processing (NLP), working with large-scale transformer models and generative pre-trained LLMs like GPT-4, Claude, and Gemini
    • Knowledgeable about the latest advancements in diffusion models and other generative frameworks for text and image generation
    • Adept at applying advanced deep learning techniques to practical use cases
    • Well-versed in emerging trends and breakthroughs in machine learning, deep learning, and NLP, with a strong focus on their real-world applications
    • Proficient in working with state-of-the-art pre-trained language models like GPT-4 and BERT, including fine-tuning for specialized tasks
    • Aware of the software development lifecycle for AI projects and the operationalization of machine learning models
    • Experienced in deploying AI solutions on major cloud platforms
    • Hands-on with Python and deep learning frameworks such as TensorFlow or PyTorch
    • Skilled in interpersonal communication, analytical reasoning, and complex problem-solving
    • Capable of translating technical concepts into clear, concise insights that non-technical audiences can easily grasp
    • Proficient in business communication in English at an upper-intermediate level
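
    For context on the fine-tuning mentioned above, a compact sketch using the Hugging Face Trainer API; the model, dataset, and hyperparameters are stand-ins, not project specifics:

    # Sketch: fine-tune a small pre-trained transformer for classification.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    dataset = load_dataset("imdb")  # stand-in corpus
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length")

    tokenized = dataset.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    )
    trainer.train()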
       

    AND YOU WANT TO

    • Work with the full stack of data analysis, deep learning, and machine learning model pipeline that includes deep analysis of customer data, modeling, and deployment in production
    • Choose relevant computational tools for study, experiment, or trial research objectives
    • Drive the development of innovative solutions for language generation, text synthesis, and creative content generation using the latest state-of-the-art techniques
    • Develop and implement advanced Generative AI solutions such as intelligent assistants, Retrieval-Augmented Generation (RAG) systems, and other innovative applications
    • Produce clear, concise, well-organized, and error-free computer programs with the appropriate technological stack
    • Present results directly to stakeholders and gather business requirements
    • Develop expertise in state-of-the-art Generative AI techniques and methodologies
    • Grow your skill set within a dynamic and supportive environment
    • Work with Big Data solutions and advanced data tools in cloud platforms
    • Build and operationalize ML models, including data manipulation, experiment design, developing analysis plans, and generating insights
    • Lead teams of data scientists and software engineers to successful project execution
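
    For a flavor of the RAG systems mentioned in the list above, a bare-bones retrieval step; the documents, model name, and prompt format are illustrative and assume the sentence-transformers package:

    # Toy RAG retrieval: embed documents, pick the closest ones for the prompt.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "Refunds are processed within 14 days.",
        "Premium accounts include priority support.",
        "Data is encrypted at rest and in transit.",
    ]
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    def retrieve(query, k=2):
        """Return the k documents most similar to the query (cosine similarity)."""
        q = model.encode([query], normalize_embeddings=True)[0]
        top = np.argsort(doc_vecs @ q)[::-1][:k]
        return [docs[i] for i in top]

    question = "How long do refunds take?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # `prompt` is then sent to the generator LLM of choice.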


    TOGETHER WE WILL

    • Be part of a team that's shaping the future of AI and data science through innovation and shared growth
    • Advance the frontier of Agentic AI by shaping intelligent multi-agent ecosystems that drive autonomy, scalability, and measurable business value
    • Access world-class training and cutting-edge research, and collaborate with top industry partners
    • Maintain a synergy of Data Scientists, DevOps team, and ML Engineers to build infrastructure, set up processes, productize machine learning pipelines, and integrate them into existing business environments
    • Communicate with world-leading companies from our client portfolio
    • Enjoy the opportunity to work with the latest modern tools and technologies on various projects
    • Participate in international events and get certifications in cutting-edge technologies
    • Have access to powerful educational and mentorship programs
    • Revolutionize the software industry and drive innovation in adaptive self-learning technologies by leveraging multidisciplinary expertise
  • · 39 views · 12 applications · 16d

    Senior Engineer ML/AI (IRC286207)

    Full Remote · Croatia, Poland, Romania, Slovakia, Ukraine · 5 years of experience · English - B2

    Job Description

    Technical Requirements:

    • Experience: 5+ years in Applied ML/AI with a history of production-scale impact.
    • Cloud: Hands-on AWS proficiency and MLOps expertise.
    • Advantageous: Experience in Search, Personalization, or Dynamic Pricing.

    Job Responsibilities

    Core Responsibilities:

    • Design: Build scalable pipelines for LLM, RAG, and Generative AI systems.
    • Execute: Own the model lifecycle (Deployment → Monitoring → Optimization).
    • Strategy definition: Align AI capabilities with retail/e-commerce business goals.
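
    As one concrete example of the Monitoring stage in that lifecycle, a small feature-drift check via the Population Stability Index; the data and the 0.2 threshold are illustrative conventions, not project specifics:

    # Population Stability Index (PSI): higher value = more drift.
    import numpy as np

    def psi(expected, actual, bins=10):
        """Compare a production sample against the training-time distribution."""
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf            # cover the full range
        e_pct = np.histogram(expected, edges)[0] / len(expected)
        a_pct = np.histogram(actual, edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 50_000)   # distribution at training time
    live_scores = rng.normal(0.3, 1.1, 50_000)    # distribution in production
    drift = psi(train_scores, live_scores)
    print(f"PSI={drift:.3f}", "retrain" if drift > 0.2 else "ok")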

    Department/Project Description

    Senior Applied AI Developer
    50% Dev | 30% Strategy | 20% Research

    A premier global multi-brand retailer known for its unique blend of high-end fashion and editorial content. With a massive digital footprint of 100M+ monthly interactions, this technology-driven platform is a leader in the luxury e-commerce space, focusing on menswear, womenswear, and innovative lifestyle categories.

    Skill Category

    AI/ML

    Keyskills - Must Have

    • Machine Learning
    • Pipeline Execution/ETL
    • AI
    • Python
    • Data Warehousing
    • Cloud Platform
    • AWS
  • · 46 views · 1 application · 16d

    Data Engineer (Relocation to Spain)

    Office Work · Spain · Product · 3 years of experience · English - None

    Do you know that your professional skills can ensure the liquidity of a cryptocurrency exchange?
    We are looking for a Data Engineer with ETL/ELT for the Spanish office of the most famous Ukrainian company.

    Working with big data, a strong team, assistance with family relocation, and top conditions.

     

    Main Responsibilities

    — Design, build, and maintain scalable and resilient data pipelines (batch and real-time)
    — Develop and support data lake/data warehouse architectures
    — Integrate internal and external data sources/APIs into unified data systems
    — Ensure data quality, observability, and monitoring of pipelines
    — Collaborate with backend and DevOps engineers on infrastructure and deployment
    — Optimize query performance and data processing latency across systems
    — Maintain documentation and contribute to internal data engineering standards
    — Implement data access layers and provide well-structured data for downstream teams

     

    Mandatory Requirements

    — 3+ years of experience as a Data Engineer in high-load or data-driven environments
    — Proficient in Python for data processing and automation (pandas, pyarrow, sqlalchemy, etc.)
    — Advanced knowledge of SQL: query optimization, indexes, partitions, materialized views
    — Hands-on experience with ETL/ELT orchestration tools (e.g., Airflow, Prefect)
    — Experience with streaming technologies (e.g., Kafka, Flink, Spark Streaming)
    — Solid background in data warehouse solutions: ClickHouse, BigQuery, Redshift, or Snowflake
    — Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code principles
    — Experience with containerization and deployment tools (e.g., Docker, Kubernetes, CI/CD)
    — Understanding of data modeling, data versioning, and schema evolution (e.g., dbt, Avro, Parquet)
    — English at least at an intermediate level (for documentation and communication with tech teams)
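
    To give a flavor of the streaming requirements above, a toy consumer loop using kafka-python; the topic, brokers, and warehouse sink are invented:

    # Micro-batching consumer sketch: read, buffer, flush, then commit.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "trades",                                   # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
        enable_auto_commit=False,
    )

    batch = []
    for msg in consumer:
        batch.append(msg.value)
        if len(batch) >= 500:                       # micro-batch before loading
            # load_to_warehouse(batch)              # hypothetical ClickHouse/BigQuery insert
            print(f"flushing {len(batch)} rows")
            consumer.commit()                       # commit only after a safe flush
            batch.clear()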

     

    We offer

    Immerse yourself in Crypto & Web3:
    — Master cutting-edge technologies and become an expert in the most innovative industry.
    Work with the Fintech of the Future:
    — Develop your skills in digital finance and shape the global market.
    Take Your Professionalism to the Next Level:
    — Gain unique experience and be part of global transformations.
    Drive Innovations:
    — Influence the industry and contribute to groundbreaking solutions.
    Join a Strong Team:
    — Collaborate with top experts worldwide and grow alongside the best.
    Work-Life Balance & Well-being:
    — Modern equipment.
    — Comfortable working conditions and an inspiring environment to help you thrive.
    — 30 calendar days of paid leave.
    — Additional days off for national holidays.

     

    With us, you'll dive into the world of unique blockchain technologies, reshape the crypto landscape, and become an innovator in your field. If you're ready to take on challenges and join our dynamic team, apply now and start a new chapter in your career!
     

  • · 40 views · 1 application · 16d

    Data Engineer (with Azure)

    Full Remote · EU · 3 years of experience · English - B1

    Main Responsibilities:

    The Data Engineer helps select, deploy, and manage the systems and infrastructure required for a data processing pipeline that supports customer requirements.

     

    You will work on cutting-edge cloud technologies, including Microsoft Fabric, Azure Synapse Analytics, Apache Spark, Data Lake, Databricks, Data Factory, Cosmos DB, HDInsight, Stream Analytics, and Event Grid, on implementation projects for corporate clients across the EU, the CIS, the United Kingdom, and the Middle East.

    Our ideal candidate is a professional who is passionate about technology, curious, and self-motivated.

     

    Responsibilities revolve around DevOps and include implementing ETL pipelines, monitoring and maintaining data pipeline performance, and model optimization.

     

    Mandatory Requirements:

    – 3+ years of experience, ideally in a Data Engineer role

    – Understanding of data modeling, data warehousing concepts, and ETL processes

    – 2+ years of experience with Azure Cloud technologies

    – Experience with distributed computing principles and familiarity with key architectures; broad experience across a set of data stores (Azure Data Lake Store, Azure Synapse Analytics, Apache Spark, Azure Data Factory)

    – Understanding of landing and staging areas, data cleansing, data profiling, data security, and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Data Mart)

    – SQL skills

    – Communication and interpersonal skills

    – English: B2
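
    As an illustration of the data-stack experience listed above, a minimal PySpark batch transform; the storage paths and column names are invented:

    # Skeleton batch ETL on Azure storage: read, clean, write partitioned Delta.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    raw = spark.read.parquet("abfss://landing@account.dfs.core.windows.net/orders/")

    clean = (raw
             .dropDuplicates(["order_id"])
             .withColumn("order_date", F.to_date("order_ts"))
             .filter(F.col("amount") > 0))

    (clean.write
          .format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .save("abfss://curated@account.dfs.core.windows.net/orders/"))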

     

    It will be beneficial if a candidate has experience in SQL migration from on-premises to cloud, data modernization and migration, or advanced analytics projects, and/or a professional certification in data & analytics.

     

    We offer:

    – professional growth and international certification

    – free technical and business training and top bootcamps (worldwide, including courses at Microsoft HQ in Redmond)

    – innovative data & analytics projects and practical experience with cutting-edge Azure data & analytics technologies on various customers' projects

    – great compensation and individual bonus remuneration

    – medical insurance

    – long-term employment

    – individual development plan

  • · 34 views · 0 applications · 16d

    Senior Data Engineer

    Ukraine · 4 years of experience · English - B2

    We are a global audience and location intelligence company that helps marketers connect the digital and physical worlds. We provide data-driven solutions that enhance marketing campaigns by leveraging location and audience data to reveal consumer behavior and enable more precise targeting and measurement. We work on high-end, high-performance, high-throughput systems for timely analysis of data for autonomous driving and other big data applications, e.g., e-commerce.


    Job Description

    You have 4+ years of experience in a similar position.

    You have significant experience with Python. Familiarity with Java or Scala is a plus.

    Hands-on experience building scalable solutions in AWS.

    Proficiency in NoSQL and SQL databases and in high-throughput data-related architecture and technologies (e.g. Kafka, Spark, Hadoop, MongoDB, AWS Batch, AWS Glue, Athena, Airflow, dbt).

    Excellent SQL and data transformation skills.

    Excellent written and verbal communication skills with an ability to simplify complex technical information.

    Experience guiding and mentoring junior team members in a collaborative environment.


     

    Job Responsibilities

    You will work in a self-organised agile team with a high level of autonomy and actively shape your team's culture.

    Design, build, and standardise privacy-first big data architectures, large-scale data pipelines, and advanced analytics solutions in AWS.

    Develop complex integrations with third-party partners, transferring terabytes of data.

    Align with other Data experts on data (analytics) engineering best practices and standards, and introduce those standards and data engineering expertise to the team in order to enhance existing data pipelines and build new ones.

    Successfully partner up with the Product team to constantly develop further and improve our platform features.

  • · 46 views · 7 applications · 16d

    Senior Data Engineer

    Full Remote · Countries of Europe or Ukraine · 4 years of experience · English - B1

    GlobalLogic is searching for a motivated, results-driven, and innovative software engineer to join our project team at a dynamic startup specializing in pet insurance. Our client is a leading global holding company that is dedicated to developing an advanced pet insurance claims clearing solution designed to expedite and simplify the veterinary invoice reimbursement process for pet owners.
    You will be working on a cutting-edge system built from scratch, leveraging Azure cloud services and adopting a low-code paradigm. The project adheres to industry best practices in quality assurance and project management, aiming to deliver exceptional results.
    We are looking for an engineer who thrives in collaborative, supportive environments and is passionate about making a meaningful impact on people's lives. If you are enthusiastic about building innovative solutions and contributing to a cause that matters, this role could be an excellent fit for you.



    Requirements

    • Strong hands-on experience with Azure Databricks (DLT Pipelines, Lakeflow Connect, Delta Live Tables, Unity Catalog, Time Travel, Delta Share) for large-scale data processing and analytics
    • Proficiency in data engineering with Apache Spark, using PySpark, Scala, or Java for data ingestion, transformation, and processing
    • Proven expertise in the Azure data ecosystem: Databricks, ADLS Gen2, Azure SQL, Azure Blob Storage, Azure Key Vault, Azure Service Bus/Event Hub, Azure Functions, Azure Data Factory, and Azure CosmosDB
    • Solid understanding of Lakehouse architecture, Modern Data Warehousing, and Delta Lake concepts
    • Experience designing and maintaining config-driven ETL/ELT pipelines with support for Change Data Capture (CDC) and event/stream-based processing
    • Proficiency with RDBMS (MS SQL, MySQL, PostgreSQL) and NoSQL databases
    • Strong understanding of data modeling, schema design, and database performance optimization
    • Practical experience working with various file formats, including JSON, Parquet, and ORC
    • Familiarity with machine learning and AI integration within the data platform context
    • Hands-on experience building and maintaining CI/CD pipelines (Azure DevOps, GitLab) and automating data workflow deployments
    • Solid understanding of data governance, lineage, and cloud security (Unity Catalog, encryption, access control)
    • Strong analytical and problem-solving skills with attention to detail
    • Excellent teamwork and communication skills
    • Upper-Intermediate English (spoken and written)

    Job responsibilities

    • Design, implement, and optimize scalable and reliable data pipelines using Databricks, Spark, and Azure data services
    • Develop and maintain config-driven ETL/ELT solutions for both batch and streaming data
    • Ensure data governance, lineage, and compliance using Unity Catalog and Azure Key Vault
    • Work with Delta tables, Delta Lake, and Lakehouse architecture to ensure efficient, reliable, and performant data processing
    • Collaborate with developers, analysts, and data scientists to deliver trusted datasets for reporting, analytics, and machine learning use cases
    • Integrate data pipelines with event-based and microservice architectures leveraging Service Bus, Event Hub, and Functions
    • Design and maintain data models and schemas optimized for analytical and operational workloads
    • Identify and resolve performance bottlenecks, ensuring cost efficiency and maintainability of data workflows
    • Participate in architecture discussions, backlog refinement, estimation, and sprint planning
    • Contribute to defining and maintaining best practices, coding standards, and quality guidelines for data engineering
    • Perform code reviews, provide technical mentorship, and foster knowledge sharing within the team
    • Continuously evaluate and enhance data engineering tools, frameworks, and processes in the Azure environment
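
    To illustrate the config-driven ETL/ELT idea that appears in both lists above, a miniature sketch in which pipeline steps are data rather than code; the config format, operations, and paths are invented:

    # Config-driven pipeline in miniature: steps come from a config object.
    from pyspark.sql import DataFrame

    CONFIG = {
        "source": "/mnt/bronze/claims",
        "steps": [
            {"op": "dedupe", "keys": ["claim_id"]},
            {"op": "rename", "mapping": {"amt": "amount"}},
            {"op": "filter", "expr": "amount > 0"},
        ],
        "target": "/mnt/silver/claims",
    }

    def apply_step(df: DataFrame, step: dict) -> DataFrame:
        if step["op"] == "dedupe":
            return df.dropDuplicates(step["keys"])
        if step["op"] == "rename":
            for old, new in step["mapping"].items():
                df = df.withColumnRenamed(old, new)
            return df
        if step["op"] == "filter":
            return df.filter(step["expr"])
        raise ValueError(f"unknown op: {step['op']}")

    def run(spark):
        df = spark.read.format("delta").load(CONFIG["source"])
        for step in CONFIG["steps"]:
            df = apply_step(df, step)
        df.write.format("delta").mode("overwrite").save(CONFIG["target"])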
  • · 44 views · 3 applications · 16d

    Data Engineer (with Snowflake and insurance domain experience)

    Full Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2

    Senior Data Engineer with experience in insurance & Snowflake.
    On behalf of our Client from the USA, Mobilunity is looking for a Senior Data Engineer.
    Our client is a well-established US-based financial services organization with a long history in the insurance and long-term financial security space. The company operates as a member-oriented, non-profit institution focusing on life insurance, retirement programs, and community-driven initiatives. The product ecosystem is mature, data-heavy, and highly regulated, with a strong emphasis on reliability, accuracy, and compliance.
    We are looking for a Senior Data Engineer with deep Snowflake expertise and hands-on experience in insurance companies.

    This domain background is a must-have requirement.
     

    Requirements:
    - 5+ years of experience as a Data Engineer.
    - Strong hands-on experience with Snowflake (data modeling, optimization, ELT/ETL pipelines).
    - Mandatory experience in the insurance domain (policies, claims, underwriting, actuarial, or related datasets).
    - Solid experience with data warehousing and analytical platforms.
    - Proven ability to build and maintain scalable, reliable data pipelines.
    - Advanced SQL skills.
    - Experience working with cloud platforms (AWS / GCP / Azure).
    - Upper-Intermediate or Advanced English for direct communication with US stakeholders.
     

    Nice to have:
    - Experience with dbt, Airflow, or similar orchestration tools.
    - Background in regulated or compliance-heavy environments.
    - Previous experience working with US clients or distributed teams.
     

    In return we offer
    The friendliest community of like-minded IT people.
    An open knowledge-sharing environment: exclusive access to a rich pool of colleagues willing to share their insights into a broad variety of modern technologies.


    Languages
    English - B2-C1

  • · 68 views · 7 applications · 16d

    Senior Backend Developer (AI agents expertise and VPN projects)

    Full Remote · Worldwide · 8 years of experience · English - B1
    • We're seeking a highly experienced Senior Back-end Developer with deep expertise in JavaScript/TypeScript, particularly Node.js, combined with practical experience in AI prompt engineering. You'll architect, build, and optimize scalable applications that seamlessly integrate AI-assisted functionality into modern backends.
    • Prior experience with VPN services and network protocols.

     

    Requirements

     

    Position name: Senior Node.js Developer (with AI agents experience) for VPN Mobile App
    Level: Senior

    Hard skills:

    • 8+ years of Software Engineering experience
    • Strong experience with Node.js (Express.js, Koa.js, Nest.js)
    • Strong Software Design knowledge: OOD, FP, Design patterns
    • Experience building and maintaining VPN, proxy, and secure networking backend systems
    • Solid understanding of network protocols, tunneling, traffic routing, and encryption standards
    • Practical hands-on experience with AI prompt engineering, leveraging ArgumentCode, ClaudeCode, Cursor, or equivalent developer-AI tools.
    • Deep understanding of RESTful and GraphQL APIs, modern web performance, and security.
    • Solid understanding of LLM context design, tool integration, and code-generation workflows.
    • Excellent knowledge of modern systems architecture components including microservices, event sourcing, and distributed data processing systems
    • Advanced experience with SQL databases, especially PostgreSQL
    • Experience with Docker & CI/CD processes
    • Experience with AWS (or similar services like GCP or Azure)
    • Experience with Serverless framework, AWS Lambdas, AWS CloudFormation
    • Knowledge of Agile (preferably SCRUM) development practices
  • · 54 views · 6 applications · 16d

    Data Analyst

    Full Remote · EU · 3 years of experience · English - B2

    Key Responsibilities

    • Develop, build, and maintain Adobe Analytics workspaces and dashboards covering complex end-to-end digital journeys.
    • Use Adobe Report Builder to create automated, reliable, and scalable reports for cross-functional teams.
    • Analyse digital performance trends and translate complex datasets into clear insights and recommendations for senior leadership.
    • Perform QA of analytics tracking across new and existing digital journeys to ensure accurate and complete data capture.
    • Troubleshoot data discrepancies and ensure compliance with tagging governance and best practices.
    • Combine data from multiple sources (e.g., Adobe Analytics, CRM systems, operational data) to create unified performance views.
    • Design and deliver clear, concise, and compelling reports tailored for senior stakeholder audiences.
    • Identify key opportunities, risks, and trends to support strategic and tactical decision-making.

    Required Skills & Experience

    • Proven experience as a Digital or Adobe Analytics Analyst.
    • Strong hands-on expertise with Adobe Analytics (Analysis Workspace, segments, calculated metrics).
    • Experience using Adobe Report Builder for automated reporting.
    • Strong understanding of digital analytics tagging, data layers, and governance principles.
    • Demonstrated ability to analyse complex data and communicate insights in a clear, business-focused manner.
    • Experience working with multiple data sources and integrating datasets for holistic analysis.
    • Excellent stakeholder communication skills, including experience presenting to senior leadership.
  • · 38 views · 8 applications · 16d

    Data Scientist (Ukrainian speaker)

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

    Responsibilities:


    • Design, develop and optimize Large Language Models for various NLP tasks such as text generation, summarization, translation, and question-answering
    • Conduct research and experiments to push the boundaries of LLM capabilities and performance
    • Collaborate with cross-functional teams (engineering, product, research) to integrate LLMs into product offerings
    • Develop tools, pipelines and infrastructure to streamline LLM training, deployment and monitoring
    • Analyze and interpret model outputs, investigate errors/anomalies, and implement strategies to improve accuracy
    • Stay current with the latest advancements in LLMs, NLP and machine learning research
    • Communicate complex technical concepts to both technical and non-technical stakeholders
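
    As a small illustration of probing model outputs as described above, a minimal inference harness built on the transformers pipeline API; the model is a tiny stand-in for a production LLM:

    # Quick output-inspection harness; swap in the model under evaluation.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompts = [
        "Summarize: The quarterly report shows revenue grew 12% while",
        "Translate to French: The meeting is postponed until Monday.",
    ]
    for p in prompts:
        out = generator(p, max_new_tokens=40, do_sample=False)[0]["generated_text"]
        print("---")
        print(out)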

    Requirements:


    • MS or PhD degree in Computer Science, Data Science, AI, or a related quantitative field
    • 4+ years of hands-on experience developing and working with deep learning models, especially in NLP/LLMs
    • Expert knowledge of Python, PyTorch, TensorFlow, and common deep learning libraries
    • Strong understanding of language models, attention mechanisms, transformers, sequence-to-sequence modeling
    • Experience training and fine-tuning large language models
    • Proficiency in model deployment, optimization, scaling and serving
    • Excellent problem-solving, analytical and quantitative abilities
    • Strong communication skills to present technical information clearly
    • Ability to work collaboratively in a team environment
    • Fluency in Ukrainian and English

    Preferred:

    • Research experience in LLMs, NLP, machine learning
    • Experience working with multi-modal data (text, image, audio)
    • Knowledge of cloud platforms like AWS, GCP for model training
    • Understanding of MLOps and production ML workflows
    • Background in information retrieval, knowledge graphs, reasoning
  • · 55 views · 5 applications · 16d

    Acumatica Business Analyst

    Full Remote · Poland, Ukraine · 3 years of experience · English - B2

    Sprinterra is a leading provider of Acumatica development services and tech solutions. With one of the largest in-house teams of Acumatica-certified specialists (30+) and thousands of hours of experience in Acumatica, we deliver top-quality services to our customers.

    Requirements

    Must-have

    • 3+ years of Business Analyst experience on ERP / accounting / operations projects (implementation, optimization, or major enhancements).
    • Strong understanding of business process mapping (Order-to-Cash, Procure-to-Pay, Record-to-Report; inventory/manufacturing a plus).
    • Working knowledge of accounting fundamentals (GL, AP, AR, cash management, revenue recognition basics).
    • Proven ability to write clear requirements artifacts:
      • user stories / acceptance criteria
      • functional specs
      • process flows (BPMN or similar)
      • test cases / UAT scripts
    • Experience collaborating with developers and solution architects (translate business needs into functional + data requirements).
    • Comfortable with data analysis:
      • Excel (advanced)
      • ability to interpret datasets, reconcile transactions, validate outcomes
    • Strong stakeholder skills: run workshops, manage scope, prioritize backlog, communicate tradeoffs.
    • Excellent written and verbal English.

    Strongly preferred 

    • Prior work with any ERP: NetSuite, Dynamics 365, SAP, Oracle, Sage, Odoo, etc.
    • Familiarity with reporting concepts: dashboards, financial reporting, management reporting.
    • Understanding of integrations: REST/SOAP APIs, file imports/exports, middleware concepts.
    • Experience with implementation partners/VARs and client-facing delivery.

    Nice-to-have (Acumatica-specific but not required)

    • Acumatica experience (Financials, Distribution, Project Accounting, Construction, etc.).
    • Knowledge of Acumatica concepts: Screens, GIs (Generic Inquiries), Workflows, Role-based access, Import Scenarios.
    • Exposure to Acumatica customization model (high level) and release testing.

    Key responsibilities 

    • Become an expert in Acumatica OOTB functionality and recommend best-fit solutions before custom development.
    • Lead discovery to document as-is processes, define to-be processes, and identify gaps.
    • Translate business needs into efficient customization designs, focusing on simplicity, maintainability, and upgrade-safe patterns.
    • Produce developer-ready specs and validate delivered functionality through UAT and regression checks.
    • Maintain a structured backlog in Jira and ensure clarity of scope, priorities, and acceptance criteria.

    Learning expectation

    • Willingness to complete structured onboarding and become proficient in Acumatica within 60-90 days (navigation, core modules, workflows, reporting, common configuration patterns).
  • · 62 views · 24 applications · 16d

    Senior Machine Learning Engineer

    Full Remote · Countries of Europe or Ukraine · 6 years of experience · English - B2

    About the Role:

     

    We are looking for a Senior Machine Learning Engineer to lead the development, deployment, and operationalization of advanced AI and machine learning solutions across two high-impact initiatives:

    • Automated Requirements Engineering Platform powered by Large Language Models (LLMs)
    • Supply Chain Intelligence Platform with predictive risk scoring and demand forecasting

       

    In this role, you will own the end-to-end machine learning lifecycle, from system architecture and data pipelines to model training, optimization, and production deployment. You will work at the intersection of generative AI and classical machine learning, delivering models that are not only accurate but also robust, explainable, and production-ready.

     

    The environment follows a structured Sprint Zero → Stage Gate delivery model and operates under strict defense-grade security and compliance requirements, making this role ideal for engineers who value engineering rigor and real-world impact.

     

    👉 Key Responsibilities:

     

    1. LLM & NLP Pipelines

    • Design and fine-tune LLM-based pipelines to parse and interpret complex regulatory and technical documentation (e.g. military standards, building codes);
    • Transform unstructured natural language requirements into machine-executable formats (e.g. logic tuples, structured rules);
    • Implement Retrieval-Augmented Generation (RAG) architectures for semantic search across technical documents and historical project data;
    • Optimize prompt engineering strategies (few-shot learning, chain-of-thought, prompt templates) to improve domain-specific performance with minimal retraining.
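
    To make the prompt-engineering and logic-tuple ideas above concrete, a small few-shot prompt builder; the example requirements and the tuple schema are invented:

    # Few-shot prompt construction for requirement-to-tuple extraction.
    FEW_SHOT = """\
    Requirement: "The door shall withstand 150 kg of force."
    Tuple: (subject="door", constraint="withstand_force", value=150, unit="kg")

    Requirement: "Response time must not exceed 200 ms."
    Tuple: (subject="response_time", constraint="max", value=200, unit="ms")
    """

    def build_prompt(requirement: str) -> str:
        return (
            "Convert each requirement into a logic tuple.\n\n"
            + FEW_SHOT
            + f'\nRequirement: "{requirement}"\nTuple:'
        )

    print(build_prompt("The shelter shall operate between -40 and +50 degrees C."))
    # The returned string is what gets sent to the fine-tuned or hosted LLM.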

       

    2. Predictive & Analytical Models (Supply Chain)

     

    • Develop time-series forecasting models for material demand, cost trends, and spend categories;
    • Build risk scoring, classification, and anomaly detection models to evaluate supplier reliability and exposure (financial, operational, geopolitical);
    • Design multi-objective optimization algorithms to balance cost, lead time, and risk in procurement decision-making.
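
    As a minimal illustration of the forecasting work above, one-step-ahead simple exponential smoothing in plain NumPy; the demand series and smoothing factor are made up:

    # Simple exponential smoothing: next-period point forecast.
    import numpy as np

    demand = np.array([120, 130, 125, 160, 158, 170, 165, 180], dtype=float)

    def ses_forecast(y, alpha=0.4):
        """Smooth the series; the final level is the one-step-ahead forecast."""
        level = y[0]
        for obs in y[1:]:
            level = alpha * obs + (1 - alpha) * level
        return level

    print(f"next-period demand forecast: {ses_forecast(demand):.1f}")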

       

    3. MLOps & Productionization

     

    • Containerize and deploy models using Docker and Kubernetes into secure, on-premise inference environments;
    • Build reproducible training and inference pipelines using tools such as MLflow, Kubeflow, or similar;
    • Optimize inference performance through quantization, distillation, and efficient model architectures;
    • Implement monitoring and retraining workflows to detect model drift and ensure long-term performance in production.
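
    To illustrate the quantization mentioned above, a post-training dynamic-quantization sketch in PyTorch; the model here is a toy, not a project architecture:

    # Dynamic quantization: Linear weights go to int8 for faster CPU inference.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # same interface, smaller weights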

       

    👉 Technical Requirements:

     

    • Python expertise and strong hands-on experience with ML frameworks: PyTorch, TensorFlow, Scikit-learn, Pandas, NumPy;
    • Deep understanding of NLP and Generative AI, including transformer architectures (BERT, GPT, LLaMA);
    • Experience with Hugging Face, LangChain, or similar NLP/LLM frameworks;
    • Solid MLOps experience, including Docker, Kubernetes, experiment tracking, and CI/CD for ML;
    • Ability to design data pipelines for structured data (SQL, tabular) and unstructured data (text, PDFs);
    • Strong algorithmic thinking with experience implementing custom logic (e.g. graph traversal, optimization, geometric or rule-based computations).

       

    👉 Professional Qualifications:

     

    • 5+ years of experience in Machine Learning Engineering with production-grade deployments;
    • Proven ability to adapt ML solutions to complex, highly regulated domains (e.g. defense, supply chain, construction, engineering);
    • Experience working in agile delivery models, while maintaining strict engineering standards and documentation discipline;
    • Strong collaboration and communication skills, with the ability to work closely with Data Scientists, Backend Engineers, and Domain Experts.

       

    👉 What we offer:

     

    📈 Professional Growth opportunities:

    • Ambitious goals and interesting projects;
    • Regular & transparent performance review and feedback process;
    • Possibility of both vertical and horizontal growth (in case you want to try a different path).

       

    😌 Comfortable working conditions

    • Flexible working hours;
    • Provision of required equipment;
    • Remote working model.

       

    🎁 Benefits program

    • 20 working days of fully paid vacation;
    • Free tax reporting support by our Financial department;
    • Help with individual entrepreneurs' questions and accounting support;
    • Financial support and additional days off for various occasions (e.g. marriage, childbirth, etc.).
  • · 7 views · 0 applications · 16d

    Infrastructure Engineer with Java (hybrid work in Warsaw)

    Office Work · Poland · 5 years of experience · English - B2

    The product we are working on is one of the TOP-3 navigation systems, along with complex web services and other solutions related to it. The web and mobile apps handle information at a massive scale and extend well beyond search, giving people and companies many new, useful options.

    This role focuses on executing critical migration projects within the backend infrastructure of the project. The Backend Infrastructure team is undertaking several large-scale migrations to modernize its systems, improve reliability, and reduce maintenance overhead. This TVC position will be instrumental in performing the hands-on work required for these migrations, working closely with the infrastructure team and other Backend teams.
     

    Responsibilities:
     

    • Execute Migrations: Actively participate in and drive the execution of large-scale code and system migrations across various backend services. Some examples include:
      • Migrating event processing systems from custom infrastructure to managed infrastructure solutions;
      • Transitioning services from custom OpenCensus metrics collection to OpenTelemetry;
      • Migrating custom metrics to standard OpenTelemetry metrics.
    • Code Modification and Updates: Update and refactor existing codebases (primarily Java) to align with new libraries, platforms, and infrastructure.
    • Testing: Work with the Infrastructure team to create a testing plan for migrations to ensure that changes do not break running services and execute the test plans.
    • Collaboration: Work closely with the Backend Infrastructure team and other software engineers to understand migration requirements, plan execution strategies, and ensure smooth transitions with minimal disruption.
    • Problem Solving: Investigate, debug, and resolve technical issues and complexities encountered during the migration processes.
    • Documentation: Maintain clear and concise documentation for migration plans, processes, changes made, and outcomes.
    • Best Practices: Adhere to software development best practices, ensuring code quality, and follow established guidelines for infrastructure changes.
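
    For a flavor of the OpenTelemetry migrations described in the responsibilities above, a minimal custom-metric sketch, shown in Python for brevity (the Java API is analogous); the meter and counter names are invented:

    # OpenTelemetry counter: attributes replace ad-hoc custom metric tags.
    from opentelemetry import metrics

    meter = metrics.get_meter("backend.migration.demo")
    requests_counter = meter.create_counter(
        "requests_total", description="Requests handled, by route"
    )

    def handle(route: str):
        requests_counter.add(1, {"route": route})  # record one request

    handle("/search")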

       

    Requirements:

    • 5+ years of hands-on experience in backend software development.
    • Strong proficiency in Java programming.
    • Strong communication and interpersonal skills, with the ability to collaborate effectively within a technical team environment.
    • Bachelor's degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
    • Good spoken and written English: Upper-Intermediate or higher.
       

    Nice to have:

    • Experience with observability frameworks such as OpenTelemetry or OpenCensus.
    • Familiarity with gRPC.
    • Knowledge of Google Cloud Platform (GCP) services, particularly data processing services like Dataflow.
       

    We offer:

    • Opportunities to develop in various areas;
    • Compensation package (20 paid vacation days, paid sick leaves);
    • Flexible working hours;
    • Medical insurance;
    • English courses with a native speaker, yoga (Zoom);
    • Paid tech training and other activities for professional growth;
    • Hybrid work mode (~3 days in the office);
    • International business trips;
    • Comfortable office.

       

    If your qualifications and experience match the requirements of the position, our recruitment team will reach out to you within a week at most. Please rest assured that we carefully consider each candidate, but due to the number of applications, the review and further processing of your candidacy may take some time.
