Bennett Data Science

Joined in 2020
WHO WE ARE
At Bennett Data Science, we’ve been pioneering the use of predictive analytics and data science for over ten years for some of the biggest brands and retailers. We’re at the top of our field because we focus on making our clients’ products better. Our deep experience and product-first attitude set us apart from other groups and get us the business results our clients want.

WHY YOU SHOULD WORK WITH US
You’ll be exposed to a wide range of clients at the cutting edge of innovation in their fields and get to work on fascinating problems, supporting real products with real data. We help companies of all sizes, from some of the largest in the world to small Silicon Valley startups building the next big thing. Your perks include expert mentorship from senior staff, competitive compensation, a flexible work schedule, and the ability to work from any location of your choice.

    Senior Data Scientist

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B2

    Who We Are

    At Bennett Data Science, we've been pioneering the use of predictive analytics and data science for over a decade for some of the biggest brands and retailers. We're at the top of our field because we focus on delivering actionable AI for our clients. Our deep experience and product-first attitude set us apart from other groups and get us the business results our clients want.

     

    Why You Should Work With Us

    You'll be exposed to a wide range of clients at the cutting edge of innovation in their fields and get to work on fascinating problems, supporting real products with real data. We help companies of all sizes, from some of the largest in the world to small Silicon Valley startups building the next big thing.

     

    Expert Mentorship: Direct guidance from senior staff with 20+ years of applied ML experience

    Competitive Compensation: Market-rate pay with performance upside

    Fully Remote: Work from any location of your choice, on a flexible schedule

    Real Impact: Your models go into production and serve real users

     

    The Role

    As a Senior Data Scientist, you will lead the design, development, and deployment of production machine learning models. You will work directly with stakeholders to understand their business problems, translate them into data science solutions, and deliver measurable impact. You will also mentor junior team members and drive technical excellence across engagements.

     

    You'll own projects end-to-end, from exploratory analysis and model development through deployment, monitoring, and iteration alongside senior data engineers. This is a hands-on, individual contributor role at a senior level. Client-facing communication is part of the job.

     

    Requirements

    A successful candidate has 5+ years of experience in applied data science and machine learning, a strong statistical foundation, and demonstrates the following:

    • Production ML experience: building, deploying, and maintaining models that serve real users at scale
    • Strong Python skills including scikit-learn, pandas, NumPy, and at least one deep learning framework (PyTorch or TensorFlow)
    • Proven experience building predictive scoring, classification, or ranking models deployed to production at scale
    • Proficiency in SQL and comfort working with large-scale data warehouses (Redshift, Snowflake, BigQuery)
    • Experience deploying RAG-based chatbots, including techniques for reducing hallucinations, and a strong understanding of the trade-offs across the various approaches
    • Solid statistical foundation: hypothesis testing, distributions, probability, experimental design
    • Experience communicating findings and model recommendations to non-technical stakeholders
    • Comfort working independently across multiple projects simultaneously
    • English proficiency at B2 or above (written and spoken)
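    As a rough, hypothetical illustration of the statistical foundation listed above (hypothesis testing, experimental design), a two-sample permutation test can be sketched in plain Python; the data below is invented:

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns an approximate p-value: the fraction of random
    relabelings whose absolute mean difference is at least as
    extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Example: made-up control vs. treatment metrics from an A/B test
control = [0.42, 0.51, 0.39, 0.47, 0.44, 0.40]
treatment = [0.55, 0.61, 0.50, 0.58, 0.54, 0.59]
p = permutation_test(control, treatment)
```

    In practice a library routine (e.g. from SciPy) would be used, but the resampling logic above is the core idea behind the test.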

    Nice to Have

    • Experience applying LLMs or transformer-based NLP for structured text classification, information extraction, or embedding-based retrieval
    • Geospatial feature engineering — location-based statistics, spatial indexing, or OpenSearch for proximity scoring
    • Experience with Vision-Language Models (VLMs) or satellite/aerial imagery analysis
    • Familiarity with MLOps tooling — experiment tracking (MLflow, W&B), model registries, CI/CD for ML
    • Experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI)
    • Exposure to utilities, energy, infrastructure, or enterprise SaaS domains
    • Experience fine-tuning or adapting pre-trained models (LoRA, PEFT, or full fine-tune)

     

    Role Expectations

    • This is a production role. We build systems that run in the real world.

    Data Platform Engineer

    Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B1

    WHO WE ARE

    At Bennett Data Science, we’ve been pioneering the use of predictive analytics and data science for over ten years for some of the biggest brands and retailers. We’re at the top of our field because we focus on actionable technology that helps people around the world. Our deep experience and product-first attitude set us apart from other groups, and it's why people who work with us tend to stay with us long term.

     

    WHY YOU SHOULD WORK WITH US

    You'll work on an important problem that improves the lives of a lot of people. You'll be at the cutting edge of innovation and get to work on fascinating problems, supporting real products with real data. Your perks include expert mentorship from senior staff, competitive compensation, paid leave, a flexible work schedule, and the ability to travel internationally.

    Essential Requirements for Data Platform Engineer:

    • Architecture & Improvement: Continuously review the current architecture and implement incremental improvements, facilitating a gradual transition of production operations from Data Science to Engineering.
    • AWS Service Ownership: Own the full lifecycle (development, deployment, support, and monitoring) of client-facing AWS services (including SageMaker endpoints, Lambdas, and OpenSearch). Maintain high uptime and adherence to Service Level Agreements (SLAs).
    • ETL Operations Management: Manage all ETL processes, including the operation and maintenance of Step Functions and Batch jobs (scheduling, scaling, retry/timeout logic, failure handling, logging, and metrics).
    • Redshift Operations & Maintenance: Oversee all Redshift operations, focusing on performance optimization, access control, backup/restore readiness, cost management, and general housekeeping.
    • Performance Optimization: Post-stabilization of core monitoring and pipelines, collaborate with the Data Science team on targeted code optimizations to enhance reliability, reduce latency, and lower operational costs.
    • Security & Compliance: Implement and manage the vulnerability monitoring and remediation workflow (Snyk).
    • CI/CD Implementation: Establish and maintain robust Continuous Integration/Continuous Deployment (CI/CD) systems.
    • Infrastructure as Code (Optional): Utilize IaC principles where necessary to ensure repeatable and streamlined release processes.
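    The retry/timeout logic called out above is normally configured declaratively in Step Functions state definitions (MaxAttempts, IntervalSeconds, BackoffRate); as a hypothetical sketch, the same exponential-backoff pattern in plain Python looks like this:

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=1.0, backoff=2.0,
                     retryable=(TimeoutError, ConnectionError), sleep=time.sleep):
    """Run `job`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except retryable:
            if attempt == max_attempts:
                raise  # exhausted: surface to failure handling / alerting
            sleep(base_delay * backoff ** (attempt - 1))

# Example: a flaky job that succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

result = run_with_retries(flaky, sleep=lambda s: None)  # skip real sleeps in the demo
```

    Injecting `sleep` keeps the backoff behavior testable without real delays; Step Functions applies the same parameters per state rather than per process.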


    Mandatory Hard Skills:

    • AWS Core Services: Proven experience with production fundamentals (IAM, CloudWatch, and VPC networking concepts).
    • AWS Deployment: Proficiency in deploying and operating AWS SageMaker and Lambda services.
    • ETL Orchestration: Expertise in using AWS Step Functions and Batch for ETL and job orchestration.
    • Programming & Debugging: Strong command of Python for automation and troubleshooting.
    • Containerization: Competence with Docker/containers (build, run, debug).
    • Version Control & CI/CD: Experience with CI/CD practices and Git (GitHub Actions preferred).
    • Data Platform Tools: Experience with Databricks, or a demonstrated aptitude and willingness to quickly learn.

    Essential Soft Skills:

    • Accountability: Demonstrate complete autonomy and ownership over all assigned systems ("you run it, you fix it, you improve it").
    • Communication: Fluent in English; capable of clear, direct communication, especially during incidents.
    • Prioritization: A focus on delivering a minimally-supportable, deployable solution to meet deadlines, followed by optimization and cleanup.
    • Incident Management: Maintain composure under pressure and possess strong debugging and incident handling abilities.
    • Collaboration: Work effectively with the Data Science team while communicating technical trade-offs clearly and maintaining momentum.

    Data Analyst

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    WHO WE ARE

    At Bennett Data Science, we’ve been pioneering the use of predictive analytics and data science for over ten years for some of the biggest brands and retailers. We’re at the top of our field because we focus on making our clients’ products better. Our deep experience and product-first attitude set us apart from other groups and get us the business results our clients want.

     

    WHY YOU SHOULD WORK WITH US

    You’ll be exposed to a wide range of clients at the cutting edge of innovation in their fields and get to work on fascinating problems, supporting real products with real data. We help companies of all sizes, from some of the largest in the world to small Silicon Valley startups building the next big thing. Your perks include expert mentorship from senior staff, competitive compensation, paid leave, a flexible work schedule, and the ability to travel internationally.

     

    Data Analyst - Specific Requirements:

    As a data analyst, you will be primarily involved in analyzing data and reporting on data distribution and quality issues. You will utilize your analytic skills to gain deeper insights into data and communicate your findings to support various data science initiatives. You will receive direct support from senior data scientists and strengthen your expertise.

     

    A successful candidate has 3-6 years of experience; a working understanding of statistics, data visualization, and communication; some exposure to machine learning; and exhibits the following skills:

     

    - Experience with at least one visualization tool such as Tableau or Power BI

    - Knowledge and experience describing data distributions using statistical methods

    - Experience with metrics such as LTV, CAC, customer analysis, product analysis

    - Experience with initial and some exploratory data analysis (IDA/EDA)

    - Proven ability to identify and propose solutions to various data quality issues

    - Experience defining the data required and querying databases to support defined objectives, including combining data from multiple sources via grouping/aggregation to produce the desired datasets

    - Experience visualizing/presenting data for stakeholders

    - Experience with Python

    - Experience writing SQL queries
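    As a small, hypothetical illustration of the SQL querying and grouping/aggregation skills above (table and column names invented), using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER, segment TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'retail'), (2, 'retail'), (3, 'enterprise');
    INSERT INTO orders VALUES (1, 10.0), (1, 15.0), (2, 20.0), (3, 100.0);
""")

# Combine two sources and aggregate: order count and revenue per segment
rows = conn.execute("""
    SELECT c.segment, COUNT(*) AS n_orders, SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    GROUP BY c.segment
    ORDER BY c.segment
""").fetchall()
```

    The same join-then-aggregate shape applies unchanged on warehouse engines such as Redshift or BigQuery.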


    MLOps Engineer

    Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1

    Hi!

    Thank you for taking some time to look at our requisition. We are a US-based company working on an AI product in the entertainment space. Our app is geared towards children and we are working with major film companies.

    We have a strong, distributed team, mostly in Europe. We're looking for an experienced person to help us with MLOps.

     

    Core MLOps Responsibilities:

    • Model Deployment: Convert ComfyUI workflows to production Python pipelines
    • Infrastructure Management: Multi-provider GPU orchestration (RunPod + future providers)
    • CI/CD for ML: Automated model deployment and rollback systems
    • Monitoring & Observability: Pipeline performance, model drift, and system health
    • Scalability: Serverless GPU management and load balancing
    • Model Lifecycle: Version control and hot-swapping of LoRAs

     

    AI/ML Pipeline (Critical):

    • Deep experience with Diffusion models (Stable Diffusion, Flux)
    • Hands-on ComfyUI to Python conversion experience
    • Computer vision libraries: OpenCV, PIL, torchvision
    • Model inference optimization (batching, memory management)
    • Experience with the Hugging Face diffusers library
    • Experience with ControlNets, LoRA, and inpainting workflows
    • Experience with GroundingDINO, SAM

     

    Backend Development:

    • FastAPI/Python (mid/senior level)
    • Async programming and queue management
    • PostgreSQL/AlloyDB
    • RESTful API design with proper error handling
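    As a hypothetical sketch of the async programming and queue-management pattern listed above (worker names and payloads invented), using Python's asyncio:

```python
import asyncio

async def worker(name, queue, results):
    # Pull jobs off the shared queue until cancelled
    while True:
        job = await queue.get()
        try:
            await asyncio.sleep(0)  # stand-in for a real inference/API call
            results.append((name, job))
        finally:
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    results = []
    for job_id in range(5):
        queue.put_nowait(job_id)
    workers = [asyncio.create_task(worker(f"w{i}", queue, results))
               for i in range(2)]
    await queue.join()  # block until every enqueued job is processed
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    return results

processed = asyncio.run(main())
```

    In a FastAPI service the same queue typically sits behind an endpoint that enqueues work and returns a job ID, with workers running as background tasks.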

     

    DevOps/Infrastructure:

    • Docker containerization
    • Google Cloud Platform (GCS, Cloud Run, Cloud Build)
    • GitHub Actions
    • CI/CD pipeline setup
    • GPU provider platforms (RunPod nice to have)

     

    GPU/Serverless:

    • RunPod API integration (preferred) or other GPU providers
    • GPU memory optimization
    • Cold start minimization strategies
    • Multi-provider orchestration patterns

     

    Monitoring & Observability:

    • Custom metrics for ML pipelines
    • Performance monitoring and alerting
    • Integration with data warehouse systems

     

    Nice-to-Have:

    • Previous work with content generation platforms
    • Experience with model serving frameworks (TorchServe, TensorRT)
    • Experience with training/fine-tuning image generation models (e.g., Stable Diffusion, Flux with LoRA)
