At Bennett Data Science, we’ve been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We’re at the top of our field because we focus on making our clients’ products better. Our deep experience and product-first attitude set us apart from other groups and get us the business results our clients want.
WHY YOU SHOULD WORK WITH US
You’ll be exposed to a wide range of clients who are at the cutting edge of innovation in their fields and get to work on fascinating problems, supporting real products with real data. We help companies ranging from some of the largest in the world to small Silicon Valley startups building the next big thing. Your perks include expert mentorship from senior staff, competitive compensation, a flexible work schedule, and the ability to work from any location of your choice.
Data Analyst
Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1
WHO WE ARE
At Bennett Data Science, we’ve been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We’re at the top of our field because we focus on making our clients’ products better. Our deep experience and product-first attitude set us apart from other groups and get us the business results our clients want.
WHY YOU SHOULD WORK WITH US
You’ll be exposed to a wide range of clients who are at the cutting edge of innovation in their fields and get to work on fascinating problems, supporting real products with real data. We help companies ranging from some of the largest in the world to small Silicon Valley startups building the next big thing. Your perks include expert mentorship from senior staff, competitive compensation, paid leave, a flexible work schedule, and the ability to travel internationally.
Data Analyst / Scientist - Specific Requirements:
As a junior data scientist or data analyst, you will be primarily involved in analyzing data and reporting on data distribution and quality issues. You will utilize your analytic skills to gain deeper insights into data and communicate your findings to support various data science initiatives. You will receive direct support from senior data scientists and strengthen your expertise.
A successful candidate has 3-6 years of experience; a working understanding of statistics, data visualization, and communication; some machine learning exposure; and exhibits the following skills:
- Experience with at least one visualization tool such as Tableau or Power BI
- Knowledge and experience describing data distributions using statistical methods
- Experience with metrics such as LTV and CAC, and with customer and product analysis
- Experience with initial and some exploratory data analysis (IDA/EDA)
- Proven ability to identify and propose solutions to various data quality issues
- Experience defining required data and querying databases to support defined objectives, including combining data from multiple sources through grouping and aggregation to produce the desired datasets (see the sketch after this list)
- Experience visualizing/presenting data for stakeholders
- Experience with Python
- Experience writing SQL queries
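As a rough illustration of the querying, aggregation, and metrics experience listed above, here is a minimal pandas sketch; the file names, columns, and the simple revenue-per-customer LTV definition are hypothetical, not a client dataset.

```python
# Illustrative sketch only: files, columns, and the LTV definition are hypothetical.
import pandas as pd

orders = pd.read_csv("orders.csv")        # order_id, customer_id, amount
customers = pd.read_csv("customers.csv")  # customer_id, acquisition_channel

# Combine the two sources, then group and aggregate per acquisition channel.
merged = orders.merge(customers, on="customer_id", how="left")
summary = (
    merged.groupby("acquisition_channel")
    .agg(customers=("customer_id", "nunique"), revenue=("amount", "sum"))
    .assign(ltv=lambda d: d["revenue"] / d["customers"])  # naive per-customer LTV
    .reset_index()
)
print(summary.sort_values("ltv", ascending=False))
```

The same shape of work is often done directly in SQL with JOIN and GROUP BY; the specific tool matters less than being able to define and defend the aggregation.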
MLOps Engineer
Full Remote · Countries of Europe or Ukraine · 3 years of experience · English - B1
Hi!
Thank you for taking some time to look at our requisition. We are a US-based company working on an AI product in the entertainment space. Our app is geared towards children and we are working with major film companies.
We have a strong, distributed team, mostly in Europe. We're looking for an experienced person to help us with MLOps.
Core MLOps Responsibilities:
- Model Deployment: Convert ComfyUI workflows to production Python pipelines
- Infrastructure Management: Multi-provider GPU orchestration (RunPod + future providers)
- CI/CD for ML: Automated model deployment and rollback systems
- Monitoring & Observability: Pipeline performance, model drift, and system health
- Scalability: Serverless GPU management and load balancing
- Model Lifecycle: Version control and hot-swapping of LoRAs
AI/ML Pipeline (Critical):
- Deep experience with Diffusion models (Stable Diffusion, Flux)
- Hands-on ComfyUI to Python conversion experience
- Computer vision libraries: OpenCV, PIL, torchvision
- Model inference optimization (batching, memory management)
- Experience with the diffusers library (see the sketch after this list)
- Experience with ControlNets, LoRA, and inpainting workflows
- Experience with GroundingDINO, SAM
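For context, here is a minimal sketch of the kind of diffusers-based inference described above, assuming the Hugging Face diffusers library; the model id, prompts, and step count are illustrative, not the production pipeline.

```python
# Minimal sketch, not the production pipeline: model id and parameters are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Batching several prompts in one call amortizes per-call overhead, one aspect of
# the inference optimization mentioned above.
prompts = ["a castle at sunset, storybook style", "a friendly robot reading a book"]
images = pipe(prompts, num_inference_steps=30).images
for i, img in enumerate(images):
    img.save(f"output_{i}.png")
```

In production this sits behind the queueing, monitoring, and GPU orchestration covered in the other sections.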
Backend Development:
- FastAPI/Python (mid/senior level; a rough sketch follows this list)
- Async programming and queue management
- PostgreSQL/AlloyDB
- RESTful API design with proper error handling
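As a minimal sketch of the async FastAPI plus queue hand-off pattern listed above; the route, request model, and queue size are assumptions, not our actual service.

```python
# Minimal sketch, assuming an in-process asyncio queue consumed by a background worker.
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
job_queue: asyncio.Queue = asyncio.Queue(maxsize=100)

class GenerationRequest(BaseModel):
    prompt: str

@app.post("/jobs", status_code=202)
async def submit_job(req: GenerationRequest):
    try:
        job_queue.put_nowait(req.prompt)  # hand off to a worker instead of blocking
    except asyncio.QueueFull:
        raise HTTPException(status_code=503, detail="Queue full, retry later")
    return {"status": "queued", "queue_depth": job_queue.qsize()}
```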
DevOps/Infrastructure:
- Docker containerization
- Google Cloud Platform (GCS, Cloud Run, CloudBuild)
- GitHub Actions
- CI/CD pipeline setup
- GPU provider platforms (RunPod is a nice-to-have)
GPU/Serverless:
- RunPod API integration (preferred) or other GPU providers
- GPU memory optimization
- Cold start minimization strategies
- Multi-provider orchestration patterns
Monitoring & Observability:
- Custom metrics for ML pipelines (a rough sketch follows this list)
- Performance monitoring and alerting
- Integration with data warehouse systems
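As one possible shape for the custom metrics mentioned above, a sketch assuming the prometheus_client library; the metric names and port are illustrative.

```python
# Sketch assuming prometheus_client; metric names and port are illustrative.
from prometheus_client import Counter, Histogram, start_http_server

IMAGES = Counter(
    "pipeline_images_generated_total", "Images generated", ["model_version"]
)
LATENCY = Histogram("pipeline_inference_seconds", "End-to-end inference latency")

def handle_request(model_version: str) -> None:
    with LATENCY.time():          # record latency for dashboards and alerting
        ...                       # run the actual pipeline step here
    IMAGES.labels(model_version=model_version).inc()

if __name__ == "__main__":
    start_http_server(9100)       # expose /metrics for the monitoring stack
```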
Nice-to-Have:
- Previous work with content generation platforms
- Experience with model serving frameworks (TorchServe, TensorRT)
- Experience with training/fine-tuning image generation models (e.g., Stable Diffusion, Flux with LoRA)
Generative AI Engineer (Image Focus)
Full Remote · Countries of Europe or Ukraine · Product · 2.5 years of experience · English - B1
Location: Remote
Type: Full-Time
Department: Engineering / Data Science / AI
We are a company based in California seeking a versatile Generative AI Engineer who sits at the intersection of software development and creative AI. You will not only generate and test high-quality AI imagery but also build and maintain the Python-based tooling that powers our workflows.
Our product is a cutting-edge AI tool that makes children the stars of their own custom stories. We have partnerships with some of the biggest entertainment companies in the world.
Most of our team is Ukrainian.
Key Responsibilities
- Tool Development & Maintenance:
- Maintain and improve our internal Python-based applications that generate images based on specific prompts (a rough sketch follows this list).
- Debug and enhance code in Jupyter Notebooks and manage version control via GitHub.
- Ensure smooth operation of tools through the Command Line Interface (CLI).
- Image Generation & Workflow Testing:
- Execute image generation tasks using SwarmUI (training provided, but aptitude required).
- Run rigorous testing on image outputs to ensure consistency and quality.
- Prompt Engineering:
- Write, test, and validate prompts to achieve specific visual results.
- Iterate on prompt strategies to improve the reliability of image models.
- Collaboration:
- Work closely with key stakeholders to interpret requirements and translate them into technical execution, mostly around efficient and effective prompting as it relates to high-quality images.
- Communicate technical constraints and wins to cross-functional teams.
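To make the tooling side concrete, here is a rough sketch of the kind of internal CLI utility described above; generate_image() is a hypothetical placeholder for whatever backend (SwarmUI, diffusers, etc.) is actually in use, and the arguments are illustrative.

```python
# Rough sketch of an internal CLI tool; generate_image() is a hypothetical stub.
import argparse

def generate_image(prompt: str, seed: int) -> bytes:
    """Hypothetical: call the image-generation backend and return PNG bytes."""
    raise NotImplementedError

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate a test image from a prompt")
    parser.add_argument("prompt", help="Text prompt to render")
    parser.add_argument("--seed", type=int, default=0, help="Seed for reproducible tests")
    parser.add_argument("--out", default="out.png", help="Output image path")
    args = parser.parse_args()

    with open(args.out, "wb") as f:
        f.write(generate_image(args.prompt, args.seed))
    print(f"Wrote {args.out}")

if __name__ == "__main__":
    main()
```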
Required Qualifications (The "Must-Haves")
- Python Proficiency (Mid-Level): You can write clean functions, debug existing code, and are comfortable working within Jupyter Notebooks.
- Command Line Confidence: You are comfortable navigating the OS, running scripts, and managing environments via CLI.
- Version Control: Standard proficiency with Git and GitHub (pull, push, merge, resolve conflicts).
- Prompt Engineering Basics: You understand how to structure a text prompt to get a desired image result and how to troubleshoot when the model hallucinates or fails.
- Language Skills: Strong professional English (written and verbal) is mandatory for communicating complex requirements with the team.
Preferred Qualifications (The "Nice-to-Haves")
- SwarmUI Experience: Familiarity with this specific interface or similar Stable Diffusion web UIs (ComfyUI, A1111).
- Cloud Autonomy: Experience setting up and managing cloud instances (AWS, GCP, or Lambda Labs) for GPU computing.
- Model Training: Experience training LoRAs, Dreambooth, or fine-tuning checkpoints.
- SOTA Knowledge: A genuine interest in the latest developments in Image Generation (Flux, SDXL, ControlNets, etc.).
Soft Skills & Cultural Fit
- High Autonomy: You don’t need to be micro-managed. If you see a bug in the tool or a way to optimize the prompt workflow, you fix it or propose a solution.
- Bridge Builder: You can act as a translator between technical code and creative visual requests.
- Fast Learner: You can pick up new tools (like SwarmUI) with a short learning curve.
Data Platform Engineer
Full Remote · Countries of Europe or Ukraine · Product · 6 years of experience · English - B1
WHO WE ARE
At Bennett Data Science, we’ve been pioneering the use of predictive analytics and data science for over ten years, for some of the biggest brands and retailers. We’re at the top of our field because we focus on actionable technology that helps people around the world. Our deep experience and product-first attitude set us apart from other groups and it's why people who work with us tend to stay with us long term.
WHY YOU SHOULD WORK WITH US
You'll work on an important problem that improves the lives of a lot of people. You'll be at the cutting edge of innovation and get to work on fascinating problems, supporting real products with real data. Your perks include expert mentorship from senior staff, competitive compensation, paid leave, a flexible work schedule, and the ability to travel internationally.
Essential Requirements for Data Platform Engineer:
- Architecture & Improvement: Continuously review the current architecture and implement incremental improvements, facilitating a gradual transition of production operations from Data Science to Engineering.
- AWS Service Ownership: Own the full lifecycle (development, deployment, support, and monitoring) of client-facing AWS services (including SageMaker endpoints, Lambdas, and OpenSearch). Maintain high uptime and adherence to Service Level Agreements (SLAs). A rough endpoint-invocation sketch follows this list.
- ETL Operations Management: Manage all ETL processes, including the operation and maintenance of Step Functions and Batch jobs (scheduling, scaling, retry/timeout logic, failure handling, logging, and metrics).
- Redshift Operations & Maintenance: Oversee all Redshift operations, focusing on performance optimization, access control, backup/restore readiness, cost management, and general housekeeping.
- Performance Optimization: Post-stabilization of core monitoring and pipelines, collaborate with the Data Science team on targeted code optimizations to enhance reliability, reduce latency, and lower operational costs.
- Security & Compliance: Implement and manage the vulnerability monitoring and remediation workflow (Snyk).
- CI/CD Implementation: Establish and maintain robust Continuous Integration/Continuous Deployment (CI/CD) systems.
- Infrastructure as Code (Optional): Utilize IaC principles where necessary to ensure repeatable and streamlined release processes.
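For illustration, a minimal boto3 sketch of invoking a client-facing SageMaker endpoint of the kind owned above; the endpoint name and payload are hypothetical.

```python
# Minimal sketch: endpoint name and payload are hypothetical, not production values.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"user_id": 123, "top_k": 10}
response = runtime.invoke_endpoint(
    EndpointName="example-recommendations-endpoint",  # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)
predictions = json.loads(response["Body"].read())
print(predictions)
```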
Mandatory Hard Skills:
- AWS Core Services: Proven experience with production fundamentals (IAM, CloudWatch, and VPC networking concepts).
- AWS Deployment: Proficiency in deploying and operating AWS SageMaker and Lambda services.
- ETL Orchestration: Expertise in using AWS Step Functions and Batch for ETL and job orchestration (a rough sketch follows this list).
- Programming & Debugging: Strong command of Python for automation and troubleshooting.
- Containerization: Competence with Docker/containers (build, run, debug).
- Version Control & CI/CD: Experience with CI/CD practices and Git (GitHub Actions preferred).
- Data Platform Tools: Experience with Databricks, or a demonstrated aptitude and willingness to quickly learn.
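As a small example of the Step Functions orchestration experience listed above, a hedged boto3 sketch for kicking off and checking an ETL state machine; the ARN, execution name, and input are placeholders.

```python
# Sketch only: the state machine ARN, execution name, and input are placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")

execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:example-etl",
    name="example-etl-2024-01-01",            # execution names must be unique
    input=json.dumps({"run_date": "2024-01-01"}),
)
status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
print(status)  # e.g. RUNNING, SUCCEEDED, FAILED
```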
Essential Soft Skills:
- Accountability: Demonstrate complete autonomy and ownership over all assigned systems ("you run it, you fix it, you improve it").
- Communication: Fluent in English; capable of clear, direct communication, especially during incidents.
- Prioritization: A focus on delivering a minimally supportable, deployable solution to meet deadlines, followed by optimization and cleanup.
- Incident Management: Maintain composure under pressure and possess strong debugging and incident handling abilities.
- Collaboration: Work effectively with the Data Science team while communicating technical trade-offs clearly and maintaining momentum.