Busy Rebel

Joined in 2020
32% answers
At Busy Rebel, we aim to redefine the technology and innovation landscape, creating a distinctive ecosystem of exceptional talent while fostering ownership, inclusion, and diversity. We strive to catalyze societal and technological evolution by empowering tech entrepreneurs to challenge norms and spearhead change.

Busy Rebel aims to forge unparalleled partnerships, driving global impact through creative rebellion. We commit to being the trusted ally that amplifies visions with expert insight and inventive strategies, fostering a community of disruptive thinkers for technological empowerment and societal betterment. Every partnership and project is a step toward a more interconnected and enlightened world.

Be part of a company where every day is an opportunity to learn, challenge, and transcend the ordinary. Your journey with Busy Rebel is not just a career; it’s a pivotal chapter in the larger story of technological revolution and societal evolution. Join us, and let’s build the future together.

    AI Engineer to $7000

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    About the Project

    Join a high-performing, cross-functional UK-based product team composed of top-tier engineers (including PhD holders). We’re building an enterprise-grade data platform designed to integrate, process, and analyze vast amounts of diverse data in real time.

    We are building a next-generation, data- and cloud-agnostic process intelligence solution that leverages process mining, knowledge graphs, an agentic ecosystem, and a canvas to optimize enterprise processes and proactively suggest and simulate improvements by deploying agents into the customer’s systems.

     

    What You’ll Do

    • Design and develop advanced AI components using LangChain, LangGraph, AutoGen, and other multi-agent orchestration frameworks
    • Build and optimize Retrieval-Augmented Generation (RAG) pipelines using tools such as Google AI Platform, Vertex AI, and Pinecone (a minimal sketch follows this list)
    • Engineer efficient prompts and fine-tune LLMs (e.g., GPT-4o, DeepSeek) for advanced data analytics capabilities
    • Develop AI-related backend services using Python and FastAPI
    • Deploy scalable solutions on Google Cloud Platform (GCP), working with services such as BigQuery, GKE, and Cloud Storage
    • Collaborate with backend and data engineering teams to integrate and expose AI functionality via well-structured APIs
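    To make the RAG and FastAPI responsibilities concrete, here is a minimal sketch of a retrieval-augmented query endpoint. It assumes LangChain with OpenAI models and an existing Pinecone index; the index name "process-docs", the route, and the model choices are illustrative, not the project's actual setup.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

app = FastAPI()

# Assumes OPENAI_API_KEY and PINECONE_API_KEY are set in the environment
# and that a Pinecone index named "process-docs" (hypothetical) already exists.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
store = PineconeVectorStore(index_name="process-docs", embedding=embeddings)
retriever = store.as_retriever(search_kwargs={"k": 4})
llm = ChatOpenAI(model="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the provided context.\n\nContext:\n{context}"),
    ("human", "{question}"),
])

class Query(BaseModel):
    question: str

@app.post("/ask")
def ask(q: Query):
    docs = retriever.invoke(q.question)                  # top-k relevant chunks
    context = "\n\n".join(d.page_content for d in docs)
    answer = llm.invoke(prompt.format_messages(context=context, question=q.question))
    return {"answer": answer.content, "sources": [d.metadata for d in docs]}
```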

       

    Must-Have Skills

    • 5+ years of experience in backend and AI system development
    • Strong hands-on experience with LangChain, AutoGen, and multi-agent frameworks
    • In-depth knowledge of LLMs, prompt engineering, NL2SQL, and RAG architectures
    • Proficient in Python and FastAPI
    • Proven experience with GCP services: Vertex AI, BigQuery, GKE, Cloud Storage
    • Experience building production-grade RAG pipelines
    • Excellent problem-solving and communication skills

       

    Nice-to-Have

    • Experience working with data engineering and DevOps tools
    • Familiarity with data governance frameworks and compliance standards (GDPR, HIPAA)
    • Exposure to real-time processing and scalable architecture patterns

       

    Why Join Us

    • Work with a world-class team on a cutting-edge AI and data platform
    • High-impact role with room for innovation and growth
    • Flexible schedule and remote-friendly setup
    • Competitive compensation
    • Collaborative, growth-oriented culture
    • Engage in technically challenging, meaningful projects


    Senior Data Engineer

    Part-time · Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    About the Platform

    We’re building a unified data ecosystem that connects raw data, analytical models, and intelligent decision layers.
    The platform combines the principles of data lakes, lakehouses, and modern data warehouses — structured around the Medallion architecture (Bronze / Silver / Gold).
    Every dataset is versioned, governed, and traceable through a unified catalog and lineage framework.
    This environment supports analytics, KPI computation, and AI-driven reasoning, designed for performance, transparency, and future scalability (in partnership with GCP, OpenAI, and Cohere).
     

    What You’ll Work On

    1. Data Architecture & Foundations

    • Design, implement, and evolve medallion-style data pipelines, from raw ingestion to curated, business-ready models (a minimal PySpark sketch follows this list).
    • Build hybrid data lakes and lakehouses using Iceberg, Delta, or Parquet formats with ACID control and schema evolution.
    • Architect data warehouses that unify batch and streaming sources into a consistent, governed analytics layer.
    • Ensure optimal partitioning, clustering, and storage strategies for large-scale analytical workloads.
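    As a rough illustration of the Bronze-to-Silver flow (not the platform's actual code), here is a minimal PySpark/Delta sketch; the paths, the `event_id` key, and the `event_ts` column are assumptions made for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw events verbatim, stamped with ingestion metadata.
raw = spark.read.json("gs://lake/landing/events/")
(raw.withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_ingest_date", F.current_date())
    .write.format("delta").mode("append")
    .partitionBy("_ingest_date")
    .save("gs://lake/bronze/events"))

# Silver: deduplicate, enforce types, and gate on basic quality rules.
bronze = spark.read.format("delta").load("gs://lake/bronze/events")
(bronze.dropDuplicates(["event_id"])                     # assumed natural key
    .filter(F.col("event_ts").isNotNull())               # minimal quality gate
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .write.format("delta").mode("overwrite")
    .save("gs://lake/silver/events"))
```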

    2. Data Ingestion & Transformation

    • Create ingestion frameworks for APIs, IoT, ERP, and streaming systems (Kafka, Pub/Sub).
    • Develop reproducible ETL/ELT pipelines using Airflow, dbt, Spark, or Dataflow.
    • Manage CDC and incremental data loads, ensuring freshness and resilience (see the upsert sketch after this list).
    • Apply quality validation, schema checks, and contract-based transformations at every stage.
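    A minimal sketch of the CDC upsert pattern using the Delta Lake MERGE API; the `order_id` key and the `op` change-type column are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# One CDC micro-batch: upsert changed rows into the Silver table and apply
# source-side deletes. "order_id" (key) and "op" (change type) are assumed.
changes = spark.read.format("delta").load("gs://lake/bronze/orders_cdc")

target = DeltaTable.forPath(spark, "gs://lake/silver/orders")
(target.alias("t")
    .merge(changes.alias("s"), "t.order_id = s.order_id")
    .whenMatchedDelete(condition="s.op = 'DELETE'")
    .whenMatchedUpdateAll(condition="s.op <> 'DELETE'")
    .whenNotMatchedInsertAll(condition="s.op <> 'DELETE'")
    .execute())
```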

    3. Governance, Cataloging & Lineage

    • Implement a unified data catalog with lineage visibility, metadata capture, and schema versioning.
    • Integrate dbt metadata, OpenLineage, and Great Expectations to enforce data quality.
    • Define clear governance rules: data contracts, access policies, and change auditability (a contract-check sketch follows this list).
    • Ensure every dataset is explainable and fully traceable back to its source.
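    The data-contract idea can be illustrated with a hand-rolled check like the one below; in the stack described above, Great Expectations suites or dbt tests would fill this role. Column names are invented for the example.

```python
from pyspark.sql import DataFrame

# Hand-rolled data-contract check (illustrative column names).
CONTRACT = {
    "required_columns": {"order_id", "customer_id", "amount", "event_ts"},
    "not_null": ["order_id", "event_ts"],
    "unique_key": "order_id",
}

def enforce_contract(df: DataFrame) -> None:
    missing = CONTRACT["required_columns"] - set(df.columns)
    assert not missing, f"schema drift: missing columns {missing}"
    for col in CONTRACT["not_null"]:
        nulls = df.filter(df[col].isNull()).count()
        assert nulls == 0, f"{nulls} null values in required column {col}"
    dupes = df.count() - df.dropDuplicates([CONTRACT["unique_key"]]).count()
    assert dupes == 0, f"{dupes} duplicate values of key {CONTRACT['unique_key']}"
```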

    4. Data Modeling & Lakehouse Operations

    • Design dimensional models and business data marts to power dashboards and KPI analytics.
    • Develop curated Gold-layer tables that serve as trusted sources of truth for analytics and AI workloads (see the Spark SQL sketch after this list).
    • Optimize materialized views and tune query performance for analytical efficiency.
    • Manage cross-domain joins and unified semantics across products, customers, or operational processes.
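    A minimal Spark SQL sketch of building a curated Gold-layer fact table from Silver inputs; the database, table, and column names are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Aggregate Silver tables into a business-ready Gold fact table.
spark.sql("""
    CREATE OR REPLACE TABLE gold.fct_daily_orders USING delta AS
    SELECT  o.order_date,
            c.segment,
            COUNT(*)      AS orders,
            SUM(o.amount) AS revenue
    FROM    silver.orders o
    JOIN    silver.customers c ON o.customer_id = c.customer_id
    GROUP BY o.order_date, c.segment
""")
```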

    5. Observability, Reliability & Performance

    • Monitor data pipeline health, freshness, and cost using modern observability tools (Prometheus, Grafana, Cloud Monitoring).
    • Build proactive alerting, anomaly detection, and drift monitoring for datasets (a freshness-probe sketch follows this list).
    • Implement CI/CD workflows for data infrastructure using Terraform, Helm, and ArgoCD.
    • Continuously improve query performance and storage efficiency across warehouses and lakehouses.
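    A minimal sketch of a freshness probe; in production the measurement would feed Prometheus or Cloud Monitoring rather than raising, and the path, column, and SLA here are assumptions.

```python
from datetime import datetime, timedelta
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Fail loudly when a Gold table has not been updated within its SLA.
SLA = timedelta(hours=2)  # illustrative service-level objective

latest = (spark.read.format("delta").load("gs://lake/gold/fct_daily_orders")
          .agg(F.max("_ingested_at").alias("ts")).first()["ts"])

if latest is None or datetime.utcnow() - latest > SLA:
    raise RuntimeError(f"Freshness SLA breached: last update at {latest}")
```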

    6. Unified Data & Semantic Layers

    • Help define a unified semantic model that connects operational, analytical, and AI-ready data.
    • Work with AI and analytics teams to structure datasets for semantic search, simulation, and reasoning systems.
    • Collaborate on vectorized data representation and process-relationship modeling (graph or vector DBs); a toy similarity sketch follows this list.
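    A toy sketch of the vectorized-representation idea, using random vectors in place of real embeddings (which, per the platform description, would come from OpenAI or Cohere and live in a vector DB):

```python
import numpy as np

# Rank items by cosine similarity to a query embedding.
def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
query = rng.random(8)                                    # stand-in query embedding
corpus = {f"dataset_{i}": rng.random(8) for i in range(5)}

ranked = sorted(corpus, key=lambda k: cosine(query, corpus[k]), reverse=True)
print(ranked[:3])                                        # top matches for semantic search
```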


    What We’re Looking For

    • 5+ years of hands-on experience building large-scale data platforms, warehouses, or lakehouses.
    • Strong proficiency in SQL, Python, and distributed processing frameworks (PySpark, Spark, Dataflow).
    • Deep understanding of Medallion architecture, data modeling, and modern ETL orchestration (Airflow, dbt).
    • Experience implementing data catalogs, lineage tracking, and validation frameworks.
    • Knowledge of data governance, schema evolution, and contract-based transformations.
    • Familiarity with streaming architectures, CDC patterns, and real-time analytics.
    • Practical understanding of FinOps, data performance tuning, and cost management in analytical environments.
    • Strong foundation in metadata-driven orchestration, observability, and automated testing.
    • Bonus: experience with ClickHouse, Trino, Iceberg, or hybrid on-prem/cloud data deployments.


    You’ll Excel If You

    • Think of data systems as living, evolving architectures — not just pipelines.
    • Care deeply about traceability, scalability, and explainability.
    • Love designing platforms that unify data across analytics, AI, and process intelligence.
    • Are pragmatic, hands-on, and focused on building systems that last.

    AI Engineer to $8000

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B2 - Upper Intermediate

    About the Project

    Join a high-performing, cross-functional UK-based product team composed of top-tier engineers (including PhD holders). We’re building an enterprise-grade data platform designed to integrate, process, and analyze vast amounts of diverse data in real time.

    We are building a next-generation, data- and cloud-agnostic process intelligence solution that leverages process mining, knowledge graphs, an agentic ecosystem, and a canvas to optimize enterprise processes and proactively suggest and simulate improvements by deploying agents into the customer’s systems.

     

    What You’ll Do

    • Design and develop advanced AI components using LangChain, LangGraph, AutoGen, and other multi-agent orchestration frameworks
    • Build and optimize Retrieval-Augmented Generation (RAG) pipelines using tools such as Google AI Platform, Vertex AI, and Pinecone
    • Engineer efficient prompts and fine-tune LLMs (e.g., GPT-4o, DeepSeek) for advanced data analytics capabilities (an NL2SQL prompting sketch follows this list)
    • Develop AI-related backend services using Python and FastAPI
    • Deploy scalable solutions on Google Cloud Platform (GCP), working with services such as BigQuery, GKE, and Cloud Storage
    • Collaborate with backend and data engineering teams to integrate and expose AI functionality via well-structured APIs
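    A minimal sketch of the NL2SQL prompting pattern named in the skills list below, using the OpenAI client; the schema and question are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Ground the model in the table schema and ask for a single SQL statement.
SCHEMA = """
orders(order_id INT, customer_id INT, amount NUMERIC, order_date DATE)
customers(customer_id INT, segment TEXT, country TEXT)
"""

def nl2sql(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Translate the question into one SQL query over this "
                        f"schema. Return SQL only.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

print(nl2sql("Total revenue by customer segment last month?"))
```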

       

    Must-Have Skills

    • 5+ years of experience in backend and AI system development
    • Strong hands-on experience with LangChain, AutoGen, and multi-agent frameworks
    • In-depth knowledge of LLMs, prompt engineering, NL2SQL, and RAG architectures
    • Proficient in Python and FastAPI
    • Proven experience with GCP services: Vertex AI, BigQuery, GKE, Cloud Storage
    • Experience building production-grade RAG pipelines
    • Excellent problem-solving and communication skills

       

    Nice-to-Have

    • Experience working with data engineering and DevOps tools
    • Familiarity with data governance frameworks and compliance standards (GDPR, HIPAA)
    • Exposure to real-time processing and scalable architecture patterns

       

    Why Join Us

    • Work with a world-class team on a cutting-edge AI and data platform
    • High-impact role with room for innovation and growth
    • Flexible schedule and remote-friendly setup
    • Competitive compensation
    • Collaborative, growth-oriented culture
    • Engage in technically challenging, meaningful projects


    Sr. Ruby on Rails

    Part-time · Full Remote · Countries of Europe or Ukraine · Product · 4 years of experience · B1 - Intermediate

    You will be responsible for designing, developing, and maintaining internal tools and platforms that power our operations, analytics, and AI-assisted workflows. This is a hands-on engineering role with significant autonomy, direct impact, and the ability to work in a dynamic, product-led environment.

     

    The ideal candidate is product-oriented, pragmatic, deeply experienced with Rails, and comfortable moving fast while maintaining engineering discipline.

     

    Key Responsibilities

    • Build, maintain, and enhance internal Rails-based applications and microservices.
    • Design clean, scalable, and secure backend architecture for new internal products.
    • Develop REST/GraphQL APIs for front-end and platform integrations.
    • Collaborate with product leadership to refine requirements, scope features, and deliver iterative improvements.
    • Implement background jobs, automation pipelines, data processing components, and integrations with third-party APIs.
    • Ensure system reliability, performance, observability, and maintainability across environments.
    • Contribute to DevOps, CI/CD, and deployment improvements where relevant.
    • Uphold code quality through testing, reviews, and best practices.

     

     

    Required Skills & Experience

    • 4+ years of professional experience with Ruby on Rails.
    • Strong proficiency in PostgreSQL, ActiveRecord, and relational schema design.
    • Experience building and consuming APIs (REST; GraphQL is a plus).
    • Solid understanding of background processing (Sidekiq or equivalent).
    • Experience with authentication, authorization, and security best practices.
    • Familiarity with modern frontend stacks (React, Vue, or similar) is an advantage.
    • Comfortable working in environments with evolving requirements and rapid iterations.
    • Ability to take ownership of features and drive them across the entire lifecycle.

     

     

    Nice-to-Have

    • Knowledge of microservice architectures and event-driven patterns.
    • Exposure to AI systems, LLM integrations, or automation pipelines.
    • Experience with Docker, Kubernetes, AWS, or GCP.
    • Understanding of product development principles and internal tooling needs.

     
