Senior DevOps engineer
Full Remote · Worldwide · 5 years of experience · B2 - Upper Intermediate
We are a boutique cloud consulting company looking to expand our remote team of 50+ engineers.
As a services company, we focus on building long-term relationships with our clients. We don't believe in micromanagement (no time tracking here) and expect our engineers to take ownership while being proactive and creative in their work.
We specialize in DevOps and Cloud operations, primarily on AWS and Google Cloud, including expertise in GenAI/LLM-Ops supported by our AI team. Our team also has extensive Kubernetes experience, having worked with the container orchestration platform since its early development stages.
We are seeking engineers with strong communication skills to help our clients:
- Migrate from manual cloud configuration to infrastructure-as-code using Terraform/Terragrunt, CloudFormation & Crossplane
- Containerize applications and implement Docker workflows in existing build pipelines using GitHub Actions, GitLab, CircleCI, or Jenkins
- Handle day-one operations: deploy Kubernetes clusters across cloud providers and configure essential add-ons like Karpenter and ingress controllers
- Manage day-two operations: package applications with Helm, implement GitOps-based continuous deployment using ArgoCD/Flux, optimize costs using Graviton/Arm64 and Spot instances, support incident resolution, and deploy Istio for mTLS and observability
- Enhance application observability through monitoring tools including Prometheus, Loki, Grafana, EFK, eBPF-based solutions like Pixie, and advanced services such as APM and RUM
- Strengthen application security through DevSecOps practices, leveraging cloud tools like CloudTrail, SecurityHub, GuardDuty, and integrating security scanners such as Snyk and Wiz into CI/CD pipelines
- Support AI engineers in building and managing AI/ML pipelines, integrate infrastructure and data with LLMs, and optimize GPU efficiency through scheduling optimization, GPU Time Slicing, etc.
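To give a flavor of the day-two cost work above, here is a minimal Python sketch of rendering a Kubernetes Deployment manifest pinned to Arm64 (Graviton) nodes and tolerant of a Spot-capacity taint. The app name, image, and the `spot` taint key are illustrative assumptions, not a prescription; real clusters vary.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment as a plain dict.

    Pins pods to Arm64 (e.g. Graviton) nodes and tolerates a taint
    commonly placed on Spot capacity -- the cost levers mentioned above.
    The "spot" taint key is hypothetical; adjust to your cluster's labels.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # Schedule onto Arm64 nodes only.
                    "nodeSelector": {"kubernetes.io/arch": "arm64"},
                    # Allow scheduling onto tainted Spot capacity.
                    "tolerations": [{
                        "key": "spot",
                        "operator": "Exists",
                        "effect": "NoSchedule",
                    }],
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

if __name__ == "__main__":
    # Kubernetes accepts JSON manifests directly (kubectl apply -f).
    print(json.dumps(deployment_manifest("web", "nginx:1.27"), indent=2))
```

In practice this kind of templating is usually done with Helm or Kustomize rather than hand-rolled Python; the sketch only illustrates the scheduling knobs involved.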
Being part of a team of 50+ senior DevOps engineers comes with great benefits: you'll always have someone to learn from and colleagues ready to help with technical questions or brainstorming sessions.
If you're interested and would like to learn more about the position, please reach out to us.
Talk soon!
Data Engineer
Full Remote · Countries of Europe or Ukraine · 4 years of experience · B2 - Upper Intermediate
We are seeking a talented and experienced Data Engineer to join our professional services team of 50+ engineers on a full-time basis. This remote-first position requires in-depth expertise in data engineering, with a preference for experience in cloud platforms like AWS and Google Cloud. You will play a vital role in ensuring the performance, efficiency, and integrity of our customers' data pipelines while contributing to insightful data analysis and utilization.
About us: Opsfleet is a boutique services company that specializes in cloud infrastructure, data, AI, and human‑behavior analytics to help organizations make smarter decisions and boost performance.
Our experts provide end‑to‑end solutions—from data engineering and advanced analytics to DevOps—ensuring scalable, secure, and AI‑ready platforms that turn insights into action.
Role Overview
As a Data Engineer at Opsfleet, you will lead the entire data lifecycle—gathering and translating business requirements, ingesting and integrating diverse data sources, and designing, building, and orchestrating robust ETL/ELT pipelines with built‑in quality checks, governance, and observability. You’ll partner with data scientists to prepare, deploy, and monitor ML/AI models in production, and work closely with analysts and stakeholders to transform raw data into actionable insights and scalable intelligence.
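A pipeline with "built-in quality checks" can be sketched in a few lines of Python. The field names and validation rule below are purely illustrative assumptions; a real extract would pull from an API, object store, or database rather than an in-memory string.

```python
import csv
import datetime
import io

# Toy raw extract; field names are illustrative.
RAW = "user_id,signup_date\n1,2024-01-05\n2,not-a-date\n3,2024-02-11\n"

def extract(raw: str) -> list[dict]:
    """Extract: parse CSV records into dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Transform with a built-in quality check: keep rows whose
    signup_date is valid ISO 8601, quarantine the rest for review."""
    good, quarantined = [], []
    for row in rows:
        try:
            datetime.date.fromisoformat(row["signup_date"])
            good.append(row)
        except ValueError:
            quarantined.append(row)
    return good, quarantined

if __name__ == "__main__":
    good, quarantined = transform(extract(RAW))
    print(len(good), len(quarantined))  # → 2 1
```

Quarantining bad rows instead of silently dropping them is what gives the pipeline observability: the rejected records become a metric and a review queue.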
What You’ll Do
* E2E Solution Delivery: Lead the full spectrum of data projects—requirements gathering, data ingestion, modeling, validation, and production deployment.
* Data Modeling: Develop and maintain robust logical and physical data models—such as star and snowflake schemas—to support analytics, reporting, and scalable data architectures.
* Data Analysis & BI: Transform complex datasets into clear, actionable insights; develop dashboards and reports that drive operational efficiency and revenue growth.
* ML Engineering: Implement and manage model‑serving pipelines using cloud’s MLOps toolchain, ensuring reliability and monitoring in production.
* Collaboration & Research: Partner with cross‑functional teams to prototype solutions, identify new opportunities, and drive continuous improvement.
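The star schema mentioned in the data-modeling bullet above can be sketched with Python's built-in sqlite3: one fact table carrying foreign keys into dimension tables. All table and column names here are illustrative assumptions.

```python
import sqlite3

# Minimal star schema: a central fact table referencing dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        date_id    INTEGER REFERENCES dim_date(date_id),
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    );
    INSERT INTO dim_date    VALUES (1, '2024-01-01'), (2, '2024-01-02');
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales  VALUES (1, 1, 1, 9.5), (2, 1, 2, 4.0), (3, 2, 1, 6.5);
""")

# Typical analytics query: revenue per product via a fact-dimension join.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(rows)  # → [('gadget', 4.0), ('widget', 16.0)]
```

A snowflake schema would further normalize the dimensions (e.g. splitting product category into its own table); the trade-off is fewer redundant values at the cost of extra joins.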
What We’re Looking For
Experience: 4+ years in a data‑focused role (Data Engineer, BI Developer, or similar)
Technical Skills: Proficient in SQL and Python for data manipulation, cleaning, transformation, and ETL workflows. Strong understanding of statistical methods and data modeling concepts.
Soft Skills: Excellent problem‑solving ability, critical thinking, and attention to detail. Outstanding written and verbal communication.
Education: BSc or higher in Mathematics, Statistics, Engineering, Computer Science, Life Science, or a related quantitative discipline.
Nice to Have
Cloud & Data Warehousing: Hands‑on experience with cloud platforms (GCP, AWS or others) and modern data warehouses such as BigQuery and Snowflake.
We're a boutique DevOps consulting company with a focus and experience in Kubernetes-related projects.
Website: https://www.opsfleet.com/