Jobs DevOps

  • 135 views · 38 applications · 12d

    DevOps Engineer for a Data/AI-based product

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B2

    People and stability are our key priorities.
    We are a small team, spread across Ukraine, Denmark and Norway.
    Friendly atmosphere with no rush and high focus. This is a long-term project.


    Our products are designed for industrial and energy sectors, including oil and gas. We work in the IoT, Machine Learning, and AI technology domains.

    We are looking for a candidate with 3+ years of experience
    and a good command of the following skills and technologies:


    - K8S, Docker Compose
    - Helm
    - IaC: Ansible, Terraform or Pulumi
    - Scripting: Python, bash
    - Linux administration: process, memory, disk, and network troubleshooting, secure configuration
    - Cloud Services: AWS or Azure (we use Azure)
    - Monitoring: Grafana stack (Grafana, Prometheus, Loki, etc.)
    - Confident English communication (speaking and writing), with the ability to lead deployment and troubleshooting calls

    You will be working on the following:


    - Install, configure, and monitor on-prem and cloud deployments
    - Troubleshoot and optimize live deployments
    - Implement and maintain CI/CD pipelines to enable reliable and automated deployments
    - Optimize system performance, scalability and cost efficiency
    - Ensure security best practices, access control, and secrets management
    - Support development teams with environment setup and deployment

    - Collaborate inside cross-functional team of engineers (and great personalities)
    - Grow your expertise and have fun

    We Offer:
    - Competitive compensation for the provided services
    - Official contract and full taxes compensation
    - Remote work
    - Flexible working hours
    - Minimal bureaucracy
    - Paid vacation, sick leaves, national holidays

  • 74 views · 0 applications · 12d

    DevOps engineer (Crypto exchange ecosystem)

    Full Remote · Canada · Product · 4 years of experience · English - B2

    About the Company

    Our client is a financial technology services provider dedicated to expanding global access to digital assets by building innovative solutions in the cryptocurrency ecosystem. Their products provide exchanges, liquidity solutions, and enterprise-grade integrations for clients worldwide. Specialized in building Crypto, Web3, and various White-Label Solutions, the company is now looking for a skilled DevOps Engineer to support and automate their general infrastructure.

    In this role, you will be a core member of the engineering team, applying your technical foundation to a rapidly growing organization. You will thrive on solving complex technical challenges, writing clean automation, and collaborating in a fast-paced environment.

    What You'll Do

    You will be a key builder in the development lifecycle, focusing on automating established systems and delivering robust infrastructure. Your main responsibilities include:

    • Infrastructure Delivery: Work closely with development teams to deliver infrastructure using Terraform CDKTF. Cross-team communication is a key part of the job!
    • Pipeline Maintenance: Maintain and extend the build pipeline, including CI architecture in GitHub Actions, building container images with Puppet and Packer, and resolving service issues in collaboration with developers.
    • Tooling & Automation: Develop TypeScript CLI tooling to automate all facets of your job responsibilities.
    • Optimization: Ensure the stability and reliability of the platform's general infrastructure.

    Your AI & Automation Advantage

    We believe AI is a powerful co-pilot for modern development. We're looking for someone eager to use AI to supercharge their work and enhance infrastructure automation.

    Who You Are

    You are a proactive, detail-oriented, and adaptable engineer who thrives in a fast-paced environment. You are:

    • A Clear Communicator: You have strong verbal and written English skills and can effectively articulate complex technical decisions to both technical teammates and stakeholders.
    • A Problem-Solver: You are self-motivated, resilient, and enjoy tackling complex technical puzzles to build robust solutions.
    • A Quick Learner: You have a curious, tech-savvy mindset and are either already in the crypto space or eager to enter it.
    • A Team Player: You value collaboration and feedback, contributing positively to the team's shared engineering culture.

    Qualifications

    • Experience: 4+ years of professional experience in Platform Engineering, DevOps, or a related field.
    • AWS Expertise: 3+ years of hands-on experience working with AWS.
    • Core Technical Skills:
      • Proficiency with Linux and TypeScript.
      • Strong experience with Terraform CDKTF.
    • English: Strong verbal and written communication skills (Upper-Intermediate level or higher).

    Nice to Have:

    • Experience or knowledge in the Web3/Blockchain area.
    • Familiarity with Puppet, Packer, Travis, or Serverless/ECS architectures.

    We Offer

    • Flexible working hours and all official holidays.
    • Time Off: Paid vacation and sick leaves.
    • Support & Growth: English classes (up to 3x weekly), mentoring, and educational programs.
    • Perks: Advanced bonus system.
    • Community: Regular corporate activities, including team buildings, tech events, and sports.
  • 55 views · 2 applications · 12d

    Senior DevOps Engineer (AWS) (US Hours)

    Full Remote · United States · 5 years of experience · English - C1

    AWS, Linux, Docker, Kubernetes, Terraform, Logging solutions (ELK, EFK, Graylog), Monitoring solutions (Prometheus+Grafana), Configuration management tools (Ansible, Puppet, SaltStack, Chef), Hands-on experience with SQL (MySQL, PostgreSQL, MariaDB), NoSQL databases (MongoDB, Redis, DynamoDB, Elasticsearch), Message Broker systems (RabbitMQ, ActiveMQ, Kafka), CI/CD tools and version control (Jenkins, ArgoCD, GitHub Actions)


    Profisea is an Israeli DevOps and Cloud boutique company with a full cycle of services. For more than nine years, we have been implementing best practices of GitOps, DevSecOps, and FinOps, and providing Kubernetes-based infrastructure services to help businesses of all sizes, from SMBs and SMEs to large enterprise clients, stay innovative and effective.


    This is a full-time remote role for a DevOps Engineer. As a DevOps Engineer, your day-to-day tasks will involve infrastructure as code (IaC), software development, continuous integration, system administration, and Linux. You will be responsible for implementing and managing DevOps processes, optimizing cloud networks, and ensuring efficient deployment and operation of systems.

    Requirements:

    • Proficiency in infrastructure as code (IaC) and relevant tools
    • Experience in software development and continuous integration
    • Strong system administration skills
    • Advanced knowledge of Linux
    • Ability to work independently and remotely
    • Excellent problem-solving and troubleshooting skills
    • Good communication and collaboration skills
    • English – Intermediate+

    Would be an advantage:

    • Experience with cloud platforms and technologies
    • Relevant certifications in DevOps or cloud technologies

    What we offer:

    • Competitive salary and social package 
    • Flexible working hours 
    • Mentorship and professional certifications support 
    • Rewarding working environment and flexible career opportunities

  • 87 views · 6 applications · 13d

    Infrastructure Engineer

    Hybrid Remote · Ukraine, Poland, Cyprus · Product · 3 years of experience · English - B1

    Our Mission and Vision

    At Solidgate, our mission is clear: to empower outstanding entrepreneurs to build exceptional internet companies. We exist to fuel the builders — the ones shaping the digital economy — with the financial infrastructure they deserve. To achieve that, we're on a bold path: to become the #1 payments orchestration platform in the world.

    We believe the future of payments is shaped by people who think big, take ownership, and bring curiosity and drive to everything they do. That's exactly the kind of teammates we want on board.

    Uniqueness of the Role

    We're scaling our Infrastructure Stream and need an Infra Engineer who wants to build, not just maintain.

    Right now, our Infra Engineers keep Solidgate's backend services running smoothly. But we're also investing in something bigger: implementing Kubernetes for our internal management and analytics infrastructure. And you will be the one in charge of it.

    This isn't a standard ops role. There's a real R&D component here — you'll evaluate whether and how Kubernetes fits our needs, experiment with solutions, and make calls that shape our infrastructure for years.

    If you want to work with Kubernetes but you're tired of inheriting someone else's setup, this is your chance to do it right from day one.

    The rest of your time? Solving the everyday infrastructure challenges that keep a fintech platform running at scale.

    You'll work side by side with Mykyta, our Head of Infrastructure, who's scaled our infrastructure from the ground up. He doesn't just set the strategy — he's hands-on, rolling up his sleeves and driving improvements alongside the team of 4 other qualified Infra Engineers.

    Interesting fact: our Infra team is committed to the long haul — on average, they've been at Solidgate for 5 years, driving innovation and shaping our infrastructure every step of the way.
    Explore our technology stack ➡️ https://solidgate-tech.github.io/.

    What you'll own

    — Supercharging our CI/CD pipelines to make releases faster and smoother

    — Bringing our infrastructure fully in line with Infrastructure as Code principles

    — Leading an R&D initiative to introduce Kubernetes for internal management and analytics infrastructure

    — Maintaining and modernizing existing systems to keep them rock-solid

    — Collaborating closely with the whole tech team: Data Engineers, Development, and Product

    — Keeping our infrastructure and processes docs up-to-date

    You're a great fit if you have

    — 5+ years of hands-on experience as a DevOps or Infrastructure Engineer

    — Solid Linux skills — networking, process management, storage, the whole stack

    — Proven experience with AWS — VPC, EC2, ECS, RDS, CloudFront, Lambda

    — Hands-on experience with Kubernetes and a strong understanding of when it makes sense (and when it doesn't)

    — Experience with the HashiCorp stack — Terraform, Vault

    — Background in microservices platforms — building infrastructure with microservices in mind and designing inter-service interactions

    Nice to have

    — Experience working on fintech projects

    — Hands-on experience with PCI DSS certification processes

    — Development experience in Python, Go, or similar

    — Familiarity with HashiCorp Packer and Consul

    Why Solidgate?

    • High-impact role. You're not inheriting a perfect system — you're building one.
    • Great product. We've built a fintech powerhouse that scales fast. Solidgate isn't just an orchestration player — it's the financial infrastructure for modern Internet businesses. From subscriptions to chargeback management, fraud prevention, and indirect tax — we've got it covered.
    • Massive growth opportunity. Solidgate is scaling rapidly — this role will be a career-defining move.
    • Top-tier tech team. Work alongside our driving force — a proven, results-driven engineering team that delivers.
    • Modern engineering culture. TBDs, code reviews, solid testing practices, metrics, alerts, and fully automated CI/CD.

    💌 The Extras: 30+ days off, unlimited sick leave, free office meals, health coverage, and Apple gear to keep you productive. Courses, conferences, sports and wellness benefits — all designed for ideas, focus, and fun.

    Tomorrow's fintech needs your mindset. Come build it with us.

  • 114 views · 22 applications · 13d

    Senior DevOps Engineer (CDE)

    Full Remote · EU · Product · 5 years of experience · English - B1

    Description

    Our team is developing a set of services that support the company's operational activities, from the gaming platform itself to back-offices and data analytics. The product is being developed by a small team of experienced professionals.

    We work extensively with databases and distributed systems, discuss and implement architectural solutions, and conduct R&D procedures to check the performance of certain solutions. We are looking for a team member who will work as part of the DevOps Team and collaborate closely with cross-functional teams to maintain and improve the current delivery workflow.


    Requirements

    • Experience in DevOps or related roles, ideally within a fast-paced tech environment;
    • Proficiency in Helm Chart development;
    • Familiarity with IaC (Terraform) for backend infrastructure;
    • Familiarity with GitOps best practices;
    • Familiarity with AWS Services;
    • Experience with configuration management tools;
    • Knowledge of CI/CD pipelines and best practices;
    • Experience with continuous integration systems (Jenkins, GitHub Actions);
    • Ability to troubleshoot complex issues and implement effective solutions;
    • Excellent communication skills and the ability to work collaboratively in a team environment;
    • Willingness to learn new technologies and adapt to evolving project requirements;
    • Experience with Kubernetes (self-hosted);
    • Experience with ArgoCD and its ecosystem;
    • 2+ years of Python experience;
    • Scripting language skills are a plus (Bash, etc.).

    Responsibilities

    • Collaborating with the team to develop Helm Charts for our applications;
    • Contributing to the Terraform backend team's codebase, ensuring efficient and scalable infrastructure;
    • Configuring our development, testing and production applications to ensure optimal performance and reliability;
    • Managing releases, ensuring smooth deployments;
    • Overseeing continuous integration processes, including auto tests, quality checks, security checks, and more;
    • Streamlining and optimizing Jenkins pipelines for efficiency and reliability.

    Technologies

    • Python (Flask/FastAPI, Celery, SQLAlchemy);
    • PostgreSQL; Cassandra; Redis; KeyDB;
    • GIT, CI/CD (Jenkins);
    • Docker;
    • Kubernetes; Helm; ArgoCD;
    • Terraform;
    • AWS Services (S3, SQS, SNS, Firehose, Glue);
    • ELK Stack; Grafana; Prometheus/VictoriaMetrics.


    Benefits

    Why Join Us?

    🎰 Be part of the international iGaming industry – Work with a top European solution provider and shape the future of online gaming;

    💕 A Collaborative Culture – Join a supportive and understanding team;

    💰 Competitive salary and bonus system – Enjoy additional rewards on top of your base salary;

    📆 Unlimited vacation & sick leave – Because we prioritize your well-being;

    📈 Professional Development – Access a dedicated budget for self-development and learning;

    🏥 Healthcare coverage – Available for employees in Ukraine and compensation across the EU;

    🫂 Mental health support – Free consultations with a corporate psychologist;

    🇬🇧 Language learning support – We cover the cost of foreign language courses;

    🎁 Celebrating Your Milestones – Special gifts for life's important moments;

    ⏳ Flexible working hours – Start your day anytime between 9:00-11:00 AM;

    🏢 Flexible Work Arrangements – Choose between remote, office, or hybrid work;

    🖥 Modern Tech Setup – Get the tools you need to perform at your best;

    🚚 Relocation support – Assistance provided if you move to one of our hubs.

  • 157 views · 65 applications · 13d

    Senior DevOps Engineer (AWS)

    Full Remote · Worldwide · 5 years of experience · English - B2

    About Digis
    Digis is a European IT company with 200+ specialists delivering complex SaaS products, enterprise solutions, and AI-powered platforms worldwide. We partner with clients from the US, UK, and EU to build long-term development teams and provide transparency, stability, and continuous professional growth for all our engineers.

    About the Project
    Our Client is a Belgian startup building low-latency machine learning systems that predict how markets react to breaking news and real-time social signals. The platform processes high-frequency data streams and delivers predictions in under 200ms. You will join an execution-driven engineering team and play a key role in ensuring system scalability, reliability, and cost efficiency.

    Tech Stack
    Cloud: AWS
    Automation & Tooling: Python
    Database: PostgreSQL
    Monitoring: Datadog
    Other: CI/CD, REST APIs, Slack, ClickUp

    Requirements

    • 5+ years of experience as a DevOps Engineer
    • 3+ years of hands-on experience with AWS
    • Strong recent experience with Python (automation, tooling, pipelines)
    • English level: Upper-Intermediate+


    Responsibilities

    • Design, maintain, and optimize AWS infrastructure for scalability, reliability, and cost efficiency
    • Improve and maintain CI/CD pipelines and release workflows
    • Automate infrastructure and operational processes using Python
    • Support and optimize ML and data processing pipelines
    • Implement monitoring and observability practices
    • Collaborate closely with engineers on system reliability and architecture
    • Participate in infrastructure planning and optimization


    We Offer

    • 20 paid vacation days per year
    • 5 paid sick leaves (no medical documents required)
    • Personalized development plan + training compensation
    • English courses compensation
    • Work equipment if needed (PC/laptop/monitor)
    • Flat, transparent internal communication
    • Ability to switch between projects and technologies inside Digis
    • Full accounting and legal support
    • Free corporate psychologist sessions
  • 48 views · 6 applications · 13d

    OpenStack Infrastructure Engineer (IRC287887)

    Part-time · Full Remote · Croatia, Poland, Slovakia, Ukraine · 6 years of experience · English - B2

    Job Description

    - At least 8 years of experience in virtualization and cloud infrastructure (VMware, KVM, OpenStack). 

    - Extensive hands-on experience with OpenStack administration, deployment, and operation in a production environment.

    - Proven experience administering VMware vSphere (ESXi, vCenter) and ability to analyze source infrastructure for migration.

    - Deep knowledge of compute, networking (including NSX or equivalent), and storage (Ceph or similar) technologies on both VMware and OpenStack platforms.

    - Proficiency in Linux administration and scripting (Python, Bash, Ansible, Terraform).

    - Experience with containerization (Docker, Kubernetes). Experience with Ceph or other storage backends is a plus.

    - Demonstrated experience in cloud migration projects, specifically from a proprietary platform to OpenStack or a similar open-source cloud, is a big plus.

    - Strong troubleshooting, communication, and documentation skills.

    - Certifications such as Certified OpenStack Administrator (COA) or relevant vendor-specific cloud certifications are a plus.

    - Upper-Intermediate or higher English level.

    - Experience working with Japanese clients is a big plus.

    Job Responsibilities

    The candidate will be responsible for defining the migration strategy, ensuring data integrity, and optimizing the workloads for the OpenStack architecture. Your responsibilities will include, but will not be limited to:

    - Design and implement workload discovery, assessment, and migration roadmap from VMware to OpenStack environment.

    - Deploy and configure OpenStack components (Nova, Neutron, Glance, Cinder, Keystone).

    - Migrate workloads (VMs, templates, volumes) from vSphere to OpenStack compute nodes.

    - Reconfigure networking and storage for compatibility with OpenStack architecture.

    - Collaborate with DevOps and automation teams to streamline migration using scripts and APIs.

    - Develop automation scripts (e.g., using Python, Ansible, or Terraform) to streamline the migration process and infrastructure provisioning.

    - Ensure network and security parity between the source (VMware) and destination (OpenStack) environments.

    - Troubleshoot and resolve complex issues related to data transfer, networking, and workload compatibility during the migration.

    - Train and mentor internal teams on OpenStack operations and best practices.

    - Create operational documentation and conduct handover to infrastructure teams.

    Department/Project Description

    Our client is a leading telecommunications and internet service provider based in Japan, offering a range of innovative solutions including network services, cloud computing and security solutions to businesses and individuals. By leveraging the technological expertise gained through providing Internet connectivity services, the company has expanded its business portfolio as a total solutions provider, offering outsourcing services that include cloud computing, Wide-Area Network (WAN) services, systems integration services, and more.

    The client is transitioning from a VMware-based private cloud to an OpenStack infrastructure to reduce licensing costs, gain open-source flexibility, and improve integration with containerized workloads. The project includes designing, deploying, and executing migration strategies for VMs, storage, and networking services from VMware to OpenStack.

  • 41 views · 7 applications · 13d

    Senior DevOps Engineer (AWS) IRC287083

    Full Remote · Ukraine · 5 years of experience · English - B2

    Description

    Our client is an innovative manufacturer of medical devices in the United States that produces devices and software applications.

    On this project, you will have a great opportunity to be involved in the full development life cycle of medical software intended to help individuals by processing information taken from medical devices to identify health trends and track daily activities. Additionally, there are opportunities to work with medical devices as part of end-to-end testing.

    Requirements:

    1. Proficiency in AWS services such as S3, EC2, RDS, Lambda, Glue, Athena, IAM, KMS, Lake Formation, and DataZone
    2. Experience with Terraform
    3. Knowledge of continuous integration and continuous deployment practices and tools like GitLab CI
    4. Languages: Python, HCL (Terraform)
    5. Version Control: GitLab
    6. Experience with both relational (SQL) and non-relational (NoSQL) databases, and the ability to model data appropriately for each type
    7. Knowledge of data governance principles to ensure data quality, consistency, and compliance
    8. Proficiency in designing and managing data pipelines using ETL tools and frameworks
    9. Experience with monitoring and ensuring the reliability and performance of data systems
    10. Understanding of serverless computing concepts
    11. Security & Compliance best practices for sensitive data processing

    MUST HAVE:
    – Infrastructure as Code: Experience with Terraform and CloudFormation; proven ability to write and manage Infrastructure as Code (IaC)

    – AWS Platform: Working experience with AWS services, in particular serverless architectures (S3, RDS, Lambda, IAM, etc.)

    – CI/CD Tools: Experience setting up and managing CI/CD pipelines using GitLab CI, Jenkins, or similar tools

    – Scripting and Automation: Experience in a scripting language such as Python, PowerShell, etc.

    – Containerization and Orchestration: Experience with Docker, AWS EKS, and Kubernetes for container management and orchestration

    – Monitoring and Logging: Familiarity with monitoring, logging, and visualization tools such as CloudWatch, ELK, Dynatrace, Prometheus, Grafana, etc.

    – Source Code Management: Expertise with git commands and associated VCS (GitLab, GitHub, Gitea, or similar)

    – Documentation: Experience with AsciiDoc or Markdown for creating technical documentation

    NICE TO HAVE:
    – Previous Healthcare or Medical Device experience

    – Experience working with Healthcare Data, including HL7v2, FHIR, and DICOM

    – Building software classified as Software as a Medical Device (SaMD)

    – Experience implementing enterprise-grade cyber security and privacy by design into software products

    – Experience working in Digital Health software

    – Experience developing global applications

    – Strong understanding of SDLC, including Waterfall and Agile methodologies

    – Software estimation

    – Experience leading software development teams onshore and offshore

    Job responsibilities

    • Design, develop, and maintain data pipelines and workflows using AWS services
    • Implement Infrastructure as Code (IaC) to automate the provisioning and management of cloud resources
    • Develop and maintain CI/CD pipelines
    • Collaborate with data engineers, data scientists, and other stakeholders to understand data requirements and deliver solutions
    • Monitor and ensure data observability to maintain data quality and reliability
    • Troubleshoot and resolve issues related to data pipelines and infrastructure
    • Provide technical guidance, direction, and governance within the product team
    • Hands-on development and coding
    • Participation in code reviews
    • Implement best practices for data governance, security, and compliance

    KEY RESPONSIBILITIES

    – Develops, documents, and configures systems specifications that conform to defined architecture standards, address business requirements, and support processes in the DevOps domain.

    – Ensures seamless deployments of teams' infrastructure from version control systems to AWS environments.

    – Creates and manages reusable infrastructure templates to reduce deployment time and improve consistency.

    – Develops and maintains CI/CD pipelines across all stages of development, ranging from build, testing, and linting to deployment.

    – Involved in planning of system and development deployment, as well as responsible for meeting compliance and security standards.

    – Documents detailed technical system specifications based on business system requirements.

    – Actively identifies system functionality or performance deficiencies, executes changes to existing systems, and tests functionality of the system to correct deficiencies and maintain more effective data handling, data integrity, conversion, input/output requirements, and storage.

    – May document testing and maintenance of system updates, modifications, and configurations.

    – May act as a liaison with key technology vendor technologists or other business functions.

    – Function Specific: Strategically designs technology solutions that meet the needs and goals of the company and its customers/users.

    – Leverages platform process expertise to assess whether existing standard platform functionality will solve a business problem or if a customized solution would be required.

     

    What we offer

    Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. 

    Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

    Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

    Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

    High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

     


     

  • · 22 views · 1 application · 13d

    Azure DevOps Engineer (business trips)

    Hybrid Remote · Countries of Europe or Ukraine · 5 years of experience · English - B2
    Project Description: Our client is a large international enterprise operating modern cloud-based platforms and services. The project focuses on building, improving, and operating Azure-based cloud environments, ensuring reliable CI/CD pipelines,...

    Project Description:

    Our client is a large international enterprise operating modern cloud-based platforms and services.

    The project focuses on building, improving, and operating Azure-based cloud environments, ensuring reliable CI/CD pipelines, infrastructure automation, and stable production operations.

    You will work closely with development teams, contributing to cloud platform evolution and supporting mission-critical systems in production.

     

    Responsibilities:

    • Provide Azure DevOps support for cloud solutions by setting up new environments and implementing CI/CD pipelines
    • Maintain, improve, and extend cloud deployment frameworks for existing products
    • Design, implement, and manage Azure DevOps pipelines
    • Apply Infrastructure as Code principles to automate cloud resources
    • Monitor cloud environments, detect issues, and resolve incidents
    • Support 3rd-level production issues and perform root cause analysis
    • Collaborate with development and operations teams to improve reliability and performance
    • Share DevOps best practices and knowledge with team members

     

    Mandatory Skills Description:

    • Minimum 3 years of relevant experience in DevOps on Azure
    • Bachelor's degree in Software Engineering or equivalent experience
    • Strong knowledge of main Azure components and Azure DevOps pipelines
    • Fluency in Python for development and scripting new DevOps solutions
    • Good knowledge of Docker containers
    • Experience with Infrastructure as Code tools (e.g. Bicep, SaltStack, or similar)
    • Ability to detect, analyze, and resolve issues in cloud and production environments
    • Experience supporting production systems and handling 3rd-level support cases
    • Ability to work independently in a remote setup
    • Open, proactive, and clear communication skills

     

    Nice-to-Have Skills Description:

    • Experience with Kubernetes-based deployments
    • Familiarity with configuration management tools
    • Familiarity with additional cloud automation and monitoring tools
    • Experience working in large-scale enterprise environments

     

    Languages:

    English: C1 Advanced

  • · 28 views · 4 applications · 13d

    Senior DevOps Engineer, Azure

    Ukraine · 5 years of experience · English - B2
    We're looking for an experienced Senior DevOps Engineer to join our growing team and support one of our key customers. You'll be working with a modern Azure-based cloud infrastructure, owning deployments, automation,...

    🚀 Join Our Team in Ivano-Frankivsk

    We're looking for an experienced Senior DevOps Engineer to join our growing team and support one of our key customers. You'll be working with a modern Azure-based cloud infrastructure, owning deployments, automation, monitoring & observability, and security. If you're passionate about building reliable, scalable systems and enjoy solving complex problems, this role is for you.

     

    About This Opportunity

    As part of a small, high-performing team, you won't get lost in layers of corporate processes. You'll work closely with senior engineers and directly impact the customer's production environment. You'll have autonomy to design and improve cloud infrastructure while ensuring systems remain secure, cost-efficient, and highly available.

    We value initiative and ownership: you'll have the chance to shape DevOps processes and mentor others while working in a supportive environment.

     

    What You'll Do

    • Design, implement, and manage cloud infrastructure on Microsoft Azure
    • Define and manage Infrastructure as Code with Terraform
    • Build and optimize CI/CD pipelines for application deployment and testing
    • Monitor system health, availability, and performance, proactively preventing downtime
    • Collaborate with developers to streamline release processes and ensure smooth delivery
    • Implement and maintain security best practices across cloud infrastructure
    • Optimize costs, scalability, and resilience of cloud environments
    • Troubleshoot complex infrastructure and deployment issues
    • Provide guidance and mentorship on DevOps best practices within the team

     

    What We're Looking For

     

    Required Skills

    • 5+ years of hands-on DevOps / Cloud Engineering experience
    • Strong experience with Azure services (AKS, App Services, Functions, Networking, Security)
    • Solid understanding of Terraform and Infrastructure as Code practices
    • Experience with CI/CD pipelines (GitHub Actions, Azure DevOps, or similar)
    • Knowledge of containerization & orchestration (Docker, Kubernetes)
    • Familiarity with monitoring & logging tools (Azure Monitor, Prometheus, Grafana, ELK, etc.)
    • Understanding of cloud security, IAM, and compliance practices
    • Ability to troubleshoot complex infrastructure problems end-to-end
    • English proficiency (Upper-Intermediate or higher)
    • Ability to work from our Ivano-Frankivsk office

     

    Nice to Have

    • Experience with multi-cloud setups (AWS, GCP)
    • Scripting/programming knowledge (Python, Go, Bash, PowerShell)
    • Knowledge of networking (VPCs, VPNs, firewalls, routing)
    • Familiarity with database management in the cloud (Postgres, SQL, Cosmos DB)

     

    Our Tech Stack

    • Cloud: Azure
    • IaC: Terraform
    • CI/CD: GitHub Actions, Azure DevOps
    • Containers: Docker, Kubernetes (AKS)

     

    What We Offer

     

    Work Environment

    • Small, ambitious team where your contributions are visible
    • Office in Promprylad, Ivano-Frankivsk
    • Flat structure, direct communication, no bureaucracy
    • Safe environment to innovate and experiment

     

    Benefits

    • Performance-based salary reviews as you develop
    • Flexible working hours
    • Paid time off and sick leave
    • Opportunity to make a real impact in a growing company

     

    Why Join Us?

    In a small company, your work has a direct impact. You'll be the driving force behind an initiative to bring the infrastructure under full IaC coverage, ensuring stability and scalability while building efficient DevOps processes. We're not looking for someone to just maintain; we need someone who thrives on building, optimizing, and innovating.

     

    How to Apply

    Send us:

    • Your CV/Resume
    • Links to any infrastructure projects, IaC repos, or automation work you're proud of
    • A short note on what excites you about cloud infrastructure and DevOps

     

    πŸ“ Location: Office-based in Promprylad, Ivano-Frankivsk
    πŸ• Start Date: As soon as possible

  • · 159 views · 29 applications · 13d

    Team Lead DevOps

    Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · English - B1
    NuxGame is a dynamic IT company delivering top-tier software solutions for the iGaming industry. We empower operators of all sizes to expand into new markets, strengthen their existing brands, and achieve ambitious business goals. We are looking for a...

    NuxGame is a dynamic IT company delivering top-tier software solutions for the iGaming industry. We empower operators of all sizes to expand into new markets, strengthen their existing brands, and achieve ambitious business goals.


    We are looking for a DevOps TL to join our team. If you are passionate about building resilient infrastructure, leading talented engineers, and ensuring high availability for high-load systems, this is the place for you.


    Tech Stack: AWS, Kubernetes, Terraform, Helm, GitHub Actions, VictoriaMetrics, Grafana, New Relic, Aurora, Redis, Cloudflare, Python/Bash.


    Your Responsibilities:

    You will be responsible for the reliability and scalability of our platform infrastructure, leading the team towards operational excellence.

    • Team Leadership: Lead, mentor, and grow the DevOps team. Set technical priorities, optimize delivery workflows, own people processes (hiring, performance reviews), and foster a culture of engineering excellence.
    • Infrastructure as Code: Design and maintain scalable, multi-account AWS infrastructure using Terraform and Helm. Drive the adoption of GitOps practices for consistent environment management.
    • Kubernetes Orchestration: Lead the development and support of high-availability EKS clusters, focusing on autoscaling, networking (CNI/Service Mesh), and resource optimization.
    • Operational Excellence: Support continuous delivery through optimized CI/CD pipelines. Ensure 99.9%+ availability via robust monitoring, incident response protocols, and proactive system performance tuning.
    • Security & Compliance: Strengthen security controls across the platform. Implement Zero-Trust principles, manage IAM/Secrets, and oversee Cloudflare security services (WAF, DDoS protection).
    • Database Operations: Ensure reliability of stateful services (MySQL/Aurora, Redis), automating backups, failover strategies, and performance tuning.
    • Cross-Functional Collaboration: Partner with Backend, Product, and QA teams to align infrastructure roadmap with business goals and eliminate bottlenecks in the development lifecycle.


    What we expect from you:

    • Experience: 5+ years in a DevOps role with 2+ years in a leadership position (managing team workflows, mentoring, reporting).
    • AWS Expertise: Deep understanding of AWS ecosystem (multi-account organizations, advanced networking, security best practices, and cost optimization).
    • Kubernetes Mastery: Advanced experience with EKS (HA networking, PodDisruptionBudgets, NodeGroups, Service Mesh, Cilium/eBPF).
    • IaC & GitOps: Strong proficiency with Terraform (multi-environment modules) and GitOps tools (FluxCD or ArgoCD).
    • CI/CD Strategies: Experience designing secure, scalable deployment pipelines (GitHub Actions, Jenkins) for PHP and Go applications.
    • Observability: Expert knowledge of monitoring stacks (Prometheus, Grafana, Alertmanager, New Relic, Sentry) and experience designing custom metrics/alerts strategies.
    • Security Fundamentals: Strong grasp of network segmentation, SSL/TLS, Secrets Management (AWS Secrets Manager/External Secrets Operator), and compliance basics.
    • Automation: Proficiency in Python or Bash for creating operational tooling and automation scripts.


    It will be a plus

    • Architecture: Experience with Multi-Region and Multi-Cloud architectures, cross-cluster networking, and disaster recovery strategies.
    • Streaming Data: Experience with Kafka/MSK or other event streaming systems.
    • Compliance: Experience with PCI DSS or equivalent compliance frameworks.
    • Performance: Deep understanding of Linux internals and advanced caching strategies (Redis, Cloudflare, S3 tiering).
    • Cost Management: Strong background in FinOps and cost optimization techniques across cloud providers.

    What we offer:

    We believe that a happy team builds the best products. Here is how we support you:
    • Remote & Flexible: Work from anywhere. Our core hours are 09:00/10:00 to 17:00/18:00 (Kyiv time), Mon-Fri.
    • Financial Stability: Timely payment for services.
    • Personal Equipment Policy: Equipment is provided to ensure comfortable and efficient work.
    • Knowledge Sharing: We regularly gather to discuss new trends, share insights, and elevate one another.
    • Community: At NuxGame, you will work in a team of like-minded people who are ready to support, inspire, and tackle complex challenges together.
    • Creative Freedom: We encourage initiative. With us, you have the freedom of professional expression and the space to implement your ideas.
    • Time-off Policy: 24 vacation days per year + 5 sick days (without medical confirmation).
    • Atmosphere: A friendly environment focused on results and mutual respect, free from unnecessary bureaucracy and pressure.


    We believe in the importance of unlocking the inner potential of each team member, and we have an open and democratic system of work organization.

    We are waiting for you on our team!

  • · 42 views · 3 applications · 13d

    DevOps Engineer 3 to $5000

    Hybrid Remote · Ukraine · Product · 5 years of experience · English - B2
    About Behavox Behavox is a cloud-native AI company providing an integrated controls platform for global banks, asset managers, hedge funds, private equity firms, insurance businesses, and commodity firms. The platform unifies communications and trade...

    About Behavox

     

    Behavox is a cloud-native AI company providing an integrated controls platform for global banks, asset managers, hedge funds, private equity firms, insurance businesses, and commodity firms. The platform unifies communications and trade surveillance, compliant archiving, policy management as well as front-office analytics on a single, AI-native technology stack, delivered as a globally scalable SaaS-based cloud service.

     

    At Behavox, our engineering culture is built around speed, experimentation, and technical excellence, following agile principles and rapid iteration. We constantly test and adopt the latest cloud technologies and AI tooling, optimising for fast feedback loops and execution. We look for people who can move fast, challenge conventional wisdom, and who want to work at the frontier of modern AI, SaaS platforms, and distributed systems.

     

    Behavox is a high-performance organisation with a strong bias toward delivery, ownership, and responsibility. We commit, and we execute. We are building systems that are complex, mission-critical, and global in scale; systems that many consider too large or too difficult. 

     

    To do that, we seek the smartest, most technically capable engineers and technologists who take end-to-end responsibility and want to win by building what others cannot.

     

    Founded in 2014 and backed by SoftBank Vision Fund, Behavox is headquartered in London, with offices worldwide, including New York City, Montreal, Seattle, Singapore, and Tokyo.

     

     

    About the Role

     

    The Behavox Platform is a scalable, fault-tolerant and highly performant storage and processing system which allows us to manage and analyze massive volumes of data. We have an extensive and flexible set of APIs to develop products that allow our clients to work through millions of data items, by searching, filtering, and visualizing relationships between entities in the system.

    As a DevOps Engineer, you will be responsible for the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of all production systems and services. You will work together with other SRE, Product, and Engineering teams to design and implement the SRE practice at Behavox, building the foundational infrastructure needed to support the rapid growth of the Behavox client base.

    This is an incredible opportunity to discover the world of high-load data processing and face the challenges of distributed Big Data systems. It will also give you the chance to:

    1. Work with high-load and business-critical services that will have a big impact on the company
    2. Implement your ideas in an environment that strives for continuous improvement
    3. Be part of a fast-growing, dynamic company working with modern technologies

    More information about the tools and solutions used at Behavox can be found on our engineering blog https://blog.behavox.engineering

     

     

    What You'll Bring

     

    1. Linux mastery (5+ years). You understand how the kernel works, not just how to use it. You're comfortable with: systemd, strace, system calls, inodes, iptables/netfilter, namespace isolation, cgroups, process management, filesystem internals. You can debug a hanging process or network issue from first principles.
    2. Kubernetes in production (3+ years). Not just "I deployed a pod once." You've run production K8s clusters, debugged CNI issues, understood resource limits and QoS, troubleshot DNS problems, dealt with pod evictions, and know when to use StatefulSets vs Deployments vs DaemonSets.
    3. Production troubleshooting and incident leadership. You've been paged at 3 AM and fixed it. You've led incidents as a DRI or Incident Commander, not just participated as a responder. You know how to methodically isolate failures in distributed systems, write blameless postmortems, and improve systems based on lessons learned. You can read application logs, correlate metrics, check network connectivity, profile resource usage, and find root causes under time pressure.
    4. Python or Golang (hands-on, production experience). You've built real automation tools, not just scripts. You understand error handling, testing, logging, and writing maintainable code that other engineers will use and modify.
    5. Cloud platforms (GCP required, AWS is a plus). Real production experience with Google Cloud (Compute Engine, GKE, Cloud Storage, IAM, VPC networking) or AWS equivalents. You've designed cloud architecture, optimized costs, and debugged cloud-specific issues.

     

     

    What You'll Do

     

    1. Be on-call and lead incident response. You'll carry the pager and act as Incident Commander or DRI during major outages. This means coordinating response teams, making decisive calls under pressure, running structured incident management (severity classification, communication, escalation, resolution, postmortems), and keeping stakeholders informed. You must know how to run an incident - not just fix technical problems.
    2. Deep troubleshooting. We believe in observability-first approaches with proper monitoring and metrics. But when observability doesn't give you the answer - when a Java service is leaking memory in Kubernetes, network packets are dropping mysteriously, or a production database is hitting inode limits - you need to be unafraid to go deeper. Grab strace, dive into kernel logs, check iptables rules, and analyze system calls. No handholding.
    3. Build real automation. Not bash one-liners. You'll write Python or Golang tools that solve complex operational problems - deployment automation, self-healing systems, capacity planning tools, incident response automation. Code that other engineers will depend on.
    4. Maintain high-load distributed systems. Our platform processes massive data volumes across GCP (primary) and AWS. You'll deploy, scale, monitor, and optimize these systems while keeping SLAs.
    5. Own the observability stack. Prometheus is your foundation. You'll design monitoring, write meaningful alerts (not alert spam), build dashboards that actually help during incidents, and implement quality control gates for AI services.

     

    What We Offer & Expect

     

    1. The opportunity to work on a global, mission-critical AI platform alongside the best engineers and technologists across multiple geographies.
    2. A role with real ownership and impact, building complex systems at scale in an environment that values speed, experimentation, and technical excellence.
    3. A highly attractive benefits package, including competitive cash compensation, an equity award aligned with long-term value creation, and comprehensive health insurance for employees and their families.
    4. A modern, comfortable office in central Lviv, with an expectation of working from the office three (3) days per week, reflecting our belief in strong in-person collaboration, while remaining flexible to accommodate occasional personal circumstances that may require working from home.
    5. A generous time-off policy of 30 days annually, plus public holidays and sick leave, recognizing the importance of sustained high performance.

     

    About Our Process

     

    Our selection process is designed to rigorously assess a candidate's depth of technical knowledge, problem-solving ability, and alignment with Behavox's mission and core values.

     

    As part of the process, candidates will first participate in a series of interviews focused on evaluating their technical expertise and engineering judgment. Candidates who successfully progress through these interviews will then be invited to complete a live technical exercise with a group of Behavox engineers and engineering managers. 

     

    The purpose of this live technical assessment is to validate the candidate's stated technical competencies and assess their ability to solve complex problems with speed, accuracy, and sound engineering judgment. Note that whenever possible, we aim to conduct interviews in person at our offices.

     

    We recognize and respect the time candidates invest in this process. In return, Behavox commits significant time and resources to ensure that those who join us have the capability, judgment, and alignment required to operate at the speed and level of complexity our work demands. We value efficiency and clarity on both sides; if at any point we determine that a candidate is not a fit, we reserve the right to immediately conclude the interview or the technical assessment.

     

    Please note the following:

    • A core objective of the process is to objectively assess individual knowledge and competencies. The use of AI tools or external assistance during live interviews or technical exercises is strictly prohibited (unless explicitly instructed otherwise) and will result in immediate disqualification.
    • Interviews and technical sessions may be recorded for internal review to support fairness, consistency, and collaborative decision-making within the hiring team.
  • · 27 views · 2 applications · 13d

    Senior Azure DevOps Engineer

    Full Remote · Ukraine · 3 years of experience · English - C1
    Our client is a large international enterprise operating modern cloud-based platforms and services. The project focuses on building, improving, and operating Azure-based cloud environments, ensuring reliable CI/CD pipelines, infrastructure automation,...

    Our client is a large international enterprise operating modern cloud-based platforms and services.

    The project focuses on building, improving, and operating Azure-based cloud environments, ensuring reliable CI/CD pipelines, infrastructure automation, and stable production operations.

    You will work closely with development teams, contributing to cloud platform evolution and supporting mission-critical systems in production.

     

    Responsibilities

    Provide Azure DevOps support for cloud solutions by setting up new environments and implementing CI/CD pipelines

    Maintain, improve, and extend cloud deployment frameworks for existing products

    Design, implement, and manage Azure DevOps pipelines

    Apply Infrastructure as Code principles to automate cloud resources

    Monitor cloud environments, detect issues, and resolve incidents

    Support 3rd-level production issues and perform root cause analysis

    Collaborate with development and operations teams to improve reliability and performance

    Share DevOps best practices and knowledge with team members

     

    Skills

    Must have

    Minimum 3 years of relevant experience in DevOps on Azure

    Bachelor's degree in Software Engineering or equivalent experience

    Strong knowledge of main Azure components and Azure DevOps pipelines

    Fluency in Python for development and scripting new DevOps solutions

    Good knowledge of Docker containers

    Experience with Infrastructure as Code tools (e.g. Bicep, SaltStack or similar)

    Ability to detect, analyze, and resolve issues in cloud and production environments

    Experience supporting production systems and handling 3rd-level support cases

    Ability to work independently in a remote setup

    Open, proactive, and clear communication skills

     

    Nice to have

    Experience with Kubernetes-based deployments

    Familiarity with configuration management tools

    Familiarity with additional cloud automation and monitoring tools

    Experience working in large-scale enterprise environments

  • · 84 views · 21 applications · 13d

    DevOps for GCP/GKE with security skills to $4000

    Full Remote · Countries of Europe or Ukraine · Product · 3 years of experience · English - B2
    We are looking for a Google Cloud Platform DevOps engineer with an interest in security and automation. We need to protect hundreds of GKE cluster nodes with different security tools, automate their deployment, compliance checks for the infrastructure and...

    We are looking for a Google Cloud Platform DevOps engineer with an interest in security and automation. We need to protect hundreds of GKE cluster nodes with different security tools, automate their deployment, and run compliance checks and log analysis for the infrastructure.

     

    Your primary tasks will be:

    - tuning open-source security tools for Kubernetes/GKE

    - analyzing security events and taking appropriate measures

    - development of Terraform configurations/modules which manage security components of Google Cloud Platform

    - development of Helm charts

    - reviewing user access requests

    - maintaining the network security rules

    - maintaining the integration between GCP and other identity providers (Federated identity)

    - development of shell scripts to automate routine tasks

    - we also have islands of Azure infrastructure, so the security of those services will be under your control as well

     

    We expect from you:

    * perfect knowledge of data structures, computational complexity, and algorithms

    * clean coding

    * strong team player

    * ability to take ownership of features you ship

    * interest in the feature/product vision

    * being a self-learner with a strong can-do attitude and great interpersonal skills

    * being comfortable with writing documentation (a lot of documentation)

     

    A big plus is:

    - Google Cloud certification

    - Kubernetes certification

    - Terraform certification

    - excellent knowledge of BigQuery SQL

    - experience with SSO, OAuth2, OIDC, SAML

  • · 26 views · 3 applications · 13d

    Senior Azure DevOps Engineer (business trips)

    Full Remote · Ukraine · 5 years of experience · English - B2
    Our client is a large international enterprise operating modern cloud-based platforms and services. The project focuses on building, improving, and operating Azure-based cloud environments, ensuring reliable CI/CD pipelines, infrastructure automation,...

    Our client is a large international enterprise operating modern cloud-based platforms and services.

    The project focuses on building, improving, and operating Azure-based cloud environments, ensuring reliable CI/CD pipelines, infrastructure automation, and stable production operations.

    You will work closely with development teams, contributing to cloud platform evolution and supporting mission-critical systems in production.

    Responsibilities:

      • Provide Azure DevOps support for cloud solutions by setting up new environments and implementing CI/CD pipelines
      • Maintain, improve, and extend cloud deployment frameworks for existing products
      • Design, implement, and manage Azure DevOps pipelines
      • Apply Infrastructure as Code principles to automate cloud resources
      • Monitor cloud environments, detect issues, and resolve incidents
      • Support 3rd-level production issues and perform root cause analysis
      • Collaborate with development and operations teams to improve reliability and performance
      • Share DevOps best practices and knowledge with team members

    Mandatory Skills Description:

      • Minimum 3 years of relevant experience in DevOps on Azure
      • Bachelor's degree in Software Engineering or equivalent experience
      • Strong knowledge of main Azure components and Azure DevOps pipelines
      • Fluency in Python for development and scripting new DevOps solutions
      • Good knowledge of Docker containers
      • Experience with Infrastructure as Code tools (e.g. Bicep, SaltStack, or similar)
      • Ability to detect, analyze, and resolve issues in cloud and production environments
      • Experience supporting production systems and handling 3rd-level support cases
      • Ability to work independently in a remote setup
      • Open, proactive, and clear communication skills

    Nice-to-Have Skills Description:

      • Experience with Kubernetes-based deployments
      • Familiarity with configuration management tools
      • Familiarity with additional cloud automation and monitoring tools
      • Experience working in large-scale enterprise environments
