Senior DevOps Engineer
Full Remote · Countries of Europe or Ukraine · Product · 5 years of experience · B1 - Intermediate
About Fintech Farm
We are a UK fintech creating successful neobanks in emerging markets in partnership with local traditional banks. The mission is to make banking services accessible, simple, and fun to use worldwide, and the goal is to launch neobanks in 50+ markets, serving 100m+ customers.
Our success builds upon a best-in-class product, customer experience, emotional engagement, viral marketing and deep credit-decisioning expertise across our product suite covering credit, payments, savings and investments. One of our founders also previously co-founded a highly successful Eastern European neobank with a multi-million customer base.
We launched our first market with Leobank in Azerbaijan in 2021, where we've already taken a leading market position. Our next market was Vietnam, where we launched Liobank in early 2023 and have also gained strong traction. We have several more markets on the roadmap for the next 12 months and are starting to build out teams there.
Why Fintech Farm is a great place to be
Our ambition. We are looking to become a leading consumer digital bank brand in each market where we operate, making it easy for consumers to interact with their money. You could be a part of this exciting journey.
Our culture
Customers. We always go above and beyond to provide an amazing customer experience. We serve our customers the way we would want our mom to be served. And who said that banking has to be boring? We make our apps not just easy but fun to use.
People. We are all business partners in our company. Each of us thinks big, acts as if we own the place, and never takes 'no' for an answer. We work with strong individuals whom we empower and trust rather than micromanage. Common sense rather than formal policies prevails in all that we do. We always stay curious and open-minded. We embrace the 'we over me' culture.
Your Role:
As a Senior DevOps Engineer, you will play a critical role in designing, implementing, and managing robust infrastructure solutions for our multi-market microservices architecture. You'll work across cloud environments, container orchestration platforms, and observability tools to ensure performance, scalability, and reliability.
What You Will Be Doing:
- Design, implement, and manage Infrastructure as Code using Terraform/Terragrunt on AWS
- Manage and optimize Kubernetes clusters (on-premises and AWS EKS) and deploy services with Helm
- Develop and maintain CI/CD pipelines using GitLab CI/CD
- Implement and manage Istio service mesh to enhance microservices networking and observability
- Secure and manage sensitive information using Vault and AWS Secrets Manager
- Monitor infrastructure performance and availability using VictoriaMetrics, Grafana, ELK, and Alert Manager
- Participate in the on-call duty schedule and respond to infrastructure-related production incidents
- Optimize and maintain databases (PostgreSQL, MongoDB, Redis) for high availability and performance
- Collaborate with development teams to support fast, safe, and efficient delivery pipelines
- Contribute to a DevSecOps culture with a strong focus on security standards and operational best practices
Who You Are:
- Autonomous and accountable: you own your work from idea to implementation
- Always curious: you embrace new technologies and love to automate
- Strong communicator: you work well with developers, SREs, and business teams
- Calm under pressure: you can diagnose and resolve critical issues quickly
- Collaborative: you value feedback and contribute to a high-performing team culture
Your Experience:
- 5+ years as a DevOps Engineer, working in production-grade environments
- Strong hands-on experience with AWS services and Kubernetes (EKS and on-prem)
- Proficiency in Infrastructure as Code with Terraform and Terragrunt
- Solid scripting experience (Python and Bash preferred)
- Familiar with CI/CD tooling (GitLab CI/CD, Ansible) and GitOps principles (Flux, ArgoCD)
- Hands-on with Helm, Istio, Vault, VictoriaMetrics, Grafana
- Experience in managing relational and NoSQL databases (PostgreSQL, MongoDB, Redis)
- Good understanding of DevSecOps practices and relevant security and compliance standards (e.g., ISO 27001, SOC 2, PCI DSS)
- Familiarity with observability and alerting stack (ELK, Alert Manager, PagerDuty)
What we are offering
- Competitive salary, negotiable depending on the candidate's level
- Share options
- We are still a start-up and more benefits are on the way
COO / Chief of Staff at Profitable Healthcare Startup to $12000
Full Remote · Countries of Europe or Ukraine · 3 years of experience · C1 - Advanced
Hiring a Chief of Staff with fluent English who is smart and meticulous. Benefits: top pay, vital healthcare service, happy customers, stable employment, unlimited vacation.
As a Chief of Staff, you'll work very closely with a Y Combinator-funded CEO to support them across a wide range of projects. You'll see how to build a profitable, capital-efficient business and learn the skills you need to be a founder or CEO in the future.
About us:
- Stealth-mode healthcare startup. We make it easy and cheap to buy high-quality healthcare services so that every patient can get the care they need.
- Small, Remote Team. We're a 10-person team based in the US and Ukraine.
- Profitable. So we can offer stable employment.
Job Benefits:
- High salary. $9,000 - $12,000 (gross) per month.
- Learning. Learn how every aspect of a successful startup works.
- Mission-driven. Help build the Amazon for healthcare; make healthcare affordable and accessible.
- Work remotely. Remote-first team, with flexibility as to when and how you work.
- Great product. Customers love our product.
- Unlimited vacation days. Take time when you need it.
What you'll do:
- Recruiting. Hire top 1% engineers and support agents.
- Special projects. Drive forward strategic projects and initiatives.
- Manage vendors. Manage key vendor relationships.
- Complex customer support. Handle the most sensitive customer problems.
- Admin. You'll process payroll and take meeting notes.
- US East Coast hours. You'll work US East Coast hours, Monday through Friday.
Job requirements:
- Fluent English. Both written and spoken.
- Operational excellence. You are disciplined, create and follow checklists, and get all TODOs done.
- Ability to influence people. You get other people to do what you want.
- Strong interest in business. You enjoy consuming an extraordinary amount of business content.
- Perfectionist personality. You enjoy quality and craftsmanship for their own sake.
A technical background is nice to have, but not required.
Recruiting process:
- An online assessment followed by three interviews over Zoom.
- We'll reply within 1 business day after each recruitment step.
Senior Backend Data Engineer (Node.js) to $7000
Full Remote · Poland · Product · 5 years of experience · B2 - Upper Intermediate
Who We Are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our platform provides organizations with real-time visibility and control over their digital environments, enabling IT teams to detect, diagnose, and resolve issues before they impact employees. It integrates multiple products into a single, unified experience, leveraging AI-driven automation, intelligent data processing, and scalable architecture to enhance productivity across global workplaces. The DEX Platform team builds the core infrastructure that powers these capabilities, delivering high-performance backend services and advanced data pipelines at scale.
About the Role:
We are looking for an experienced Senior Backend Data Engineer to join our advanced employee experience monitoring and optimization platform and take a leading role in building the next generation of our data infrastructure. This role involves designing and implementing large-scale, real-time data pipelines and backend services that support both operational workflows and AI-driven insights. You will work end-to-end, covering architecture, development, deployment, and ongoing production monitoring, while collaborating closely with backend, AI, and data specialists to ensure high performance, scalability, and reliability.
Key Responsibilities:
- Design, develop, and maintain robust backend services and data processing pipelines for large-scale, real-time environments.
- Build and optimize streaming solutions using technologies like Kafka, Flink, and other stream-processing frameworks.
- Own the full lifecycle of services: architecture, implementation, deployment, monitoring, and scaling.
- Collaborate with cross-functional teams, including backend engineers, AI developers, and data analysts, to deliver production-ready solutions.
- Ensure compliance, security, and observability for all data-related systems.
- Work with cloud infrastructure to design and deploy scalable solutions.
- Troubleshoot and resolve production issues with a focus on high availability and system resilience.
Required Competence and Skills:
- 5+ years of backend/data engineering experience.
- Strong proficiency in Node.js (NestJS/Express).
- Familiarity with Java (Vert.x or Spring) will be a big advantage.
- Solid understanding of microservices architecture and cloud platforms (Azure, AWS, or GCP).
- Hands-on experience with Kafka and streaming frameworks such as Kafka Streams, Flink, Spark, or Beam.
- Strong foundation in object-oriented design, design patterns, and clean code principles.
- Experience in production-aware environments, including monitoring, troubleshooting, and optimization.
- Comfortable designing, deploying, and maintaining backend services and data flows.
- Passion for continuous learning, experimenting with new technologies, and building reliable systems at scale.
- Strong product mindset, open-mindedness, and flexibility to work with different technologies according to company needs.
- Excellent communication skills in English (Hebrew a plus).
- Team player with a positive attitude and a passion for delivering high-quality products.
Nice to have:
- Familiarity with AI-first development tools (e.g., GitHub Copilot, Cursor).
- Knowledge of Postgres, Redis, or ClickHouse.
Senior Data Scientist to $9000
Full Remote · Ukraine, Poland, Portugal, Bulgaria · Product · 5 years of experience · B2 - Upper Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Our client is a leading SaaS company offering pricing optimization solutions for e-commerce businesses. Its advanced technology utilizes big data, machine learning, and AI to assist customers in optimizing their pricing strategies and maximizing their profits.
About the Role:
As a Data Scientist, you'll play a critical role in shaping and enhancing our AI-driven pricing platform.
Key Responsibilities:
- Develop and Optimize Advanced ML Models: Build, improve, and deploy machine learning and statistical models for forecasting demand, analyzing price elasticities, and recommending optimal pricing strategies.
- Lead End-to-End Data Science Projects: Own your projects fully, from conceptualization and experimentation through production deployment, monitoring, and iterative improvement.
- Innovate with Generative and Predictive AI Solutions: Leverage state-of-the-art generative and predictive modeling techniques to automate complex pricing scenarios and adapt to rapidly changing market dynamics.
Required Competence and Skills:
- A Masterβs or PhD in Computer Science, Physics, Applied Mathematics or a related field, demonstrating a strong foundation in analytical thinking.
- At least 5 years of professional experience in end-to-end machine learning lifecycle (design, development, deployment, and monitoring).
- At least 5 years of professional experience with Python development, including OOP, writing production-grade code, testing, and optimization.
- At least 5 years of experience with data mining, statistical analysis, and effective data visualization techniques.
- Deep familiarity with modern ML/DL methods and frameworks (e.g., PyTorch, XGBoost, scikit-learn, statsmodels).
- Strong analytical skills combined with practical experience interpreting model outputs to drive business decisions.
Nice-to-Have:
- Practical knowledge of SQL and experience with large-scale data systems like Hadoop or Spark.
- Familiarity with MLOps tools and practices (CI/CD, model monitoring, data version control).
- Experience in reinforcement learning and Monte-Carlo methods.
- A solid grasp of microeconomic principles, including supply and demand dynamics and price elasticity, as well as econometrics.
- Experience with cloud services and platforms, preferably AWS.
DevOps Manager – Infrastructure and Edge Operations
Full Remote · Worldwide · 8 years of experience · C1 - Advanced
On behalf of our Client, Scalors is looking for a DevOps Manager to join a full-remote team for a full-time position.
About the Client:
Our Client delivers cutting-edge software solutions for the cruise and hospitality industries, driving efficiency and reliability across mission-critical systems. Our Infrastructure & Edge Operations team ensures that our global platforms run smoothly, securely, and at scale.
The Role:
We're looking for a DevOps Manager to lead our Infrastructure & Edge Operations team. In this role, you'll oversee a group of talented engineers, guide technical strategy, and ensure smooth, reliable delivery of our infrastructure services. You'll drive best practices in automation, monitoring, and reliability engineering, while mentoring your team to grow and succeed.
Responsibilities:
• Lead and mentor a team of DevOps Engineers.
• Define and implement technical strategy for infrastructure and edge operations.
• Drive adoption of SRE and automation practices across the organization.
• Establish and improve CI/CD, observability, and incident response processes.
• Collaborate with product, QA, and development teams to ensure operational excellence.
• Manage and optimize cloud and edge environments for scalability and security.
• Foster a culture of ownership, accountability, and continuous improvement.
Requirements:
• Proven experience in a DevOps leadership role (Manager, Lead, or similar).
• Strong technical expertise in cloud platforms (AWS, Azure, or GCP) and container orchestration (Kubernetes, Docker).
• Experience with Terraform, Ansible, CI/CD pipelines, and monitoring tools.
• Solid understanding of networking, edge computing, and system security.
• Strong people leadership skills: mentoring, coaching, and team building.
• Excellent communication and cross-functional collaboration abilities.
Nice to Have:
• Experience scaling infrastructure for global, distributed systems.
• Background in SRE (Site Reliability Engineering) practices.
• Familiarity with regulated environments and compliance frameworks.
What Success Looks Like:
• A motivated and high-performing DevOps team.
• Infrastructure that scales efficiently and reliably across edge and cloud environments.
• Measurable improvements in system uptime, performance, and release cycles.
• Clear alignment between technical strategy and business goals.
Why Join Us:
• Shape the future of infrastructure powering the cruise and hospitality industries.
• Lead a talented global team working with cutting-edge technologies.
• Competitive compensation, career growth opportunities, and flexible work models.
Java Team Lead to $7500
Full Remote · Ukraine · Product · 6 years of experience · B2 - Upper Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Over the past few years, Coro has received $275M in funding and is one of the fastest-growing cybersecurity companies in the world. The funding is primarily being used to enhance the Coro Cybersecurity SaaS platform and for additional headcount growth as Coro continues to expand globally.
Coro started in Tel Aviv, Israel, and is also headquartered in Chicago, IL, with additional offices in New York, London, and remote teams across the globe. As a global organization, Coro gives you the ability to work with people and teammates from around the world.
Coro's AI-enabled Modular Cybersecurity Platform is the only one in the industry specifically designed to provide mid-market customers with scalable and affordable "enterprise-grade" protection for all of their priority threat vectors.
About the Role:
This is a hands-on leadership role for a Backend Team Lead who will guide a team of backend engineers while staying directly involved in coding, architecture, and delivery. You'll lead Coro's EDR (Endpoint Detection & Response) team, working on real high-load systems that process millions of telemetry events daily from customer devices and cloud providers. These systems detect anomalies, identify malicious processes, and trigger automated remediation, making this a mission-critical leadership role at the core of Coro's product. If you are looking for a role where you can combine leadership with coding, shape team culture, and build highly scalable backend services for a fast-growing global cybersecurity platform, this is the role for you.
Key Responsibilities:
- Lead a team of highly skilled and passionate developers
- Partner with the Product Manager to build the road-map for the team
- Understand the team's objectives and key results and make sure that the team is on the path to achieving them
- Participate in making decisions on the architecture and technologies
- Conduct team ceremonies: daily stand-ups, sprint planning, and retrospective meetings
- Help create and enforce strong engineering standards, code quality, and best engineering practices
- Take an active part in the implementation (hands-on) of the teamβs various services and features
- Help your team members grow by mentoring them, building their growth plans with them, and enabling opportunities to implement them
Required Competence and Skills:
- 6+ years of backend/server-side development with Java
- 2+ years of proven experience leading an engineering team
- Strong hands-on experience with Spring Framework and MongoDB
- Deep understanding of multithreading and concurrency
- Solid knowledge of OOP concepts and design patterns
- Proven experience with microservices architecture
- Experience working with high-load backend systems
- Proficiency in cloud-native environments (preferably AWS)
Nice to have:
- Experience with Redis and RabbitMQ
- Experience with Elasticsearch/OpenSearch
- Familiarity with Docker / Kubernetes
- Experience in Cybersecurity domain
Penetration Tester
Full Remote · Ukraine · Product · 2 years of experience
A big product software company is looking for a Penetration Tester. Remote work, high salary + financial bonuses (up to 100% of the salary), regular salary review, interesting projects, good working conditions.
REQUIREMENTS:
- More than 2 years of software penetration testing experience;
- Practical skills with C/C++;
- Higher education;
- Technical English (a higher level is an advantage).
COMPANY OFFERS:
- Employment under a gig contract, with all taxes paid;
- Flexible working hours;
- 28 days of paid vacation + 15 days at your own expense;
- Paid sick leave;
- Medical insurance (with dentistry and optics), including children;
- Opportunity to become an inventor of international patents, with paid bonuses;
- Career and professional growth;
- In-house base of courses and trainings;
- Office in the Kyiv city centre or fully remote work;
- Provision of necessary up-to-date equipment;
- Regular salary review and financial bonuses (up to 100% of the salary);
- Bonuses for weddings, the birth of children, and other significant events;
- Paid maternity leave;
- Paid lunches, tea, coffee, water, and snacks;
- Discounts on the company's products and services.
Security (Penetration) Engineer to $7200
Full Remote · Ukraine, Poland · Product · 4 years of experience · B2 - Upper Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Over the past few years, our client, Coro, has received $275M in funding and is one of the fastest-growing cybersecurity companies in the world. The funding is primarily being used to enhance the Coro Cybersecurity SaaS platform and for additional headcount growth as Coro continues to expand globally.
Coro started in Tel Aviv, Israel, and is also headquartered in Chicago, IL, with additional offices in New York, London, and remote teams across the globe. As a global organization, Coro gives you the ability to work with people and teammates from around the world.
Coro's AI-enabled Modular Cybersecurity Platform is the only one in the industry specifically designed to provide mid-market customers with scalable and affordable "enterprise-grade" protection for all of their priority threat vectors.
About the Role:
At Coro, you'll own email security research end-to-end: dissecting phishing and malicious email campaigns, pulling apart attacker tradecraft, and building detection logic that scales to protect thousands of customers daily. You'll have sole responsibility for this domain, which means autonomy, accountability, and the ability to directly influence how Coro stops attackers.
Key Responsibilities:
- Own email threat research and detection: analyze real-world phishing and malicious emails, identify attacker techniques, and translate findings into new or improved detection rules.
- Maintain and extend internal detection tools: debug and develop multi-file Python/Shell scripts used daily by researchers and engineers.
- Collaborate across teams: work with Engineering, Product, and Data to turn research insights into production-grade features and detection logic.
- SIEM/log analysis: Investigate incidents by reviewing logs from customer devices, understanding tool internals, and suggesting improvements to log collection and analysis.
- Stay ahead of attackers: monitor emerging threats and protocols (SMTP, HTTP/HTTPS), apply knowledge of OS internals (especially Windows), and feed new insights into the detection engine.
Required Competence and Skills:
- At least 4 years in security research or closely related fields
- Previous experience with aspects of security in networks, software, and/or hardware
- Experience with Python and/or a similar language to debug and maintain multi-file research tools
- Solid understanding of email and network protocols
- Hands-on experience with OS internals
- Ability to interpret logs, understand how SIEM tools work, and apply findings to incident investigation
- Security mindset and autonomy, ability to proactively spot and raise vulnerabilities, work independently, and communicate risks.
Nice to have:
- Perl/Shell experience.
- Reverse engineering skills
- Malware analysis background
- Familiarity with vulnerability assessment tools
- Cloud security (AWS or others).
Senior DevOps Engineer (AWS) to $7300
Full Remote · Ukraine · Product · 5 years of experience · B2 - Upper Intermediate
Who we are:
Adaptiq is a technology hub specializing in building, scaling, and supporting R&D teams for high-end, fast-growing product companies in a wide range of industries.
About the Product:
Over the past few years, our client, Coro, has received $275M in funding and is one of the fastest-growing cybersecurity companies in the world. The funding is primarily being used to enhance the Coro Cybersecurity SaaS platform and for additional headcount growth as Coro continues to expand globally.
Coro started in Tel Aviv, Israel, and is also headquartered in Chicago, IL, with additional offices in New York, London, and remote teams across the globe. As a global organization, Coro gives you the ability to work with people and teammates from around the world.
Coro's AI-enabled Modular Cybersecurity Platform is the only one in the industry specifically designed to provide mid-market customers with scalable and affordable "enterprise-grade" protection for all of their priority threat vectors.
About the Role:
As a Senior DevOps Engineer in this high-trajectory environment, you'll shape the future of multi-region cloud infrastructure with direct influence and ownership. You'll revolutionize how we deliver infrastructure at scale: designing globally distributed, highly available systems, fortifying observability and automation, and raising reliability standards.
This is a great opportunity to join a fast-growing cybersecurity company and make a real impact.
Key Responsibilities:
- Take ownership of infrastructure and deployment pipelines, working hands-on to design, build, and scale cloud-native environments
- Collaborate with developers, QA, architects, and product teams to ensure seamless integration and delivery across microservices
- Build and maintain secure, observable infrastructure using Terraform, Terragrunt, Helm, Kubernetes, and GitOps tools like ArgoCD
- Develop and optimize CI/CD pipelines using GitLab CI, ensuring automated, safe, and efficient deployments
- Continuously improve performance, reliability, and scalability through monitoring, alerts, and proactive infrastructure changes
- Drive infrastructure enhancements via POCs, innovation, and tool evaluation
- Lead root cause analysis and troubleshooting of incidents; ensure post-mortem discipline and recovery improvements
- Mentor engineers and promote a culture of ownership, collaboration, and continuous improvement
- Stay current with industry trends, emerging tools, and DevOps best practices
Required Competence and Skills:
- 5+ years of hands-on experience in DevOps, SRE, or infrastructure engineering roles
- Strong motivation and proven ability to thrive in complex, cross-functional environments
- Deep expertise in AWS (EKS, EC2, RDS, S3, IAM, CloudWatch, etc.)
- Production-grade experience with Kubernetes and Helm
- Proficient with Terraform
- Experience with Bash and Python scripting
- Strong CI/CD background using GitLab CI, TeamCity, or similar
- Solid Linux administration skills and deep knowledge of Docker/container security
- Experience with observability stacks: Prometheus, Grafana, Coralogix (or other monitoring and log management systems)
- Understanding of networking
- Excellent communication skills and a collaborative mindset with a desire to lead infrastructure strategy discussions
Nice to have:
- Experience with Terragrunt and Ansible
- Familiarity with secrets management using Vault, AWS KMS, or similar
- Understanding of service discovery, zero-downtime deployments, and SOC 2-compliant environments
Security Researcher (Embedded)
Full Remote · Ukraine · Product · 2 years of experience
A big product software company is looking for a Security Researcher (Embedded). Remote work, high salary + financial bonuses (up to 100% of the salary), regular salary review, interesting projects, good working conditions.
REQUIREMENTS:
- 2+ years of relevant professional experience;
- Practical experience in reverse engineering;
- Hardware skills;
- Knowledge of C/C++;
- Higher education;
- Technical English (a higher level is an advantage).
COMPANY OFFERS:
- Employment under a gig contract, with all taxes paid;
- Flexible working hours;
- 28 days of paid vacation + 15 days at your own expense;
- Paid sick leave;
- Medical insurance (with dentistry and optics), including children;
- Opportunity to become an inventor of international patents, with paid bonuses;
- Career and professional growth;
- In-house base of courses and trainings;
- Office in the Kyiv city centre or fully remote work;
- Provision of necessary up-to-date equipment;
- Regular salary review and financial bonuses (up to 100% of the salary);
- Bonuses for weddings, the birth of children, and other significant events;
- Paid maternity leave;
- Paid lunches, tea, coffee, water, and snacks;
- Discounts on the company's products and services.
Mobile Assessment Engineer
Full Remote · Ukraine · Product · 2 years of experience
A big product software company is looking for a Mobile Assessment Engineer. Remote work, high salary + financial bonuses (up to 100% of the salary), regular salary review, interesting projects, good working conditions.
REQUIREMENTS:
- 2+ years of relevant professional experience;
- Deep understanding of Android security architecture;
- Practical experience in reverse engineering;
- Knowledge of C/C++, Java;
- Higher education;
- Technical English (a higher level is an advantage).
COMPANY OFFERS:
- Employment under a gig contract, with all taxes paid;
- Flexible working hours;
- 28 days of paid vacation + 15 days at your own expense;
- Paid sick leave;
- Medical insurance (with dentistry and optics), including children;
- Opportunity to become an inventor of international patents, with paid bonuses;
- Career and professional growth;
- In-house base of courses and trainings;
- Office in the Kyiv city centre or fully remote work;
- Provision of necessary up-to-date equipment;
- Regular salary review and financial bonuses (up to 100% of the salary);
- Bonuses for weddings, the birth of children, and other significant events;
- Paid maternity leave;
- Paid lunches, tea, coffee, water, and snacks;
- Discounts on the company's products and services.
Senior Data Platform Engineer
Full Remote · Ukraine · 7 years of experience · B2 - Upper Intermediate
We are seeking an exceptional Senior Data Platform Engineer to architect and develop highly scalable data infrastructure for a fast-growing AI-powered vertical software startup revolutionizing retail and commerce operations. This role requires designing robust data systems capable of handling massive financial and operational datasets while delivering real-time insights through advanced AI integration. You will work with cutting-edge data technologies to build the backbone that powers financial forecasting, demand planning, and inventory optimization for leading e-commerce brands at enterprise scale.
Key Responsibilities:
- Design and implement highly scalable data warehouse architecture and ELT/ETL pipeline systems
- Build robust data processing pipelines capable of handling massive datasets from diverse eCommerce sources
- Develop real-time data quality monitoring and fault tolerance systems across the entire data stack
- Design secure, performant data integrations from external sources to enrich the Drivepoint platform
- Optimize database queries and pipeline performance for speed and cost efficiency at scale
- Implement new customer-facing features using TypeScript and Python across multiple product areas
- Integrate AI/LLM-powered analytics capabilities into data processing workflows
- Collaborate with cross-functional teams including product managers, customer success, and business stakeholders
- Ensure data integrity, security, and compliance across all financial and operational data processing
Required Skills and Qualifications:
- 6+ years of experience in data engineering with strong expertise in SQL-based data warehouses (BigQuery or Snowflake)
- Extensive experience building and optimizing ELT/ETL pipelines using Airbyte/Fivetran, DBT, and Airflow
- 4+ years of professional software engineering experience with TypeScript and/or Python
- Proven track record in designing and building high-scale distributed data systems
- Strong database design and optimization skills for performance-critical financial analytics applications
- Experience with modern SDLC frameworks and the ability to deliver both infrastructure and customer-facing features
- Demonstrated ability to architect systems for massive scale, reliability, and fault tolerance
- Strong analytical and problem-solving capabilities with complex data processing challenges
- Excellent communication skills and collaborative approach to technical and business challenges
- Ability to thrive in a fast-paced startup environment with rapidly evolving requirements
Preferred Skills and Qualifications:
- Direct experience with financial data processing, forecasting, or eCommerce analytics platforms
- Experience integrating AI/ML models and building AI agent-powered data processing features
- Familiarity with real-time data streaming and event-driven architectures
- Experience working with retail, eCommerce, or financial technology data at scale
- Knowledge of data governance, quality frameworks, and observability best practices
- Experience with cost optimization strategies for large-scale data processing workloads
- Understanding of financial modeling, forecasting methodologies, and business intelligence requirements
- Comfortable using modern AI tools to accelerate development and problem-solving workflows
DevOps Engineer – Infrastructure and Edge Operations
Full Remote · Worldwide · 6 years of experience · C1 - Advanced
Description
On behalf of our Client, Scalors is looking for a DevOps Engineer to join a full-remote team for a full-time position.
About the Client:
Our Client delivers cutting-edge software solutions for the cruise and hospitality industries, driving efficiency and reliability across mission-critical systems. Our Infrastructure & Edge Operations team ensures that our global platforms run smoothly, securely, and at scale.
The Role:
As a DevOps Engineer, you'll be part of a dynamic team responsible for building, maintaining, and optimizing infrastructure that powers our edge operations worldwide. You will collaborate with developers, QA engineers, and product teams to ensure resilient deployments, high availability, and smooth operations.
Responsibilities:
• Build and maintain scalable, reliable infrastructure for edge and cloud systems.
• Implement CI/CD pipelines and automate deployments.
• Monitor, troubleshoot, and optimize system performance.
• Support edge operations across distributed environments.
• Collaborate with cross-functional teams to improve reliability and efficiency.
• Ensure security and compliance standards are met.
Requirements:
• Proven experience as a DevOps Engineer or in a similar role.
• Strong knowledge of cloud platforms (AWS, Azure, or GCP).
• Hands-on experience with CI/CD, Kubernetes, Docker, Terraform, Ansible (or similar tools).
• Familiarity with monitoring tools (Prometheus, Grafana, ELK, etc.).
• Strong scripting skills (Python, Bash, or similar).
• Knowledge of networking, edge computing, and system security.
Nice to Have:
• Experience in edge computing or distributed systems.
• Exposure to Site Reliability Engineering (SRE) practices.
• Background in high-availability systems for mission-critical applications.
What Success Looks Like:
• Systems run with high reliability and minimal downtime.
• Automated deployments reduce errors and speed up delivery.
• Edge operations remain secure, scalable, and efficient.
• Collaboration leads to smoother development and release cycles.
Why Join Us:
• Work on cutting-edge infrastructure at the intersection of cloud and edge.
• Be part of a collaborative and innovative global team.
• Competitive compensation, flexible work model, and growth opportunities.
Apply now and help us power the future of cruise and hospitality technology!
MLOps Engineer
Hybrid Remote · Ukraine (Kyiv) · Product · 2 years of experience · B2 - Upper Intermediate
About us:
Data Science UA is a service company with strong data science and AI expertise. Our journey began in 2016 by uniting top AI talent and organizing the first Data Science tech conference in Kyiv. Over the past 9 years, we have diligently fostered one of the largest Data Science & AI communities in Europe.
About the client:
Our client is an IT company that develops technological solutions and products to help companies reach their full potential and meet the needs of their users. The team comprises over 600 specialists in IT and Digital, with solid expertise in various technology stacks necessary for creating complex solutions.
About the role:
We are looking for an MLOps Engineer specializing in Large Language Model (LLM) infrastructure to design and maintain the robust platform on which the AI models are developed, deployed, and monitored. As an MLOps Engineer, you will build the backbone of the machine learning operations, from scalable training pipelines to reliable deployment systems, ensuring that the NLP models (including LLMs) can be trained on large datasets and served to end users efficiently.
This role sits at the intersection of software engineering, DevOps, and machine learning, and is crucial for accelerating R&D in the Ukrainian LLM project. You'll work closely with data scientists and software engineers to implement best-in-class infrastructure and workflows for the continuous delivery of AI innovations.
Requirements:
Experience & Background:
- 4+ years of experience in DevOps, MLOps, or ML Infrastructure roles.
- Strong foundation in software engineering and DevOps principles as they apply to machine learning.
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred.
Cloud & Infrastructure:
- Extensive experience with cloud platforms (AWS, GCP, or Azure) and designing cloud-native applications for ML.
- Comfortable using cloud services for compute (EC2, GCP Compute, Azure VMs), storage (S3, Cloud Storage), container registry, and serverless components where appropriate.
- Experience managing infrastructure with Infrastructure-as-Code tools like Terraform or CloudFormation.
Containerization & Orchestration:
- Proficiency in container technologies (Docker) and orchestration with Kubernetes.
- Ability to deploy, scale, and manage complex applications on Kubernetes clusters; experience with tools like Helm for Kubernetes package management.
- Knowledge of container security and networking basics in distributed systems.
CI/CD & Automation:
- Strong experience implementing CI/CD pipelines for ML projects.
- Familiar with tools like Jenkins, GitLab CI, or GitHub Actions for automating testing and deployment of ML code and models.
- Experience with specialized ML CI/CD tooling (e.g., TensorFlow Extended (TFX), MLflow for model deployment) and GitOps workflows (Argo CD) is a plus.
Programming & Scripting:
- Strong coding skills in Python, with experience in writing pipelines or automation scripts related to ML tasks.
- Familiarity with shell scripting and one or more general-purpose languages (Go, Java, or C++) for infrastructure tooling.
- Ability to debug and optimize code for performance (both in data pipelines and model inference code).
ML Pipeline Knowledge:
- Solid understanding of the machine learning lifecycle and tools.
- Experience building or maintaining ML pipelines, possibly using frameworks like Kubeflow, Airflow, or custom solutions.
- Knowledge of model serving frameworks (TensorFlow Serving, TorchServe, NVIDIA Triton, or custom Flask/FastAPI servers for ML).
Monitoring & Reliability:
- Experience setting up monitoring for applications and models (using Prometheus, Grafana, CloudWatch, or similar) and implementing alerting for anomalies.
- Understanding of model performance metrics and how to track them in production (e.g., accuracy on a validation stream, response latency).
- Familiarity with concepts of A/B testing or canary deployments for model updates in production.
Security & Compliance:
- Basic understanding of security best practices in ML deployments, including data encryption, access control, and dealing with sensitive data in compliance with regulations.
- Experience implementing authentication/authorization for model endpoints and ensuring infrastructure complies with organizational security policies.
Team Collaboration:
- Excellent collaboration skills to work with cross-functional teams.
- Experience interacting with data scientists to translate model requirements into scalable infrastructure.
- Strong documentation habits for outlining system designs, runbooks for operations, and lessons learned.
Nice to have:
LLM/AI Domain Experience:
- Previous experience deploying or fine-tuning large language models or other large-scale deep learning models in production.
- Knowledge of specialized optimizations for LLMs (such as model parallelism, quantization techniques like 8-bit or 4-bit quantization, and use of libraries like DeepSpeed or Hugging Face Accelerate for efficient training) will be highly regarded.
Distributed Computing:
- Experience with distributed computing frameworks such as Ray for scaling up model training across multiple nodes.
- Familiarity with big data processing (Spark, Hadoop) and streaming data (Kafka, Flink) to support feeding data into ML systems in real time.
Data Engineering Tools:
- Some experience with data pipelines and ETL.
- Knowledge of tools like Apache Airflow, Kafka, or dbt, and how they integrate into ML pipelines.
- Understanding of data warehousing concepts (Snowflake, BigQuery) and how processed data is used for model training.
Versioning & Experiment Tracking:
- Experience with ML experiment tracking and model registry tools (e.g., MLflow, Weights & Biases, DVC).
- Ensuring that every model version and experiment is logged and reproducible for auditing and improvement cycles.
Vector Databases & Retrieval:
- Familiarity with vector databases (Pinecone, Weaviate, FAISS) and retrieval systems used in conjunction with LLMs for augmented generation is a plus.
High-Performance Computing:
- Exposure to HPC environments or on-prem GPU clusters for training large models.
- Understanding of how to maximize GPU utilization, manage job scheduling (with tools like Slurm or Kubernetes operators for ML), and profile model performance to remove bottlenecks.
Continuous Learning:
- Up-to-date with the latest developments in MLOps and LLMOps (Large Model Ops).
- Active interest in new tools or frameworks in the MLOps ecosystem (e.g., model optimization libraries, new orchestration tools) and a drive to evaluate and introduce them to improve the processes.
Responsibilities:
- Design and implement modern, scalable ML infrastructure (cloud-native or on-premises) to support both experimentation and production deployment of NLP/LLM models. This includes setting up systems for distributed model training (leveraging GPUs or TPUs across multiple nodes) and high-throughput model serving (APIs, microservices).
- Develop end-to-end pipelines for model training, validation, and deployment.
- Automate the ML workflow from data ingestion and feature processing to model training and evaluation, using technologies like Docker and CI/CD pipelines to ensure reproducibility and reliability.
- Collaborate with Data Scientists and ML Engineers to design MLOps solutions that meet model performance and latency requirements.
- Architect deployment patterns (batch, real-time, streaming inference) appropriate for various use-cases (e.g., a real-time chatbot vs. offline analysis).
- Implement and uphold best practices in MLOps, including automated testing of ML code, continuous integration/continuous deployment for model updates, and rigorous version control for code, data, and model artifacts.
- Ensure every model and dataset is properly versioned and reproducible.
- Set up monitoring and alerting for deployed models and data pipelines.
- Use tools to track model performance (latency, throughput) and accuracy drift in production.
- Implement logging and observability frameworks to quickly detect anomalies or degradations in model outputs.
- Manage and optimize our Kubernetes-based deployment environments. Containerize ML services and use orchestration (Kubernetes, Docker Swarm, or similar) to scale model serving infrastructure.
- Handle cluster provisioning, health, and upgrades, possibly using Helm charts for managing LLM services.
- Maintain infrastructure-as-code (e.g., Terraform, Ansible) for provisioning cloud resources and ML infrastructure, enabling reproducible and auditable changes to the environment.
- Ensure the infrastructure is scalable, cost-effective, and secure.
- Perform code reviews and guide other engineers (both MLOps and ML developers) on building efficient and maintainable pipelines.
- Troubleshoot issues across the ML lifecycle, from data processing bottlenecks to model deployment failures, and continuously improve system robustness.
The company offers:
- Competitive salary.
- Equity options in a fast-growing AI company.
- Remote-friendly work culture.
- Opportunity to shape a product at the intersection of AI and human productivity.
- Work with a passionate, senior team building cutting-edge tech for real-world business use.
Senior Python (PyQt) Developer
Hybrid Remote · Ukraine (Kyiv, Lviv) · 5 years of experience · B2 - Upper Intermediate
We are looking for a skilled Senior Python (PyQt) Developer to join a healthtech startup team building next-generation AI-powered solutions for healthcare. The product helps clinicians work smarter with real-time data visualization, medical imaging, and AI-driven decision support, all designed to improve patient outcomes.
Requirements:
- 5+ years of professional experience with Python
- 3+ years of experience developing production-grade applications with PyQt or PySide
- Strong understanding of desktop UI/UX patterns (MVVM/MVC, multithreading, responsive UI)
- Experience packaging and deploying desktop apps (PyInstaller, MSIX, AppImage, etc.)
- Knowledge of security best practices (TLS, encryption at rest, RBAC)
- Intermediate English level or higher
Will be a plus:
- Background in AI/ML integration for imaging or clinical workflows
- Experience with healthcare data standards (DICOM, HL7, FHIR) or libraries (pydicom, VTK, SimpleITK)
- Familiarity with HIPAA/GDPR compliance and medical device software standards (IEC 62304)
- Hands-on knowledge of C++ for performance-critical components
- Previous experience in healthtech or other regulated industries
Responsibilities:
- Design and develop desktop applications in Python + PyQt (PySide6)
- Build UI components for clinical workflows (data entry, dashboards, medical image viewers)
- Integrate with healthcare data standards and APIs (DICOM, FHIR, HL7, DICOMweb)
- Collaborate with AI engineers to embed decision support and data visualization tools
- Ensure product compliance with security and privacy requirements (HIPAA, GDPR)
- Optimize application performance for large imaging datasets
- Contribute to coding standards, CI/CD, and testing for production-grade releases
- Work closely with product, backend, and data teams in a cross-functional setup
Product
Healthcare professionals often struggle with fragmented systems, slow workflows, and limited visibility into patient data. Our AI-powered platform provides real-time clinical data visualization, medical imaging, and automated decision support to improve efficiency and accuracy in care delivery.
The startup is backed by leading healthtech investors and driven by a team of experienced engineers, clinicians, and AI specialists passionate about transforming the future of healthcare.
Work Schedule:
- Full-time, flexible hours
- Work from anywhere (fully remote or in our offices in Ukraine/Poland)
Interview stages:
- Call with the Recruiter
- Interview with Tech Lead + Recruiter
- Client interview