AI/SLAM/Autonomy Engineer

R&D center WINSTARS.AI

This project aims to enable GPS-independent navigation, last-mile autonomy, and coordinated multi-agent behavior in complex environments through onboard AI.

 

Responsibilities:
• Design and integrate real-time SLAM pipelines (visual-inertial preferred) for autonomous navigation.
• Develop object tracking algorithms using OpenCV.
• Train and deploy lightweight ML models for onboard inference in real-world conditions.
• Integrate AI/ML modules for target detection and behavioral decision-making.
• Support GPS-agnostic navigation via sensor fusion and terrain-aware motion planning.
• Collaborate with embedded, full-stack, and hardware teams to align the autonomy stack with system-level constraints.
• Optimize performance on ARM-based edge compute platforms (Raspberry Pi, Jetson Nano, etc.).

 

Requirements:
• 3+ years of experience in autonomy, robotics, or computer vision systems.
• Proficient in OpenCV, visual SLAM, and real-time localization.
• Strong skills in C++ and Python; experience with ROS/ROS2.
• Demonstrated experience with ML model training and deployment (TensorFlow, PyTorch, etc.).
• Ability to run perception and AI systems on embedded Linux hardware.
• Understanding of flight dynamics and real-time motion planning.
• Ability to debug and tune systems in field environments.
 

Preferred Qualifications:
• Experience with ArduPilot, PX4, or similar UAV control frameworks.
• Familiarity with object detection models (e.g., YOLO, MobileNet) and model quantization.
• Experience with multi-agent autonomy, swarm logic, or distributed path planning.
• Background in defense, tactical robotics, or mission-critical systems.
• English proficiency of B2 or higher.
• Agile development experience (Jira, Git).

Required languages:

English: B2 (Upper Intermediate)

The job ad is no longer active
