Build and deploy robotics software on edge devices, developing perception, localization, and tracking systems that run on our tractor-mounted camera platforms in real-world farm environments.
Turn raw sensor data into structured insights, working with large-scale image streams (terabytes per day) to detect, track, and analyze fruit, trees, and field conditions.
Design and implement localization systems, fusing data from GNSS, stereo depth, and other sensors to precisely map where the system is and what it is observing.
Optimize performance on embedded systems, writing efficient Python and C++ code that runs reliably on NVIDIA Jetson-class hardware under real-world constraints.
Work across the full edge robotics stack, contributing to everything from perception and inference to sensor processing, networking, and on-device infrastructure.
Own production systems end-to-end, shipping code quickly and iterating in a fast-paced environment where your work can be deployed to customers in days, not months.
Hands-on experience developing robotics software from the ground up.
2+ years of real-world industry experience in robotics, perception, or localization.
Expertise in one or more of the following areas: computer vision, stereo depth perception, pose estimation, object detection, multi-object tracking, image processing, robotics software.
Enthusiasm for taking on multiple roles and responsibilities as our company grows.
You like fast-paced environments where you have high ownership over your work.
Nice to haves:
Experience deploying and optimizing robotics algorithms for embedded compute platforms such as NVIDIA Jetson.
Familiarity with fusing data from GNSS, IMU, depth, and other sensors.