Sometimes it's okay to not know where you are, as long as you keep moving. True if you're a robot, true if you're doing a PhD.

Taking mobile robots out of the lab and into the world is hard, but it's something we need to start doing more often. It's difficult to fully understand how people are going to interact with them otherwise.

Part of the reason it's hard to take robots out of controlled environments is that we implement systems with the assumption that the robot knows where it is at any time. Sounds reasonable, and clearly it's an important capability if you want that robot to be able to drive to some destination, but this means the robot has to have powerful sensors and a map to localize itself against. There are ways to make a robot navigate that don't require these things. Some methods build a map and localize at the same time. Others estimate the motion of the robot using a regular camera, which most robots would already be equipped with. Unfortunately, these methods can be computationally expensive and usually aren't very robust.

Even well-equipped robots struggle to navigate autonomously. Here's a BWIBot stuck on the wall, circa 2018. After much investigation, we determined that the bench's chrome finish was creating confusing lidar readings that would sometimes cause the robot to trap itself. This robot costs 20 times as much as Kuri. The lidar alone costs twice as much.

We ran into this when we wanted to put Kuri in the halls of our building for a user study. Kuri's lidar sensor can't see far enough to localize against a map, and its computer is too weak for visual navigation approaches.

We realized that for the kinds of things we wanted to study, we didn't need Kuri to know exactly where it was most of the time. It would be nice if the robot could drive itself back to the charger, but this would be infrequent enough that it was fine if a person helped with that. There are a bunch of interesting studies which might seem technically infeasible until you think, "maybe it's okay if our robot just wanders around."


If you've seen a robot vacuum cleaner doing its thing, you're already familiar with what wandering looks like. The robot drives until it can't, maybe because it senses the wall or because it bumps into an obstacle, then it turns and keeps going. Run it long enough, and it's likely to cover the whole floor. Fancier models can actually make a map and clean systematically, but these are (you guessed it) more expensive.

So, think of a wandering behavior as made up of two things:

  1. the events that trigger the selection of a new direction, and
  2. the method that's used to actually select a new direction.
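Those two pieces can be sketched as a tiny loop. The `robot` interface here is hypothetical, standing in for whatever your platform provides; the simplest selection method just turns somewhere at random.

```python
import random

def wander_step(robot, pick_direction):
    """One iteration of a generic wandering loop.

    `robot` is a hypothetical interface with `blocked()`, `turn_to(heading)`,
    and `drive_forward()` methods -- stand-ins for a real robot's API.
    """
    if robot.blocked():                   # (1) an event triggers re-selection
        robot.turn_to(pick_direction())   # (2) a method picks the new direction
    robot.drive_forward()

def random_direction():
    """The simplest selection method: pick a heading uniformly at random."""
    return random.uniform(0.0, 360.0)
```

Swapping in a smarter `pick_direction` is what separates a robot vacuum's bump-and-turn from the informed selection described below.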

Making Kuri Wander

The important components of our system. Note that many of the parts of the wandering module, like the costmap and local controller, can be pulled from common autonomous navigation implementations. Our implementation leaned on the ROS nav stack wherever possible.

We found that baking in a preference for moving in the same direction for as long as possible was effective at making Kuri roam the long, wide corridors of our building. The robot tries to hold its current heading using a local controller until it can't, then selects a new direction.

Our informed direction-selection approach uses a local costmap, built from Kuri's lidar and odometry, to pick the direction with the lowest obstacle cost, breaking ties in favor of directions closer to the previously selected one. This is more complicated than, say, always turning 90 degrees, but the extra system requirements come relatively cheap: passable odometry and sensors that can pick up nearby obstacles.
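A minimal version of that selection rule might look like the sketch below, where `obstacle_cost(heading)` is a placeholder for a real costmap query along a short ray in that direction:

```python
def pick_direction(prev_heading, obstacle_cost, n_candidates=16):
    """Score evenly spaced candidate headings (degrees) and return the one
    with the lowest obstacle cost, breaking ties in favor of headings
    closer to `prev_heading`.

    `obstacle_cost` is a stand-in for a costmap lookup; the candidate count
    and tie-break metric here are illustrative, not the paper's exact values.
    """
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    candidates = [i * 360.0 / n_candidates for i in range(n_candidates)]
    # Sorting by the (cost, distance-to-previous) tuple makes the tie-break
    # fall out of lexicographic comparison: cost dominates, closeness to the
    # old heading only matters when costs are equal.
    return min(candidates,
               key=lambda h: (obstacle_cost(h), angular_distance(h, prev_heading)))
```

With a zero-cost map this degenerates to "keep going the way you were going," which is exactly the same-direction preference described above.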

Just like more intelligent autonomous navigation approaches, this sometimes goes wrong. Kuri's lidar has a hard time seeing dark materials, so it would sometimes wedge itself against them. We use the same kinds of recovery behaviors that are common in other systems, detecting when the robot hasn't moved (or hasn't moved enough) for a certain duration, then rotating in place or moving backward. We found it important to tune our recovery behaviors to be able to unstick the robot from the hazards in our building, and it's likely you'd want to do the same if you were deploying the system in a new place.
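The detection half of a recovery behavior can be sketched as follows; the pose and timing interfaces are illustrative, not Kuri's actual API, and the thresholds are the kind of thing you'd tune per building:

```python
import time

class StuckDetector:
    """Flag the robot as stuck if it hasn't displaced more than `min_travel`
    meters within a `window`-second period, based on (x, y) odometry poses.
    """
    def __init__(self, min_travel=0.1, window=5.0):
        self.min_travel = min_travel
        self.window = window
        self.anchor_pose = None
        self.anchor_time = None

    def update(self, pose, now=None):
        """Feed the latest odometry pose; returns True when stuck."""
        now = time.monotonic() if now is None else now
        if self.anchor_pose is None:
            self.anchor_pose, self.anchor_time = pose, now
            return False
        dx = pose[0] - self.anchor_pose[0]
        dy = pose[1] - self.anchor_pose[1]
        if (dx * dx + dy * dy) ** 0.5 >= self.min_travel:
            # Enough progress: restart the window from here.
            self.anchor_pose, self.anchor_time = pose, now
            return False
        return now - self.anchor_time > self.window
```

When `update` returns True, the system would run a recovery behavior such as rotating in place or backing up, then resume wandering.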


Using Human Help

Eventually, the robot needs to get to a charger and wandering isn't going to do that. Fortunately, it's easy for a human to help. We built a chatbot which the robot would use to ping a remote helper when its battery was low. Kuri is small and light, so we opted to have the helper carry the robot back to its charger, but you can imagine giving a remote helper a teleoperation interface and letting them drive the robot back.
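The help-request logic itself is simple. Here's a hedged sketch, where `send_message` stands in for whatever chat integration you use (ours was a chatbot; a webhook works just as well) and the threshold is arbitrary:

```python
def maybe_request_help(battery_pct, notified, send_message, threshold=20):
    """Ping a remote helper once when the battery first drops below
    `threshold` percent. Returns the new `notified` state so repeated low
    readings don't spam the helper. All names here are illustrative.
    """
    if battery_pct < threshold and not notified:
        send_message(f"Battery at {battery_pct}% -- please carry me to my charger!")
        return True
    if battery_pct >= threshold:
        return False  # back above threshold (charging): re-arm the notification
    return notified
```

Called from the robot's main loop on every battery reading, this fires exactly once per low-battery episode.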


We ran this system for four days in our academic building. It was able to navigate all 1,200 feet of hallways on the floor, and ran for 32 hours total without once running out of battery.

Each of the 12 times it needed to charge, the system notified the designated helper and they placed the robot on its charger. The robot’s recovery behaviors kept it from getting stuck most of the time, but the helper needed to manually rescue it 4 times when it got wedged near the difficult-to-perceive banister. Overall, the system required around half an hour of the helper’s time over the course of its 32-hour deployment.

Our building features glass walls, difficult-to-perceive black chainlink banisters, long hallways, and widely varying lighting conditions. Kuri took photos periodically during its deployment, so we manually annotated the positions of a random sample of these photos. The process by which Kuri decided to take photos wasn't random, so this can't tell us whether the robot spent more time in one place than another, but it does show that the robot was able to visit the entire floor.

To Wander, or Not To Wander

Wandering works only in some limited scenarios, but these happen to include many things that are interesting to human-robot interaction researchers, including

  • perceptions of robots,
  • bystander interactions, and
  • interaction with remote users and operators.

You can't build a mail delivery service robot using wandering, but you can start to study interesting problems these robots will face. And you'll even be able to do it with expressive and engaging robots like Kuri, which wouldn't be up for the task otherwise.

If you happen to be interested in using Kuri in particular, there's no reason not to give our code a shot, though we recommend you take inventory of the hazards in your environment. If you have cliffs, you'll need to connect Kuri's cliff sensors to the costmap. Anywhere you experience the robot getting trapped, you can sub in a new recovery behavior (a few lines of C++) to unstick the base.

The video we made for HRI 2022 that describes this work.

BibTeX Entry

@inproceedings{nanavati2022wander,
  author = {Amal Nanavati and Nick Walker and Lee Taber and Christoforos Mavrogiannis and Leila Takayama and Maya Cakmak and Siddhartha Srinivasa},
  cofirst = {2},
  title = {Not All Who Wander Are Lost: A Localization-Free System for In-the-Wild Mobile Robot Deployments},
  booktitle = {Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction},
  month = mar,
  year = {2022},
  publisher = {IEEE Press},
  doi = {10.5555/3523760.3523817},
  pages = {422--431},
  numpages = {10},
  keywords = {wizard of oz, wandering, in-the-wild deployment, robot navigation, robots asking for help},
  location = {Sapporo, Hokkaido, Japan},
  series = {HRI '22},
  wwwtype = {conference},
  wwwpdf = {},
  wwwcode = {},
  wwwurl = {}
}