Training Wheels
How Kristi Nguyen ’21 used thousands of pictures (and the right algorithm) to help robots see.
Kristi Nguyen ’21 knows the terrain and path of the dirt track surrounding Santa Clara’s softball field well. So well that only the self-navigating robots she trained could possibly know it better—and they need the help of a machine-learning algorithm to do it.
Nguyen spent last summer teaching these small vehicles how to see for themselves, in a manner of speaking. Walking the track alongside them, Nguyen systematically snapped pictures of their surroundings from thousands of angles and positions using each robot's built-in camera, each photo providing visual and positional data. She could then feed that data into an algorithm on her computer to help build a machine-learning model for navigating the robots.
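The article describes each photo carrying both visual and positional information. A minimal sketch of what one such labeled training example might look like is below; the field names and label scheme are illustrative assumptions, since the actual data format isn't described.

```python
from dataclasses import dataclass

# Hypothetical record for one training example: a photo from the robot's
# built-in camera, tagged with where on the track it was taken and how the
# camera was oriented relative to the trail. All field names are assumed.
@dataclass
class TrackSample:
    image_path: str    # photo captured while walking the track
    position_m: float  # assumed label: distance along the track section, meters
    heading: str       # assumed label: camera orientation, e.g. "left", "center", "right"

# Example record for an imagined photo near the Alameda stretch of the track.
sample = TrackSample("track/alameda_0421.jpg", position_m=12.2, heading="center")
```

A collection of thousands of such records, gathered under varied lighting and backgrounds, is the kind of dataset a navigation model would be trained on.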
“You probably don’t really notice it if you’re just walking along the track normally, but from a machine’s point of view, the terrain drastically changes,” Nguyen says. “For example, there’s a part close to Alameda that has a fence and bushes in the background. Then there’s a part where it’s the fence, bushes, and then the parking lot. To a human, that’s very easy to figure out where the track is. But to a machine, they freak out when they see anything different.”
Nguyen’s work was supported by the School of Engineering’s Kuehler Undergraduate Research Grant. The project was designed as a first step toward using deep-learning techniques in future Robotic Systems Lab projects, such as robotic sidewalk navigation or agricultural field navigation.
To start, Nguyen took an existing algorithm created by Silicon Valley tech company NVIDIA that helped robots navigate rural paths and trails where GPS was unavailable. Then she used a tactic called transfer learning to build a new model to use on Santa Clara’s campus.
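Transfer learning, as described above, typically means keeping a network pretrained on one task (here, NVIDIA's trail navigation) and retraining only a small new portion on local data. The sketch below illustrates that recipe in miniature, under stated assumptions: the "pretrained" feature extractor is a stand-in with fixed random weights, not NVIDIA's actual model, and the task labels are synthetic.

```python
import math
import random

random.seed(0)
DIM = 8  # toy feature dimension; a real vision model would be far larger

# Frozen "pretrained" layer: weights are set once and never updated,
# mimicking reuse of a network trained elsewhere.
frozen_w = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

def extract_features(x):
    """Fixed linear map + tanh nonlinearity, standing in for a pretrained net."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in frozen_w]

# New trainable head for the transferred task: a single linear unit
# predicting a (synthetic) offset-from-trail score.
head_w = [0.0] * DIM
head_b = 0.0

def predict(x):
    f = extract_features(x)
    return sum(w * fi for w, fi in zip(head_w, f)) + head_b

def train_head(samples, lr=0.05, epochs=200):
    """Stochastic gradient descent on squared error; only the head updates."""
    global head_b
    for _ in range(epochs):
        for x, y in samples:
            f = extract_features(x)
            err = predict(x) - y
            for i in range(DIM):
                head_w[i] -= lr * err * f[i]
            head_b -= lr * err

# Toy "campus track" data: random inputs with synthetic labels.
data = [([random.uniform(-1, 1) for _ in range(DIM)], random.uniform(-1, 1))
        for _ in range(20)]

mse_before = sum((predict(x) - y) ** 2 for x, y in data) / len(data)
train_head(data)
mse_after = sum((predict(x) - y) ** 2 for x, y in data) / len(data)
```

Because only the small head is trained, far less new data and compute are needed than training a network from scratch, which is the appeal of the tactic.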
The process was tedious, time-consuming, and involved a lot of trying and failing. Nguyen would choose a section of the track, 40 feet for example, and set up a method for taking pictures that would limit variables, taking into account lighting conditions and other factors. Later, she’d try out her model on the track to see how well the robots could navigate the terrain. Then she’d do the process all over again, taking new pictures, adjusting her model, and testing it on the track until she reached a low margin of error.
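The collect-retrain-retest cycle described above can be sketched as a simple loop. Everything here is schematic: the helper functions, the error threshold, and the round limit are placeholders standing in for Nguyen's actual procedure, not her code.

```python
def run_training_loop(collect, train, evaluate, target_error=0.05, max_rounds=10):
    """Repeat: gather a fresh photo set, retrain the model, measure how far
    the robot strays on the track; stop once error falls below the target.
    Returns the last model and the number of rounds used."""
    model = None
    for round_no in range(1, max_rounds + 1):
        dataset = collect()      # photograph a track section under controlled conditions
        model = train(dataset)   # fit or fine-tune the navigation model
        error = evaluate(model)  # test run on the track: measured deviation
        if error <= target_error:
            return model, round_no
    return model, max_rounds

# Usage with stub functions, pretending each round of new data helps:
errors = iter([0.40, 0.20, 0.04])
model, rounds = run_training_loop(
    collect=lambda: [],            # placeholder photo set
    train=lambda dataset: "model", # placeholder trained model
    evaluate=lambda m: next(errors),
)
```

With the stub error sequence above, the loop stops on the third round, once the measured error drops under the threshold.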
“By the end of the summer I created a model that basically used fewer resources than the original and was more efficient,” Nguyen says.
Opportunities like the Kuehler grant are among the reasons Nguyen came to Santa Clara. She took a class in machine learning last year, and Professor Christopher Kitts asked her if she was interested in a project he was working on. She said yes and got started.
Nguyen liked the open-ended nature of the project. And the independence. She had as much support and guidance as she needed from Kitts, but also the freedom to make mistakes and learn from them. That helped her more fully understand the project and feel a sense of ownership over its direction.
“There weren’t strict guidelines,” Nguyen says. “The scope of my project just got more and more narrow as we learned more.”
The result was a lot of 40-hour weeks and some trying moments before she figured it out.
“It took me 150 attempts just to train, and I took 50 data sets during the course of the summer,” Nguyen says. “So it taught me perseverance because the thing with the lab is you manage yourself. I knew if I wanted to do something, I’d be the one to do it.”