
Robotic Self-Checkout POS

Have you ever found yourself stuck in a long line at a grocery store, waiting for your turn behind others with overflowing carts? I have – and the self-checkout lines were also fully occupied and moving very slowly. I noticed that the store didn’t have enough attendants to keep more lanes open, so it took more than 15 minutes in line before I was able to buy the couple of items in my hand. This gave me the idea of a line-busting robot – a robot equipped with a mobile POS that could reduce the time people wait in line by letting those with very few items pay and leave faster, thus increasing the throughput of the store.

Several years back I had purchased an old Evolution Robotics ER-1 chassis from my lab-mate Dr. Jonathan Kelly (currently an Asst. Prof. at the Univ. of Toronto). I had also bought a Microsoft Kinect around the time it first launched, with the intention of using it to get the ER-1 robot to do something useful. Unfortunately my thesis work with underwater vehicles did not involve much in terms of perception (the water on the west coast is too deep to make vision a useful sensing modality), so this remained on the back-burner until now. Clover had a 2-day hackathon with the goal of hacking together something that was at least 10X better than it had been before – it seemed like the golden opportunity to get started on the idea right away.

So on the morning of our 10X+ hackathon I strapped my ER-1 chassis with the Kinect into the back seat of my car.

Robot hitching a ride to the hackathon in the back seat

Realizing that the robot needed some work on both the mechanical and electrical sides, I had asked Sam Qiu (from our ME team) and Alvin Dominguez (from our EE team) to team up with me on this hack. When the robot arrived I quickly consulted with Sam and Alvin about what needed to be done: the robot chassis (especially the vertical bar on which the Kinect was mounted) needed to be stabilized if I wanted any hope of capturing reasonably useful data from the Kinect without it wobbling too much when the robot started moving. Alvin was going to wire up the Kinect so that it could be powered off either a wall plug or the 12 V battery I had already mounted on the robot.

I had a lot of work to do too, so I decided to start by finding a laptop to act as the brains of the robot, on which I could install the Robot Operating System (ROS). I found a cheap Lenovo Thinkpad (which gave me trouble later because it was underpowered), but at the time I was most interested in getting something that could run Linux. Half an hour later I was installing the latest Jade flavor of ROS.

Anyway, long story short – after learning the basics of ROS in the morning, I was able to write the driver for the ER-1’s motor controller by basing it loosely on a driver so old it had been deprecated in Player, of all things. It needed quite a lot of modification, but it really helped me put together the motor driver node that would receive commands from my controller and drive the robot around.
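For the curious, here is a minimal sketch of what such a node can look like in rospy. The serial port, baud rate, wheel base, and ASCII command format below are placeholder assumptions (the real ER-1 protocol differs), but the structure is the same: subscribe to cmd_vel, convert the twist into wheel speeds, and write commands out to the serial port.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS motor-driver node for a differential-drive base
# like the ER-1. The port, baud rate, wheel base, and ASCII command format
# are hypothetical placeholders; the real ER-1 protocol differs.
import serial
import rospy
from geometry_msgs.msg import Twist

class ER1Driver(object):
    def __init__(self):
        port = rospy.get_param('~port', '/dev/ttyUSB0')  # assumed default
        self.ser = serial.Serial(port, 9600, timeout=0.1)
        self.wheel_base = 0.38  # meters, assumed
        rospy.Subscriber('cmd_vel', Twist, self.on_cmd_vel)

    def on_cmd_vel(self, msg):
        # Differential-drive kinematics: split the commanded twist into
        # left/right wheel speeds.
        v, w = msg.linear.x, msg.angular.z
        left = v - w * self.wheel_base / 2.0
        right = v + w * self.wheel_base / 2.0
        # Illustrative command format only, not the real ER-1 protocol.
        self.ser.write('V %.3f %.3f\r' % (left, right))

if __name__ == '__main__':
    rospy.init_node('er1_driver')
    ER1Driver()
    rospy.spin()
```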

Probably the hardest part of the entire hack was the near-futile attempt to get the Kinect to work with ROS. It took almost until 9 pm for me to get the Kinect working with OpenNI and PCL. I had to compile both from scratch and switch to the older long-term-support version, ROS Indigo. By the end of the first day I was able to receive point clouds via PCL, and was also able to control the robot using my motor controller. I did not have much time to hook the perception up to actual control of the robot.
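Once the OpenNI driver was up, verifying that point clouds were actually flowing was the easy part. A minimal rospy listener, assuming the conventional /camera/depth/points topic, looks something like this:

```python
#!/usr/bin/env python
# Minimal sanity check that Kinect point clouds are arriving in ROS.
# Assumes the OpenNI driver is publishing on /camera/depth/points.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def on_cloud(msg):
    # Pull the first valid point just to confirm data is flowing.
    points = pc2.read_points(msg, field_names=('x', 'y', 'z'), skip_nans=True)
    first = next(points, None)
    if first is not None:
        rospy.loginfo('cloud %dx%d, first point %s',
                      msg.width, msg.height, first)

rospy.init_node('cloud_listener')
rospy.Subscriber('/camera/depth/points', PointCloud2, on_cloud)
rospy.spin()
```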

Bernard plays a customer and uses the self-checkout feature to buy a power-strip after the robot offers to help him buy it without having to go to the counter.

Demo time at the Hackathon – I was talking about the perception subsystem here.

As I drove in on the second day, I realized that there was no hope of implementing a full SLAM solution with planning (my forte) on the robot, since I would need good localization to make all of that happen. Instead I opted for a simpler solution: place the robot where it is likely to see a line forming, build a perception system that can detect people, and write a controller that navigates the robot to the line so it can offer its services. I would also need an Android app that would let a customer scan their items and pay at the robot.

There was no way I could do both the perception and control by myself, so I asked my colleague Jacob Abrams (a brilliant software engineer) if he could help me with the Android app. Jacob was busy, so he only got started on the app about 1.5 hours before the deadline. Luckily for us, he was very familiar with another customer-facing app, so he was able to churn out code at break-neck speed and deliver a self-checkout app.

Meanwhile I got cranking on perception and quickly hacked up a people detector that could count the number of people the robot was looking at. Once the robot found people, it would compute the centroid of the cluster and move toward it, stopping 3 feet away to avoid bumping into anyone (a minimal sketch of this behavior follows below). At that point the robot would offer to let the nearest person pay using Jacob’s self-checkout app. I was able to put my portion together and integrate perception and control, so the robot could move toward groups of people (presumably waiting in line). Everyone passing by seemed interested in the robot and what it was doing, since it seemed to dart after people every time it saw enough of them.
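Here is a minimal sketch of that reactive behavior, under some loud assumptions: it presumes a separate people-detector node already publishes the cluster centroid as a PointStamped in the robot’s frame on a hypothetical people_centroid topic, and the gains and speed limit are made up. The logic is just proportional steering toward the centroid, stopping at roughly 3 feet (0.9 m):

```python
#!/usr/bin/env python
# Sketch of the reactive "approach the line" behavior: steer toward the
# centroid of the detected group and stop ~3 feet (0.9 m) away.
# The people_centroid topic and all gains/limits are assumptions.
import math
import rospy
from geometry_msgs.msg import Twist, PointStamped

STOP_DISTANCE = 0.9  # ~3 feet, from the write-up
MAX_LINEAR = 0.3     # m/s, assumed safe speed
ANGULAR_GAIN = 1.0   # proportional steering gain, assumed

class LineApproach(object):
    def __init__(self):
        self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('people_centroid', PointStamped, self.on_centroid)

    def on_centroid(self, msg):
        # Centroid in the robot frame: x forward, y left.
        dist = math.hypot(msg.point.x, msg.point.y)
        cmd = Twist()
        if dist > STOP_DISTANCE:
            cmd.linear.x = min(MAX_LINEAR, 0.5 * (dist - STOP_DISTANCE))
            cmd.angular.z = ANGULAR_GAIN * math.atan2(msg.point.y, msg.point.x)
        # else: leave cmd at zero to stop and offer the self-checkout app.
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('line_approach')
    LineApproach()
    rospy.spin()
```

When the centroid falls inside the stop radius, the node publishes a zero twist; that pause is where the robot would offer the self-checkout app to the nearest person.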

Here are a few pictures of the robot and the presentation at the Hackathon. I’m planning to take this project to the next step and build a version that isn’t simply reactive, but also builds maps, localizes accurately w.r.t. the map, has very robust obstacle avoidance and great perception, and is able to plan both paths and deliberative actions well. There is also going to be an important human-robot interaction aspect, and we will need to get that right too, but I’m glad I’ve got the basics out of the way relatively quickly. Now we can get to the meat of the problems that I have always loved tackling as a roboticist.

The robot as it drove up to people in the line.

Hard problems are so much more fun than easy things, aren’t they?

Clover is a great place to work – our founders like thinking big and encourage the team to think outside the box. We also get to work on cool side-projects, and many of the people I work with are like magicians when it comes to software and systems engineering. If you love building new products that make a big difference to merchants and want to work on one of the fastest-moving teams of engineers, consider joining our team at Clover.
