Gregory Koch

USC Interaction Lab, REU 2011


Friday, July 29, 2011 11:03 PM Posted by Gregory Koch

Lab Entry #25 - Learning Game Demo w/ Video



I just got back from the lab, having finished coding the Learning Game demo and running it on the actual robot with Juan and a few of the other REU students (Aaron and Jeff). We shot a few rounds of the demo, using a sample script I wrote to demonstrate how Bandit can learn actions from verbs taught to it by the player, then reapply those actions in new combinations of the words it has built up in its knowledge base.

Unfortunately, the robot's left arm seemed to stop functioning after four or five demo runs, so I had to turn off the robot and stop filming for the night. The arm shouldn't be too hard to fix for any future videos, but I think I have enough video for the moment.

Thursday, July 28, 2011 10:44 PM Posted by Gregory Koch

Lab Entry #24 - Learning Actions from Voice Commands



Most of the work pertaining to coding the Learning Game is now finished; the FSM for the game seems to be in working order and all of the data structures for the knowledge base are in place. We decided that verbs will be represented by a string name and an array of floating point numbers representing two joint angles in one arm: elbow and shoulder, respectively. Meanwhile, nouns and adjectives have been combined into one data structure, NounAdjective. We were able to combine the definitions for both types of words because adjectives will contain the general definition for a modifier while nouns will contain the shell for the corresponding object; for example, the adjective "left" will contain the information for the left arm joint numbers, whereas the noun "arm" will have slots to which the adjective information can be applied. This binding happens once the prompting stage is finished, when it is time for the robot to ground a command with information from its knowledge base and execute that command.
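As a rough sketch, the two structures might look something like this in C++ (the field names below are placeholders I'm using for illustration, not the actual class definitions):

```cpp
#include <string>
#include <vector>

// Sketch of a learned verb: a name plus the two joint angles in one arm
// (elbow and shoulder) that realize the action.
struct Verb {
    std::string name;             // e.g. "raise"
    std::vector<double> angles;   // [elbow, shoulder]
};

// Sketch of the combined noun/adjective entry. An adjective such as "left"
// carries the concrete information (here, joint numbers) for a modifier,
// while a noun such as "arm" provides the empty slots that the adjective's
// information gets applied to at grounding time.
struct NounAdjective {
    std::string word;              // e.g. "left" or "arm"
    bool isAdjective;              // true for modifiers, false for nouns
    std::vector<int> jointNumbers; // filled in for adjectives, left as slots for nouns
};
```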

We also decided that we wanted to make the demo a little more interactive than we had originally planned. Instead of just being able to add verbs by relating them to known verbs already in the knowledge base, we feel that it would be much better to add new verbs by reading in camera data on player joint angles, similar to the Imitation Game. This way, players can physically demonstrate new verbs to Bandit, allowing him to grow his knowledge base without relying on hardcoded base definitions. Juan assures me this won't be too hard to do, as long as we write a function to access the camera data from the BanditROS class and create a new verb out of that information.
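Roughly, the teaching path could then look like the sketch below, reusing the Verb struct from the sketch above. The ArmPose struct and the idea of reading it off the camera data are stand-ins for whatever accessor we end up adding to BanditROS; none of this is existing API.

```cpp
// Hypothetical sketch of turning a demonstrated pose into a new verb.
// ArmPose stands in for the camera-derived joint data we would read out
// of BanditROS; the real accessor and types may look quite different.
struct ArmPose {
    double elbow;
    double shoulder;
};

Verb learnVerbFromDemo(const std::string& name, const ArmPose& demonstrated) {
    Verb v;
    v.name = name;
    v.angles.push_back(demonstrated.elbow);    // joint angles demonstrated by the player,
    v.angles.push_back(demonstrated.shoulder); // captured the same way as in the Imitation Game
    return v;
}
```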

Wednesday, July 27, 2011 8:01 PM Posted by Gregory Koch

Lab Entry #23 - Learning Game / Planning and Implementation



With my final REU presentation approaching on Monday, August 1st, Juan and I have decided to begin work on our final demo earlier than we had originally planned. We want the demo to give a proof of concept for how well my microphone input and natural language processing systems work in real time to produce some semblance of intelligent behavior. In the spirit of the other games that have already been developed as modules for Bandit, we're going to call the demo the "Learning Game". In the Learning Game, Bandit will listen for input commands from the human, then use the NLP data from the microphone parse to determine the different parts of speech. Bandit will then compare the primary words used in the sentence against its own knowledge base and prompt for unknown words. The player can then teach Bandit new words by relating those new words to concepts already contained in its knowledge base.
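A rough sketch of that loop, with the microphone and NLP plumbing reduced to a simple word list and the knowledge base reduced to a string map (the helper names are placeholders, not the real interfaces):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Simplified sketch of one round of the Learning Game: for each primary word
// in the parsed command, prompt the player for unknown words and let them
// relate the new word to a concept already in the knowledge base.
void learningGameRound(std::map<std::string, std::string>& knowledgeBase,
                       const std::vector<std::string>& primaryWords) {
    for (const std::string& word : primaryWords) {
        if (knowledgeBase.count(word) == 0) {
            std::cout << "I don't know \"" << word << "\". What known word is it like?\n";
            std::string knownWord;
            std::getline(std::cin, knownWord);
            if (knowledgeBase.count(knownWord) > 0) {
                knowledgeBase[word] = knowledgeBase[knownWord];  // inherit the known definition
            }
        }
    }
    // Once every primary word is grounded, the command can be executed.
}
```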

As the session continues, Bandit's knowledge base will grow, giving it a series of relations between previously unknown words. Because noun, verb, and adjective definitions will be sufficiently general, acquired words can then be used in subsequent commands, even in a context different from the one in which the word was originally seen. If successful, this demo should show our ability to use microphone and NLP data in an interactive session with a player, even if the system that uses that data remains mostly uninformed, relying on base knowledge from the programmer to succeed.

Monday, July 25, 2011 7:20 PM Posted by Gregory Koch

Lab Entry #22 - Documentation / Bug Fixes and JPL



I'm currently in the process of updating and refactoring a lot of my code to make it more streamlined and ensure it works properly before we commit any of the code next week. One of the major steps is actually writing proper documentation for most of the packages that I've created, including compilation instructions, so that most of the code I've written for the microphone input and NLP processing can hopefully be reused for future projects or other robots. One of the trickiest parts is putting together a guide for the Stanford NLP module, because the way in which we create and send our custom data structure from the raw NLP data is somewhat elaborate; it will take a day or two to finish writing up the tedious details so that they are clear to others.

Otherwise, the only recent event was the REU "field trip" to the Jet Propulsion Laboratory in Pasadena. Our tour guide was a mechanical engineer working in the Mars rover division, so our tour was centered on the applications of robotics. We had the chance to see some of the laboratories where they worked on engineering the rovers, as well as the Mars Yard, where physical robot designs are tested in a location designed to roughly resemble the surface of Mars. Our guide explained how the new Mars rover to be sent later this year is much larger than previous designs, but had to be optimized to reduce its overall mass, since extra mass would increase the cost of sending it into space. At the end, we also got a chance to tour the small museum they had onsite, which included moon rocks, a video showing how the newest landing system for the next rover will work, and replicas of previous JPL projects.

Friday, July 22, 2011 3:13 PM Posted by Gregory Koch

Lab Entry #21 - Full Microphone to NLP System Working



Today, I finished implementing the ROS code that connects the various modules for Bandit by creating a centralized Microphone class that takes in incoming microphone data from the DragonReceiver server and forwards it to the NLP Parser server connected to the Stanford NLP parser. Classes that want to use microphone data or NLP data then need only have access to the Microphone class; hence, this information is now available from anywhere in Bandit. As a test of this system, I spoke into the microphone, printing out the formatted NLP data structure once it arrived back at the Microphone object, having gone through the entire pipeline. An abstract overview of this system is available below:
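In code, the Microphone class boils down to something like the sketch below; the DragonReceiver and NLP Parser interfaces are heavily simplified here, and the method names are my own rather than the actual ones.

```cpp
#include <string>

// Simplified sketch of the centralized Microphone class sitting between the
// DragonReceiver server and the NLP Parser server. Method names are
// placeholders, not the actual interfaces.
class Microphone {
public:
    // Called whenever the DragonReceiver server delivers newly recognized text.
    void onSpeech(const std::string& text) {
        latestText_ = text;
        latestParse_ = parseWithNLP(text);  // forward to the NLP Parser server
    }

    // Any class in Bandit that needs microphone or NLP data goes through here.
    const std::string& latestText() const { return latestText_; }
    const std::string& latestParse() const { return latestParse_; }

private:
    // Stub: the real version sends the text to the server wrapping the
    // Stanford NLP parser and returns the formatted result.
    std::string parseWithNLP(const std::string& text) {
        return "(parse of: " + text + ")";
    }

    std::string latestText_;
    std::string latestParse_;
};
```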

Tuesday, July 19, 2011 11:30 PM Posted by Gregory Koch

Lab Entry #20 - Final lab meeting / REU Presentation



All of the REU students gave their final presentations today at the last lab meeting of the summer before we leave the USC campus in early August. With our projects drawing to a close, we each summarized our work up to the present date, with some analysis of future work that we plan to complete by August. Other students showed results on projects as diverse as creating a sheep herding controller for the PR-2 and kinematic retargeting, while I did my presentation on my work with Bandit's speech system.

My talk was entitled "An Integrated Speech-to-Text Parser for a Humanoid Robot", covering my work to improve Bandit's speech controls, set up Dragon NaturallySpeaking speech recognition, capture NLP parsing data from the Stanford NLP Parser, and integrate the modules into ROS.

The slides accompanying the talk can be found here.

Monday, July 18, 2011 8:44 PM Posted by Gregory Koch

Lab Entry #19 - Stanford NLP Functionality Complete



Today, I was able to get the remainder of the Stanford NLP functionality online. I added ROS functionality to my demo program, so that all input text comes from an external ROS client and is forwarded all the way through the parsing process. My Java program then sends this input text to the SNLP parser, which returns raw NLP parsed data. I then take this data, format it into one compact data structure, and pass it to ROS to return as a response message to the initial request. While there are a few small things to finish up with the NLP files, such as getting the Java application to launch automatically, it works fairly well and will be able to use ROS to provide this NLP data for the latest microphone input to any module in Bandit that might need it.
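On the ROS side, the request/response is essentially a service call. The sketch below assumes a hypothetical bandit_nlp/Parse.srv with a string request and a string response; the package, service, and field names are illustrative, not the actual definitions.

```cpp
#include <ros/ros.h>
#include <bandit_nlp/Parse.h>  // hypothetical srv: string text -> string parse

// Sketch of the external client: send input text to the NLP node and print
// the formatted parse that comes back as the service response.
int main(int argc, char** argv) {
    ros::init(argc, argv, "nlp_demo_client");
    ros::NodeHandle nh;
    ros::ServiceClient client = nh.serviceClient<bandit_nlp::Parse>("parse_text");

    bandit_nlp::Parse srv;
    srv.request.text = "raise your left arm";
    if (client.call(srv)) {
        ROS_INFO("Parse: %s", srv.response.parse.c_str());
    } else {
        ROS_ERROR("Failed to call the parse_text service");
    }
    return 0;
}
```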

The next step will be to add ROS functionality to the Dragon speech-to-text components that I developed near the end of June, and work towards integrating the entire system together. That way, I will be able to run a demo that starts from live microphone input and feeds it all the way to the NLP parser, returning an object containing the original text with all of the NLP data.

Wednesday, July 13, 2011 11:22 PM Posted by Gregory Koch

Lab Entry #18 - SNLP Data Structures (Update)



With a few small adjustments, I was able to complete the parsing object that takes the raw SNLP parser data and converts it into an easy-to-use data structure. Of the three data structures that I was incorporating into the single larger structure, the recursive tree algorithm was by far the hardest to implement because the tree has to be built with the entries of the word vector as its leaves.
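The tree step ended up looking roughly like the sketch below: a recursive descent over a bracketed parse string that attaches each leaf to an index in the word vector. The input format here is a simplification of the real SNLP output, and in this version the word vector is filled in as the leaves are visited.

```cpp
#include <memory>
#include <string>
#include <vector>

// Simplified sketch of building the phrase tree on top of the word vector.
// Each leaf stores the index of its word in the vector rather than the word
// itself, so the tree and the vector stay linked.
struct TreeNode {
    std::string label;                                // e.g. "NP" or "DT"
    int leafIndex = -1;                               // index into the word vector; -1 for internal nodes
    std::vector<std::unique_ptr<TreeNode>> children;
};

// Parse "(LABEL child child ...)" or "(LABEL word)" starting at position pos,
// e.g. "(S (NP (DT the) (NN dog)) (VP (VBZ barks)))".
std::unique_ptr<TreeNode> parseTree(const std::string& s, size_t& pos,
                                    std::vector<std::string>& words) {
    auto node = std::make_unique<TreeNode>();
    ++pos;                                            // skip '('
    while (pos < s.size() && s[pos] != ' ') node->label += s[pos++];
    ++pos;                                            // skip the space after the label
    if (s[pos] == '(') {                              // internal node: recurse on children
        while (s[pos] == '(') {
            node->children.push_back(parseTree(s, pos, words));
            if (s[pos] == ' ') ++pos;
        }
    } else {                                          // leaf: read the word and link it to the vector
        std::string word;
        while (pos < s.size() && s[pos] != ')') word += s[pos++];
        words.push_back(word);
        node->leafIndex = static_cast<int>(words.size()) - 1;
    }
    ++pos;                                            // skip ')'
    return node;
}
```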

The next step is to move the input mechanism out of the NLPInterpreter class itself, the class that takes in SNLP output and builds the relevant data structure from that information. We want to wrap the C++ code that transmits microphone input to the Java NLP parser in a ROS node, with another ROS node serving as the client that interfaces between all of the NLP parser code and the rest of Bandit. When microphone input is received, this second ROS node can access the first one, which is responsible for working with the Java program to produce the necessary parsing data. The parsing data will then be returned automatically via the second ROS node, which can offer it to any module in Bandit that requests it.

Monday, July 11, 2011 8:45 PM Posted by Gregory Koch

Lab Entry #17 - SNLP Data Structures



Now that I have the Java application complete, providing my client with the raw NLP data returned from parsing an input string, I need to put this data into a usable format. Although I receive data for a set of labels, a hierarchy tree, and a dependency graph, Juan and I have decided that it's best to combine all three of these data structures into one compact one. So, my next task is to build a lightweight C++ application on the client side to parse through the NLP output and create a vector of words, each with its associated word labels. The program will then add the directed dependency graph information between nodes in the vector, with the tree being constructed using the nodes of the vector as its leaf nodes. It'll likely take me a few days to implement this, but the result should be one data structure contained in one object, providing all the information required later when developing intelligent algorithms.
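As a rough sketch of the shape I have in mind (the names below are placeholders, not the final definitions):

```cpp
#include <string>
#include <vector>

// Sketch of the combined structure: a vector of words carrying their labels,
// dependency edges stored as indices into that vector, and a phrase tree
// whose leaves also point back into the vector.
struct WordEntry {
    std::string word;
    std::string posTag;           // part-of-speech label from the parser
    std::vector<int> dependents;  // directed dependency edges, as indices into the word vector
};

struct PhraseNode {
    std::string label;            // phrase label, e.g. "NP"
    int leafIndex = -1;           // index of the word if this is a leaf, -1 otherwise
    std::vector<PhraseNode> children;
};

struct ParsedSentence {
    std::vector<WordEntry> words; // the word vector with labels and dependencies
    PhraseNode root;              // hierarchy tree built over the word vector
};
```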

Friday, July 8, 2011 10:01 PM Posted by Gregory Koch

Lab Entry #16 - Java Application / Updates



Writing the rest of the Java application to interface with the Stanford NLP Parser proved not to be difficult, as expected. By default, the application will return the raw data containing information for the part-of-speech tags, phrase hierarchy, and dependency graph, along with the original phrase. Arguments controlling which data is sent over allow the SNLP parser to be parameterized. The next step will be to set up an NLPInterpreter class to work with this data and turn it into a data structure that can be returned to Bandit, to be used by further algorithms for dialog control that have yet to be written by Juan.

I've also come to the realization that the architecture designs that I drew up earlier in the project are a bit simplistic. Because we want to incorporate ROS nodes to make the code more modular and minimize the changes to the existing flow of control for Bandit, some of my earlier diagrams won't work. For instance, we do not want the server receiving microphone messages from the Windows VM to also be responsible for directly forwarding those messages onto the NLP parser. It would be better to parameterize this and separate the server code from the interpretation of server messages, so that Juan can later have the option of either taking the raw text output from Dragon and doing his own parsing, or using a ROS node to get the Stanford NLP data instead. I will work on making these changes sometime next week, after I am finished working out the details of the SNLP parsing module.
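One way to express that separation is a small handler interface that the server hands each utterance to, sketched below with placeholder names:

```cpp
#include <string>

// Sketch of separating the microphone server from the interpretation of its
// messages: the server only calls onUtterance(), and the handler that gets
// plugged in decides what happens to the text. Names are placeholders.
class SpeechHandler {
public:
    virtual ~SpeechHandler() = default;
    virtual void onUtterance(const std::string& text) = 0;
};

class RawTextHandler : public SpeechHandler {
public:
    void onUtterance(const std::string& text) override {
        // Hand the raw Dragon output straight to custom parsing code.
    }
};

class NLPForwardingHandler : public SpeechHandler {
public:
    void onUtterance(const std::string& text) override {
        // Forward the text to the ROS node wrapping the Stanford NLP parser.
    }
};
```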

Wednesday, July 6, 2011 9:02 PM Posted by Gregory Koch

Lab Entry #15 - Lab Meeting 2

With the holiday weekend coming to a close, we had our second full lab meeting of the summer this afternoon. A large percentage of the meeting was spent discussing various grants that the lab intends to apply for, including the National Robotics Initiative. Afterwards, David Feil-Seifer, another graduate student working in the Interaction Lab, presented some of his recent work, including an experiment he ran to observe path planning with respect to a human companion. Typical path planning algorithms aim to find the most efficient route from point A to point B, but the algorithm must be adjusted if the robot also has to maintain the pace of a companion that is following it. For example, if the companion falls too far behind, the robot should stop and not proceed further, and perhaps even turn around and head back towards the companion until they catch up. It would be inappropriate in an interaction to proceed blindly without these considerations, so a good algorithm must weigh its input parameters carefully and filter out those that are irrelevant to the behavior. To this end, Dave was able to present initial work with Willow Garage's PR-2 that showed a basic following system in place.

Tuesday, July 5, 2011 2:32 PM Posted by Gregory Koch

Lab Entry #14 - Interfacing with the SNLP Parser



In order to extract useful information from text received from our Bluetooth microphone, we will need to be able to interface with the Stanford NLP parser. To accomplish this, I have been constructing a Java application this week that will allow us to send text to the parser and receive the data back. We are most interested in part-of-speech (POS) labels and the hierarchical relationships between subphrases, but we may also be interested in the dependencies between words in a particular phrase.

The approach I have selected involves two components: a C++ client module that will be responsible for communicating between Bandit and the NLP parser, and a Java server that will be responsible for listening to incoming messages from the C++ module, using the Stanford NLP API to give those messages to the parser, and sending relevant data back to the C++ module:
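On the C++ side, the client boils down to a plain TCP exchange with the Java server, roughly like the sketch below; the port number and the newline-delimited protocol are assumptions for illustration, not the actual conventions.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Rough sketch of the C++ client: send one phrase to the Java NLP server
// over TCP and read back the raw parse data.
std::string requestParse(const std::string& phrase) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4444);                      // assumed server port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // Java server on the same machine

    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0)
        return "";

    std::string msg = phrase + "\n";                  // assumed newline-delimited protocol
    send(sock, msg.c_str(), msg.size(), 0);

    std::string reply;
    char buf[1024];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) // read until the server closes the connection
        reply.append(buf, static_cast<size_t>(n));

    close(sock);
    return reply;
}
```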

Fortunately, now that I have learned the right API calls to use the SNLP parser, it doesn't seem like it will be too difficult to finish writing this Java application and move on to figuring out how to put the raw output into a good format for the main Bandit code to use.

Saturday, July 2, 2011 11:55 AM Posted by Gregory Koch

Lab Entry #13 - Microphone Resolved, RSS



At last, I was able to resolve my longstanding issue with the microphone. No matter which settings I used, I couldn't get any microphone input into Dragon NaturallySpeaking. Because I was getting no input on either the Windows VM or Ubuntu, I guessed that the problem lay with the microphone settings rather than the virtual machine. It turned out that not only was there a switch on the microphone to begin transmitting that I hadn't noticed before, but I also had to specify which sound card should be used for microphone input via alsamixer on Ubuntu.

With this problem resolved, it didn't take Juan and me more than ten minutes to get Dragon up and running with Bandit and the Win32 application that I had already written to interface with Dragon. Using the microphone, we were able to get my Repetition Game demo working, this time with actual speech rather than text input! This resolved the majority of the problems that I had been having over the past two weeks and made the demo much more attractive, despite a few lingering bugs. As a result, I was able to present it as part of the Interaction Lab demo session on the last day of RSS. The demo went well, and Juan and I were able to identify several areas for future work.

Now that the microphone works, the demo is functional, and RSS is over, we want to work towards integrating the Stanford NLP Parser in order to extract contextual information from input received from the microphone. With this information, we will be able to teach Bandit to respond more intelligently to commands and other input received via the microphone, creating a more natural interaction between human and robot for future work. However, to do so, I will have to install the Stanford NLP Parser and write a Java application to interface between the parser and the existing Bandit code. This is the next phase of my project going into July.

Friday, July 1, 2011 4:01 AM Posted by Gregory Koch

Lab Entry #12 - Interim Report

I have prepared an interim report for my REU project given that I have now been at USC for a month. Outlined in the report are the following topics:

  • Summary of current work
  • Speech control updates
  • TCP application/integration
  • Repetition Game demo
  • Future work

The report can be downloaded here.