Gregory Koch

USC Interaction Lab, REU 2011


Monday, June 27, 2011 7:21 PM Posted by Gregory Koch

Lab Entry #11 - Bandit Code / Repetition Game



I'm currently spending the majority of my time trying to understand all of the different classes incorporated in Bandit so that I can create an actual demo (Repetition Game) using Bandit, rather than just running my tests out of the main function of one file. Up until recently, I had only looked at the Speech, LipSync, TTSLipSync, and Sound classes, but now I need to understand how the entire robot runs from start to finish. What makes my task more difficult is that a number of features that are deployed on the actual robot, such as a Wiimote, a database, and a camera, must be disabled in my code to avoid segmentation faults and other errors when running it on the computer alone. While these problems are not too prominent, the code is fairly easy to break if the modules are edited in the wrong way. In particular, I'm spending my time understanding how Bandit's full FSM works together with the Scheduler class to time the exercise games and incorporate new modules.
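
To illustrate the kind of guarding this requires, here is a minimal sketch (the class and flag names are placeholders, not Bandit's actual code) of wrapping hardware-dependent modules behind flags so the rest of the system still runs on a plain desktop:

#include <iostream>
#include <memory>

// Placeholder stand-ins for hardware-dependent modules.
struct WiimoteModule { void init() { std::cout << "Wiimote ready\n"; } };
struct CameraModule  { void init() { std::cout << "Camera ready\n"; } };

int main() {
    // Disabled when running on a development machine without the hardware.
    const bool enableWiimote = false;
    const bool enableCamera  = false;

    std::unique_ptr<WiimoteModule> wiimote;
    std::unique_ptr<CameraModule>  camera;
    if (enableWiimote) { wiimote.reset(new WiimoteModule()); wiimote->init(); }
    if (enableCamera)  { camera.reset(new CameraModule());   camera->init(); }

    if (!wiimote && !camera)
        std::cout << "Running in speech-only mode (no hardware attached)." << std::endl;
    return 0;
}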

I still can't get the microphone to work, no matter what I try. For now, I'm just making everything work with raw text input and avoiding using Dragon or the microphone on the Windows VM despite the fact that all of the necessary software is already installed. If I can get the demo to work with text, I will eventually be able to get it to work with a microphone. Although I was initially pessimistic about the external microphone working properly with the virtual machine, I don't think it is a virtual machine problem anymore, but it may be a permissions issue or some obscure setting that I'm not yet aware of.

Tuesday, June 21, 2011 8:08 PM Posted by Gregory Koch

Lab Entry #10 - Looking Forward / Various Updates



I'm taking the day off today, but I'm collecting my thoughts to determine what I should be working on next. A number of new tasks related to implementing a more elaborate speech system for Bandit have emerged in the last week.

  • Website update - I've now fully implemented my project website using HTML/CSS. I still have a few journal entries to upload, but I finished my updated resume today, and hopefully should have my project details and biography up by next week. If I can finish updating my website by the end of June, that would be ideal.
  • Speech controls/timings - Additional commands have been finalized and tested to allow for more dynamic control of Bandit's speech functions, as previously described. I finished implementing the second algorithm proposed, determining the robot's stopping point within the text. The stopText, resumeText, and restartText functions can now be executed from anywhere using a single call.

The basic functionality is definitely complete, but a few errors and enhancements remain to be dealt with, namely:

  • Better contextual awareness - Bandit should always be aware of the full context of what it is saying. Even if it is stopped and has to resume from an arbitrary stopping point, Bandit should still know the context of what it was saying, should it be asked to clarify its speech (restartText command).
  • Ability to handle multiple speech commands from any given state - Currently, one sayText command can be fluidly issued in any state of Bandit's FSM in the current demo that I have running ("Great Speeches" demo). When input text is received, it is parsed even if Bandit is already speaking, so further commands to execute can be stored in a queue. However, the stopping algorithms are unable to determine which sayText command is currently being executed. Thus, if several sayText commands are active (in the queue), the correct stopping point cannot be determined (see the sketch after this list).
  • Further developments - Ultimately, while it is acceptable at the moment to match input text directly to commands, to handle the speech input intelligently, the robot must be capable of handling user commands much more flexibly. Currently, with the Wiimote, there is a limited set of commands that can be issued due to a finite set of buttons that can be pressed. Thus, parsing Wiimote input is far less complicated than parsing speech input. I feel the next stage of the project will be to develop a system to this end.
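
As a minimal sketch of how the queueing problem above could be handled (the class and method names here are hypothetical, not Bandit's actual Speech code), pending sayText requests could be queued while remembering which request is currently active, so that a stop command knows which text to compute a stopping point in:

#include <iostream>
#include <queue>
#include <string>

// Minimal sketch of a speech queue that remembers the active request.
class SpeechQueue {
public:
    void sayText(const std::string& text) {
        pending_.push(text);
        if (current_.empty()) startNext();
    }
    void stopText() {
        if (!current_.empty())
            std::cout << "Stopping inside: \"" << current_ << "\"\n";
        std::queue<std::string>().swap(pending_);  // drop queued requests
        current_.clear();
    }
    void finishedCurrent() { current_.clear(); startNext(); }

private:
    void startNext() {
        if (pending_.empty()) return;
        current_ = pending_.front();
        pending_.pop();
        std::cout << "Speaking: \"" << current_ << "\"\n";
    }
    std::string current_;               // the text a stop/resume should refer to
    std::queue<std::string> pending_;   // further sayText commands
};

int main() {
    SpeechQueue q;
    q.sayText("Four score and seven years ago...");
    q.sayText("I have a dream...");
    q.stopText();  // stops the first speech and clears the second
    return 0;
}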

One of my highest remaining priorities is to get my Bluetooth microphone working so that I can do a full demo with Bandit. The idea that I have in mind for the demo is using a microphone with Dragon NaturallySpeaking to send text over to Bandit, which can then be handled appropriately. The object responsible for parsing the input text received from the TCP server would separate incoming text into two categories: general audio and commands. Recognized commands would immediately be executed, whereas general audio would be put in a queue for later handling by Bandit's other modules. Using this approach, a human player could interrupt Bandit, but Bandit could also freely ask the player for his name at the beginning of a session, for example. Probably the best idea would be to develop a sort of "Repetition Game", where Bandit simply repeats all of the user input for an allotted period of time until the session is over. This would not necessarily be the most realistic demonstration of how the speech-to-text functionality will be used, but it would be a good test of all of the features that have been implemented so far.
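
As a rough sketch of that parsing step (the command set and structure are assumptions, not the final design), incoming lines from the TCP server could be split into commands and general audio like this:

#include <iostream>
#include <queue>
#include <set>
#include <string>

// Hypothetical parser: recognized commands run immediately,
// everything else is queued as general audio for later handling.
int main() {
    const std::set<std::string> commands = {"stop", "resume", "restart"};
    std::queue<std::string> generalAudio;

    std::string line;
    while (std::getline(std::cin, line)) {
        if (commands.count(line)) {
            std::cout << "Executing command: " << line << "\n";
        } else {
            generalAudio.push(line);  // e.g. text for Bandit to repeat back
            std::cout << "Queued for repetition: " << line << "\n";
        }
    }
    return 0;
}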

Even though I don't quite have the microphone working yet, I plan to begin reviewing Bandit's code and writing the code for this demo so that I can have it online by the end of June.

Friday, June 17, 2011 9:03 AM Posted by Gregory Koch

Lab Entry #9 - Win32 Application

With the TCP Client/Server working on Linux, I decided that I needed to get it working on Windows and, further, make it work within a basic GUI program with a text area in order to eventually give Dragon NaturallySpeaking an interface for text output from its speech-to-text engine. Although many great GUI libraries exist for C++, I figured that if I only needed a basic TCP client with a text area, the easiest thing to do would be to simply include windows.h and write a Win32 application from scratch.
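
The skeleton of such a program looks roughly like the following sketch (window and class names are placeholders, and this is not the exact code I wrote):

#include <windows.h>

// Minimal sketch of a Win32 window with a multiline text area (EDIT control).
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_DESTROY: PostQuitMessage(0); return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow) {
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("SpeechClientWindow");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("SpeechClientWindow"), TEXT("Speech Client"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             480, 320, NULL, NULL, hInst, NULL);

    // Child EDIT control acts as the text area that dictated text lands in.
    CreateWindow(TEXT("EDIT"), TEXT(""),
                 WS_CHILD | WS_VISIBLE | ES_MULTILINE | WS_VSCROLL,
                 10, 10, 440, 240, hwnd, NULL, hInst, NULL);

    ShowWindow(hwnd, nCmdShow);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}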

This was not difficult to do, and once I finished my GUI, I merged it with the cross-platform TCPClient.cpp file that I had previously tested on Ubuntu. Because this project requires cross-platform development, I had elected to use Boost.asio rather than the native socket libraries.

The next thing I did was design the overall architecture of the system to determine how I wanted to integrate all of my existing code. Although all code so far works, I want to be able to actually make it run with Bandit's code so that my implementation is real, rather than a mere demonstration. The UML diagram for the most important classes follows:

UML Class Diagram for the Bandit Speech-to-Text (SST) System

With this working, I hope to get the microphone working so that I can give text to Bandit via live speech input rather than inserting strings into the text field of my Windows application.

Thursday, June 16, 2011 4:23 PM Posted by Gregory Koch

Lab Entry #8 - TCP Client/Server Model



The next thing that we would like to do is implement a speech-to-text system that links a microphone with Bandit to allow Bandit to respond to microphone input. Because we want to use Dragon NaturallySpeaking, a Windows application, as our speech-to-text engine, I installed Windows XP on VirtualBox, a virtualization solution, along with the relevant USB updates for VirtualBox to allow us to use our Plantronics Bluetooth USB headset.

To connect the Windows VM to Bandit, I developed a TCP server using Boost.asio on the Linux side (integrated directly into Bandit on a separate thread). I then developed a TCP client to run on Windows. To test the TCP connection using the client-server model I developed, I made the server function as an echo server. This worked correctly, as desired.
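
A minimal synchronous echo server in Boost.asio looks roughly like the sketch below (the port number is arbitrary, and the real server runs on its own thread inside Bandit rather than in main):

#include <boost/asio.hpp>
#include <iostream>

// Minimal blocking echo server sketch using Boost.asio.
using boost::asio::ip::tcp;

int main() {
    try {
        boost::asio::io_service io_service;
        tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 9000));
        for (;;) {
            tcp::socket socket(io_service);
            acceptor.accept(socket);                       // wait for the Windows client
            for (;;) {
                char data[1024];
                boost::system::error_code error;
                size_t len = socket.read_some(boost::asio::buffer(data), error);
                if (error == boost::asio::error::eof) break;   // client disconnected
                if (error) throw boost::system::system_error(error);
                boost::asio::write(socket, boost::asio::buffer(data, len));  // echo back
            }
        }
    } catch (std::exception& e) {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}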

TCP Echo Client/Server Demo on Ubuntu

Tuesday, June 14, 2011 7:03 PM Posted by Gregory Koch

Lab Entry #7 - Lab Meeting 1

I attended my first official lab meeting today as a member of the USC Interaction Lab. Ross Mead presented some of his recent work on proxemics in HRI, giving a brief overview of the field of proxemics and some of the historical principles of sensory information processing (visual code, olfactory code, thermal code, etc.). He also detailed an experiment he ran using one of the Bandit robots in the lab with a series of human subjects, who participated in the study along with a conspiring participant known to the experimenter but not to any of the subjects. The study was designed to examine spacing between humans and robots during the course of an interaction. One of the interesting results is that the two humans assumed a relatively equilateral triangle formation with the robot during the majority of the interaction, despite the fact that the robot did not speak or move.

We also had lunch during the meeting, and covered a few lab administration matters pertaining to the upcoming Robotics: Science and Systems (RSS) conference being held here at USC during the end of June.

Monday, June 13, 2011 7:32 PM Posted by Gregory Koch

Lab Entry #6 - Great Speeches Demo



Juan told me that the next logical step in the project would be to create a larger demo, combining interactivity with the speech control features that I just finished implementing during the last week. I decided that I would create a "Great Speeches" demo, taking excerpts from four great speeches in history and adding the functionality for Bandit to deliver each of them. In the demo, users would be able to interact with the robot via a series of commands to exhibit the enhancements that I made to Bandit's speech module.

I created a new ROS package directory and copied all of my current code into it, then altered the main function to create my demo. The demo is fairly simple, initializing Bandit with a welcome message and instructions, then prompting the user to choose a speech by entering a text command (e.g., "lincoln", "king", "kennedy"). Bandit then uses an FSM to transition into the appropriate speech. At any time during the demo, if an invalid command is received, Bandit will provide an error message, then transition back into the instructions state if desired. This provides a more realistic view of the efficacy of speech controls, using an FSM to replicate Bandit's actual functionality.

FSM for "Great Speeches" Demo

Friday, June 10, 2011 3:26 PM Posted by Gregory Koch

Lab Entry #5 - Website Online

Because I finished my first demo earlier this week with the new speech controls and position estimation task complete, I thought I'd start on my REU project website. I spent the majority of the last two days rehashing what I remember of HTML and CSS in order to construct my first attempt at a project website. I was given an account on the USC web server, Asimov (robotics.usc.edu), so I could SSH into the server and upload whatever content I wanted to the domain I was given. I'm now in the process of back uploading all of my journal entries and content that I had already written, although this could take a week or two as I continue to update my website design and work on the project. I'll likely be changing the theme a little bit; the current grey seems too dull, so I might go for a nice red or blue to bring some color into the website.

Monday, June 6, 2011 5:05 PM Posted by Gregory Koch

Lab Entry #4 - Estimating Position, Update



I ended up going with the second algorithm after not being able to come up with a better solution, especially because the second algorithm seemed to approximate the correct stopping character to within a margin of about 3 to 4 characters. Also, I realized once I began testing that I didn't want Bandit to begin speaking mid-word, which would be the result of using the exact character output by my getSpeechPosition helper method for resumeText. So, I decided that once I got the character from the algorithm, I would scroll backwards in the text via a loop until the previous word boundary was reached. This way, the substring generated for resuming text would only contain whole words. This makes sense, because typically, humans who are interrupted mid-sentence restart not from the syllable they are interrupted on, but from the word (or sometimes from the beginning of the sentence, which is why I also implemented restartText).
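
A minimal sketch of the word-boundary backoff (getSpeechPosition itself is omitted; the estimated index here is hard-coded for illustration):

#include <iostream>
#include <string>

// Back up from an estimated stopping character to the start of the word,
// so the resumed text begins on a whole word.
size_t backUpToWordStart(const std::string& text, size_t estimatedPos) {
    if (!text.empty() && estimatedPos >= text.size()) estimatedPos = text.size() - 1;
    // Scroll backwards until we pass the whitespace before the current word.
    while (estimatedPos > 0 && text[estimatedPos - 1] != ' ')
        --estimatedPos;
    return estimatedPos;
}

int main() {
    std::string text = "Four score and seven years ago our fathers brought forth";
    size_t est = 23;  // pretend the timing estimate landed mid-word (inside "years")
    size_t start = backUpToWordStart(text, est);
    std::cout << "Resume from: \"" << text.substr(start) << "\"" << std::endl;
    return 0;
}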

On this note, I also finished writing the restartText function, which was somewhat easier, considering that all I had to do was figure out which sentence in the text was currently being spoken, then generate substrings and append together text accordingly. I already knew how to do this from my experience writing the resumeText function, so this was trivial.

To test the stopping, resuming, and restarting functionalities, I made a small demo. In the demo, Bandit is first issued a command to speak the text of the Gettysburg Address. While Bandit is giving the speech, the stop, resume, and restart commands can be issued.

Friday, June 3, 2011 12:32 PM Posted by Gregory Koch

Lab Entry #3 - Estimating Position

With stopText in place, my current goal is to figure out an efficient algorithm to determine the correct stopping point for Bandit's speech module. That is, when Bandit is stopped, how can we determine the approximate character in the string being spoken at which the .wav file cut off? If Bandit is to resume what he was saying appropriately and naturally from mid-sentence, a good method will have to be implemented.

Currently, I have two possibilities that I have been testing:

Algorithm #1 - Get the percentage of the .wav file played, and multiply that percentage by the total length of the string that Bandit was asked to read. This works by computing numOfCharsInString*(stopTime/totalTime). To get this information, I implemented a function called getSoundDuration, which takes a pointer to a SoundFile struct as a parameter (SoundFile structs use the SDL_Mixer library). This returns the total time in seconds. However, because SDL_Mixer does not support tracking elapsed time of an audio chunk, I implemented a small timer using the Utils class already provided in the Bandit shared libraries (jpshared).

The result of this method will be a double, which I can round to the nearest int in order to use with std::string::at, then generate a substring from there to determine which text Bandit should speak upon restarting.
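
A minimal sketch of the calculation (with hard-coded timings standing in for getSoundDuration and the elapsed-time timer):

#include <cmath>
#include <iostream>
#include <string>

// Sketch of Algorithm #1: estimate the stopping character from the fraction
// of the .wav file that has played.
int main() {
    std::string text = "Four score and seven years ago our fathers brought forth...";
    double totalTime = 4.8;   // seconds, as would be returned by getSoundDuration
    double stopTime  = 2.3;   // seconds elapsed when stopText was issued

    int stopChar = static_cast<int>(
        std::floor(text.length() * (stopTime / totalTime) + 0.5));  // round to nearest int
    std::cout << "Estimated stop near character " << stopChar
              << ": \"" << text.substr(stopChar) << "\"" << std::endl;
    return 0;
}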

  • Initial results: The algorithm worked surprisingly well on the first test runs, identifying the word stopped on to within five characters, even with strings spanning several sentences. However, when long passages were presented, inaccuracies increased by up to two dozen characters, an unacceptable margin. I conclude that this algorithm provides a good general approximation, but falls prone to error from the pauses between individual sentences over extended durations of speaking.

Algorithm #2 - Get the percentage of the .wav file played, and multiply that percentage by the length of the current sentence that Bandit was asked to read. This works by computing numOfCharsInCurrentString*(stopTime/totalTime), using the same getSoundDuration function and elapsed-time timer described for Algorithm #1. The current sentence can be determined from the LipSync lip timing command array.

Because lip movements are pregenerated and stored in an array, it is easy to read this array to figure out which sentence in the current passage Bandit is reading. Lip movements are stored as positive or negative integers, depending on whether the mouth is open or closed. Because the mouth only completely closes between sentences, these lip movements provide an easy way to determine which sentence is being read. From there, it is not hard to generate a substring from the original string containing only the current sentence. By taking the length of this substring, the number of characters in the current sentence can be determined. This should be at least a slightly more accurate method than Algorithm #1, because the error does not grow with the length of the full string, but remains relatively constant no matter which string is being read.

As with Algorithm #1, the result will be a double, which I can round to the nearest int to use with std::string::at, then generate a substring from there to determine which text Bandit should speak upon restarting.
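
A minimal sketch of the sentence lookup (the sign convention for open/closed lip commands is my assumption, not necessarily the actual encoding):

#include <iostream>
#include <vector>

// Given the lip timing command array and the index of the lip command being
// executed when speech stopped, count how many full mouth closures have
// occurred to find the current sentence number.
int sentenceIndex(const std::vector<int>& lipCommands, size_t stopIndex) {
    int sentence = 0;
    for (size_t i = 0; i < stopIndex && i < lipCommands.size(); ++i)
        if (lipCommands[i] < 0)   // assume negative = mouth fully closed = sentence break
            ++sentence;
    return sentence;
}

int main() {
    // Two sentences: the negative entry marks the closure between them.
    std::vector<int> lips = {3, 5, 2, 4, -1, 6, 2, 5, 3};
    std::cout << "Stopped during sentence " << sentenceIndex(lips, 7) << std::endl;  // prints 1
    return 0;
}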


From these results, it seems like the second algorithm is the correct approach, but I will spend another day or so seeing if I can think of any alternatives before I commit to using the algorithm in my resumeText function.

Thursday, June 2, 2011 4:05 PM Posted by Gregory Koch

Lab Entry #2 - Speech Controls

It turns out that the first thing I need to do is to enhance the speech controls for Bandit. Currently, Bandit is set up so that once he begins talking, he must complete whatever he is saying. This is undesirable because there is no way to interrupt or redirect him mid-speech. We want better functions that allow control over this sort of speech behavior. In particular, we want to be able to stop Bandit no matter what he is saying, restart from the beginning of the current phrase, and, if Bandit is paused, resume from the best word possible.

I've already been able to put in place a rudimentary system for stopping Bandit by adding a stopText function to the Speech class, along with a boolean and corresponding accessor/mutators in the LipSync object. Because the LipSync object is responsible for coordinating the lips with the .wav file produced by Neospeech, Bandit's text-to-speech engine, it is necessary to stop speech from within the lip movement loop to ensure the .wav file stops at the exact same time the lips stop moving. Hence, all the stopText function has to do is acquire the mutex for the Speech thread, then flip the boolean that I created. When the boolean is flipped, a function in the Sound object that I created, stopSound, will pause whatever .wav file is active. Finally, the method clears the text request queue of all pending sayText commands, because we do not want Bandit to start saying further text it has planned to say if it has just been commanded to stop its speech.

This allows for a fairly convenient implementation of stopText, requiring just one call to the function from any class that has a pointer to the Speech object. Further, the context of what Bandit was saying is always preserved, because the most recent text that Bandit was vocalizing is always saved in a character array that is a member variable of Speech.
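
A rough sketch of the stopText flow (the member names approximate, rather than reproduce, the real Speech, LipSync, and Sound classes):

#include <iostream>
#include <mutex>
#include <queue>
#include <string>

// Sketch: stopText acquires the Speech mutex, flips the stop flag that the
// LipSync loop watches (which in turn calls Sound::stopSound on the .wav),
// and clears any pending sayText requests.
class Speech {
public:
    void stopText() {
        std::lock_guard<std::mutex> lock(speechMutex_);  // acquire the Speech thread's mutex
        stopRequested_ = true;                           // checked inside the lip movement loop
        std::queue<std::string>().swap(pendingText_);    // clear pending sayText requests
        std::cout << "Speech stopped; pending text cleared." << std::endl;
    }
private:
    std::mutex speechMutex_;
    bool stopRequested_ = false;
    std::queue<std::string> pendingText_;
};

int main() {
    Speech speech;
    speech.stopText();
    return 0;
}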

Next, I will work on functions for restarting and resuming speech. This will probably require reading more source code before I can come up with a compact method.

Commands for using rViz for Bandit simulator:
roslaunch robot_exercise bandit_simulation.launch
rosrun rwj_exercise rwj_exercise -b 0

Wednesday, June 1, 2011 7:55 PM Posted by Gregory Koch

Lab Entry #1 - Beginnings

After arriving at the USC Interaction Lab at RTH, it didn't take long to find my way around the lab and get my workspace set up. After meeting with Prof. Sukhatme, Prof. Golubchik, and the other REU students at 10am, and going with them to lunch, I spent the remainder of the day installing software on my laptop with Juan, the graduate student who is my project mentor. Although most of the day was spent doing installations, I did make some progress starting on the project I'll be doing this summer. Juan told me that I will be adjusting the existing implementation of Bandit, one of the robots here in the USC Interaction Lab, to improve the speech controls and implement a speech-to-text system. To this end, I've begun reading Bandit's code in order to understand the Speech modules. Most of the code is in C++ and implemented using ROS (Robot Operating System), a robot software framework developed by Willow Garage to promote a unified approach to robot development. Although my project doesn't involve using ROS as much as the other students' projects here for the REU, I'll still have to work through the first few beginner tutorials for ROS to get up to speed. Further, I'll have to get used to compiling code with g++ and writing code in vim or gedit, without using an IDE. I haven't done much of this before, so it'll be a new experience for me.