Publication Detail

Maximizing Reward in a Non-Stationary Mobile Robot Environment

Dani Goldberg and Maja J. Matarić

Citation

Dani Goldberg and Maja J. Matarić. "Maximizing Reward in a Non-Stationary Mobile Robot Environment". To appear in a special issue of Autonomous Agents and Multi-Agent Systems, 2002.

Abstract

The ability of a robot to improve its performance on a task can be critical, especially in poorly known and non-stationary environments where the best action or strategy depends on the current state of the environment. In such systems, a good estimate of the current state of the environment is key to achieving high performance, however it is quantified. In this paper, we present an approach to state estimation in poorly known and non-stationary mobile robot environments, focusing on its application to a mine collection scenario where performance is quantified using reward maximization. The approach is based on augmented Markov models (AMMs), a sub-class of semi-Markov processes. We have developed an algorithm for incrementally constructing arbitrary-order AMMs on-line. It is used to capture the interaction dynamics between a robot and its environment in terms of the behavior sequences executed during the performance of a task. For the purposes of reward maximization in a non-stationary environment, multiple AMMs monitor events at different timescales and provide statistics used to select the AMM likely to have a good estimate of the environmental state. AMMs with redundant or outdated information are discarded, while sufficient data are maintained to reduce conformation to noise. This approach has been successfully implemented on a mobile robot performing a mine collection task. In the context of this task, we first present experimental results validating our reward maximization performance criterion. We then incorporate our algorithm for state estimation using multiple AMMs, allowing the robot to select appropriate actions based on the estimated state of the environment. The approach is tested first with a physical robot, in a non-stationary environment with an abrupt change, and then in simulation, in a gradually shifting environment.
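The paper itself is the authoritative description of AMMs and the selection algorithm. As a rough illustration of the two ideas in the abstract, the sketch below shows a first-order Markov model over behavior labels, augmented with transition counts and dwell-time statistics and updated incrementally, plus a bank of models started at different times so that each summarizes a different span of history. All class names, the window length, the spawn/prune schedule, and the likelihood-based selection rule are illustrative assumptions for this page, not the algorithm from the paper (which handles arbitrary-order models).

```python
import math
from collections import defaultdict

class AMM:
    """Minimal first-order AMM sketch: a Markov chain over behavior labels,
    augmented with transition counts and dwell-time statistics, updated
    incrementally as each behavior completes."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # counts[a][b]: a -> b transitions
        self.dwell = defaultdict(list)                       # dwell[b]: observed durations of b
        self.prev = None

    def observe(self, behavior, duration):
        """Fold one (behavior, duration) event into the model."""
        if self.prev is not None:
            self.counts[self.prev][behavior] += 1
        self.dwell[behavior].append(duration)
        self.prev = behavior

    def prob(self, a, b):
        """Estimated transition probability P(b | a)."""
        out = sum(self.counts[a].values())
        return self.counts[a][b] / out if out else 0.0

    def log_likelihood(self, sequence):
        """Log-likelihood of a behavior sequence under this model,
        smoothed so unseen transitions are not impossible."""
        return sum(math.log(self.prob(a, b) + 1e-6)
                   for a, b in zip(sequence, sequence[1:]))

class MultiScaleEstimator:
    """Maintain AMMs started at different times, so each covers a different
    timescale of history. To act, pick the model that best explains the most
    recent events; prune the most outdated model when the bank is full."""

    def __init__(self, spawn_every=50, max_models=5, window=20):
        self.models = [AMM()]
        self.recent = []          # short buffer of recent behavior labels
        self.spawn_every = spawn_every
        self.max_models = max_models
        self.window = window
        self.t = 0

    def observe(self, behavior, duration):
        for m in self.models:
            m.observe(behavior, duration)
        self.recent = (self.recent + [behavior])[-self.window:]
        self.t += 1
        if self.t % self.spawn_every == 0:
            self.models.append(AMM())   # fresh model tracks only recent dynamics
        if len(self.models) > self.max_models:
            self.models.pop(0)          # discard the most outdated model

    def best_model(self):
        """Model whose statistics best fit the recent behavior window."""
        return max(self.models, key=lambda m: m.log_likelihood(self.recent))
```

The point of the periodic spawn in this sketch is that the newest model effectively "forgets" the pre-change environment after an abrupt shift, while older models retain enough data to be statistically stable; balancing those two is the trade-off the abstract alludes to when it mentions discarding outdated models while keeping sufficient data.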

Keywords

Mobile robots, reward maximization, on-line modeling, collection tasks, non-stationary environments

Files

Manuscript (.gz)