Making Robots Smarter: A Dynamical Model for Goal-Directed Behavior
Thao Nguyen ’18, Theresa Law ’18, Eric Aaron (Computer Science) and John Long (Cognitive Science & Biology)
The Dynamical Intention-Hybrid Dynamical Cognitive Agents (DI-HDCA) framework, which combines low-level (reactive) and high-level (deliberative) intelligence in mobile goal-directed robots, has been employed in digital simulations but never demonstrated in physically embodied robots. Our goals were therefore twofold: (1) demonstrate the generality of the DI-HDCA framework by implementing it on two common but very different robot platforms (Arduino, TurtleBot), and (2) use these two different systems to investigate general features of the framework that illuminate principles for modeling intelligence in goal-directed robots. In this framework, cognitive elements (beliefs, desires, and intentions) unite low- and high-level intelligence; these elements influence one another and are updated in real time as the environment changes. Cognitive elements determine which behavior the robot performs at any given moment, enabling robots to adaptively undertake a variety of tasks, including navigation. Navigation, which we model here as a simple target-location task, is fundamental to embodied intelligence and goal-directedness. Both robots navigated successfully; thus, this first demonstration of the DI-HDCA framework in embodied robots serves as an initial proof of concept for the framework in the real world. The DI-HDCA framework supports machine learning and goal-directed behaviors that rely heavily on reactivity, a beneficial trait for agents in dynamically changing environments. Future applications of this model may include technologies such as planetary exploration, smart prosthetics, and service robots.
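To make the behavior-selection idea concrete, the following is a minimal, hypothetical sketch of a DI-HDCA-style control loop: continuous belief and intention activations are updated each cycle from sensing, and the dominant intention gates which behavior runs. The abstract does not give the framework's equations, so the class name, the exponential belief rule, and the first-order relaxation update below are illustrative assumptions, not the authors' model.

```python
import math

# Hypothetical sketch of a DI-HDCA-style agent: cognitive elements (a belief,
# a desire, an intention) are continuous activations that are updated in real
# time and gate which behavior runs. The specific update rules here are
# illustrative placeholders, not the published DI-HDCA model.
class DynamicalAgent:
    def __init__(self, target):
        self.target = target              # (x, y) goal location
        self.pos = [0.0, 0.0]             # robot position
        self.belief_at_target = 0.0       # belief: "I am at the target"
        self.desire_reach_target = 1.0    # desire: fixed drive toward the goal
        self.intention_go = 0.0           # intention: activates go-to-target

    def sense(self):
        # Belief is grounded in real-time sensing: it saturates near the
        # target (the 0.2 length scale is an arbitrary choice for this sketch).
        d = math.dist(self.pos, self.target)
        self.belief_at_target = math.exp(-d / 0.2)

    def update_intentions(self, dt=0.1):
        # Intention relaxes toward the desire, discounted by the belief that
        # the goal is already achieved (a simple first-order dynamical rule).
        drive = self.desire_reach_target * (1.0 - self.belief_at_target)
        self.intention_go += dt * (drive - self.intention_go)

    def act(self, dt=0.1):
        # The dominant intention selects the behavior: here, a single
        # go-to-target behavior that moves at unit speed toward the goal.
        if self.intention_go > 0.5:
            dx = self.target[0] - self.pos[0]
            dy = self.target[1] - self.pos[1]
            norm = math.hypot(dx, dy) or 1.0
            self.pos[0] += dt * dx / norm
            self.pos[1] += dt * dy / norm

    def step(self):
        self.sense()
        self.update_intentions()
        self.act()

agent = DynamicalAgent(target=(2.0, 1.0))
for _ in range(500):
    agent.step()
# The agent halts near (2.0, 1.0) once the at-target belief saturates and
# the go intention decays below its activation threshold.
print(agent.pos)
```

In the full framework, this single intention would compete with others (e.g., obstacle avoidance), with the environment continuously reshaping all of the activations; the sketch shows only the core loop in which cognition, rather than a fixed script, selects the behavior.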