[youtube width="640" height="400"]http://www.youtube.com/watch?v=GGlPlEEmZlE[/youtube]
This issue, we ask "5 Questions" of Eric Aaron, assistant professor of computer science. His article, "Action Selection and Task Sequence Learning for Hybrid Dynamical Cognitive Agents," was recently published in Robotics and Autonomous Systems. Aaron holds a bachelor of arts in math from Princeton University, and a master of science and a Ph.D. in computer science from Cornell University.
Q: How did you become interested in computer science, and specifically artificial intelligence?
A: I’ve always been interested in logical problem solving and how people think. As an undergraduate, I majored in mathematics and took courses in psychology and philosophy, but each of those was only a part of the big picture that really interested me. As I studied more, I found that computer science, and especially artificial intelligence (AI), incorporated parts of all of these perspectives in a single, mind-openingly fascinating and mind-blowingly enormous area of study.
It’s especially rewarding to work on how AI can help us understand intelligence and make people’s lives better. Computers and robots are amazing tools, but they need people to tell them what to do; AI can make these tools work better, so people can spend more time on the important things–the things that robots or intelligent virtual agents can’t do.
Q: What are “intelligent virtual agents” and what are they used for?
A: Loosely, in the AI sense, an agent is something that we think of as perceiving and acting intelligently on its own, such as a robot vacuum cleaner in your house. An intelligent virtual agent (IVA) is one particular kind of agent: a computer-controlled animated character in a computer-generated graphical environment. IVAs can be teachers in intelligent tutoring systems or teammates and adversaries in video games, among other applications.
Q: Is work on intelligent virtual agents closely connected to work in robotics?
A: The similarities and differences between those fields are sometimes subtle, but they can be deeply important. For example, robots have motors that can break and bodies that can be damaged by collisions, but at least they obey natural laws of physics! In contrast, IVAs have no material parts to break, but because they’re not naturally physical, they don’t obey natural physics, and it can take extra work to make IVAs believable and engaging to people. For my research interests, however, the similarities between the fields matter more than the differences. Both robots and IVAs move through their worlds, performing tasks and responding to their environments, so research into intelligent, collision-free navigation or methods for re-planning task sequences is widely applicable.
Q: What was the focus of your recent article published in Robotics and Autonomous Systems?
A: Basically, it’s about helping robots manage their schedules more effectively without people having to tell them everything. Although the details are technical, the underlying ideas and inspirations come from everyday life. Think about the schedule or plan you intend to follow at the start of your day, maybe a sequence of tasks like eat breakfast, drive to work, go to meeting No. 1, go to meeting No. 2 (etc.), drive home, clean house, and so on. You might start your day planning to do all of them, but as the day goes on, there might be delays or other unexpected occurrences, and you might need to change your plan. Over several days or weeks, you might do this repeatedly, learning better and better strategies for adapting your schedule. That’s roughly what the article is about: my new framework by which service robots, such as robot couriers in offices, could do something similar–intelligently change plans and learn to improve over time.
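The schedule-adaptation idea described above can be sketched in a few lines of Python. This is only a toy illustration of replanning under delays, not the hybrid dynamical framework from the article; the courier tasks, durations, priorities, and delay model here are all invented for the example:

```python
# Toy sketch of plan adaptation: an agent starts its "day" with a task
# sequence, and whenever a delay makes the rest of the plan infeasible,
# it drops the lowest-priority remaining tasks to meet its deadline.
# (Illustrative only; not the framework described in the article.)

def run_day(plan, deadline, delays):
    """Execute tasks in order, replanning when the remaining plan won't fit.

    plan: list of (name, duration, priority) tuples
    deadline: total time budget for the day
    delays: dict mapping task name -> unexpected extra time
    """
    t, done = 0.0, []
    remaining = list(plan)
    while remaining:
        name, duration, _priority = remaining.pop(0)
        t += duration + delays.get(name, 0.0)  # unexpected delay, if any
        done.append(name)
        # Replan: if the remaining tasks can't all finish by the deadline,
        # drop the lowest-priority ones until the plan fits again.
        needed = sum(d for _, d, _ in remaining)
        while remaining and t + needed > deadline:
            drop = min(remaining, key=lambda task: task[2])
            remaining.remove(drop)
            needed -= drop[1]
    return done

plan = [("deliver mail", 2.0, 3), ("recharge", 1.0, 5),
        ("tidy lab", 2.0, 1), ("fetch coffee", 1.0, 2)]
print(run_day(plan, deadline=6.0, delays={"deliver mail": 1.5}))
# prints ['deliver mail', 'recharge', 'fetch coffee']  (low-priority "tidy lab" dropped)
```

In this sketch the agent only reacts greedily to one day's delays; the learning aspect the article describes — improving adaptation strategies over many days — would sit on top of a loop like this.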
Q: Other than your primary research, what else are you working on?
A: I’m fortunate to be working with colleagues from various areas on some great projects. For instance, Danny Krizanc (professor of computer science and associate professor of environmental sciences) and I are working on map visitation problems, studying how robot teams might efficiently visit key locations in their environment. Barbara Juhasz (assistant professor of psychology and neuroscience and behavior) and I are looking into relationships between visual attention and logical inference by studying people’s eye movements. I’m also working with colleagues from other institutions on a computer simulation of tumor development and treatment, part of work to better understand cancer and the effectiveness of possible courses of treatment.
On some level, topics such as robotics, visual attention, and tumor modeling are very different, but my work is on something shared by all of them: processes that intelligently adapt to information. That’s one of the things that make computer science a great area for study–because computational processes are so broadly important, computer scientists have a diverse range of interesting problems to solve and mental puzzles to play with!