AAAI-2002 Robot Competition and Exhibition
Registered Teams

Following are the teams that have registered for the 2002 AAAI Robot Competition and Exhibition. All information was provided by the entrants.

[ Robot Challenge | Robot Exhibition | Robot Host | Robot Rescue ]



Robot Challenge

CoWorker
iRobot Corporation

Mark Dockser and Jim Allard

The CoWorker robot is an Internet-controlled, wireless, mobile, remote telepresence platform. CoWorker can be accessed from any PC web browser with a high-speed connection (and a secure ID and password). The user-friendly interface provides control over where the CoWorker goes, what it sees, and what it hears, and provides an interface for speaking. There is even a laser pointer so that the user can highlight what she is referring to at the robot's location. The platform was designed with many available ports (power, serial, PCMCIA) for incorporating additional hardware, including sensors and/or additional cameras.

Our vision for the CoWorker is to meet the needs of industrial users for: 1) remote expert applications; 2) security; and 3) videoconferencing anywhere. By deploying CoWorkers, customers can dramatically reduce travel costs, allow for collaboration anywhere in a company's wireless network (including the shop floor, cafeteria or any other location), and improve worker safety (by deploying a CoWorker rather than a human to potentially hazardous situations). CoWorker robots are currently in beta applications with a number of Fortune 500 companies.


GRACE
Carnegie Mellon: Reid Simmons, Greg Armstrong, Allison Bruce, Dani Goldberg, Adam Goode, Michael Montemerlo, Illah Nourbakhsh, Nicholas Roy, Brennan Sellner, David Silver, Chris Urmson; Naval Research Laboratory: Alan Schultz, Myriam Abramson, William Adams, Amin Atrash, Magda Bugajska, Michael Coblenz, Dennis Perzanowski; Metrica: David Kortenkamp, Bryn Wolfe; Northwestern: Ian Horswill, Robert Zubek; Swarthmore: Bruce Maxwell

GRACE (Graduate Robot Attending ConferencE) is a multi-institutional, cooperative effort consisting of Carnegie Mellon University, the Naval Research Laboratory, Metrica, Northwestern University, and Swarthmore College. This year's goal is to integrate software from the various institutions onto a common hardware platform and attempt to do the complete AAAI Robot Challenge task autonomously, from beginning to end. The focus will be on multi-modal human-robot interaction (speech and gesture), human-robot social interaction, task-level control in the face of a dynamic and uncertain environment, map-based navigation, and vision-based interaction.

Interacting naturally with humans, GRACE will find its way from the convention entrance to the registration area. It will query bystanders for directions to the registration desk and navigate there based on those directions. Along the way, it will interact with other conferees and will ride in the elevator, using an electronic altimeter to determine when it is on the right floor. It will use color vision to find the registration sign, and use laser and stereo vision to queue itself and wait in line. It will interact with the volunteer at the registration booth, and use map-based navigation to travel to the Exhibition Hall. Finally, it will present a talk about itself at a time and place designated in the Conference Program.


Leo and Erik
MIT

John Leonard

Our research addresses the problem of concurrent mapping and localization (CML) for autonomous mobile robots. The problem of CML is stated as follows: starting from an initial position, a mobile robot travels through a sequence of positions and obtains a set of sensor measurements at each position. The goal is for the mobile robot to process the sensor data to produce an estimate of its position while concurrently building a map of the environment. While the problem of CML is deceptively easy to state, it presents many theoretical challenges. The problem is also of great practical importance; if a robust, general-purpose solution to CML can be found, then many new applications of mobile robotics will become possible. During the robot challenge, we will attempt a demonstration of an algorithm for real-time large-scale CML using multiple submaps.
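
One classical estimator for CML is an extended Kalman filter over the joint robot-and-landmark state. The sketch below shows a toy 2D version of that idea; the state layout, noise models, and linear measurement function are illustrative assumptions, not the team's actual real-time multiple-submap algorithm.

    # Toy EKF step for concurrent mapping and localization (CML).
    # State x = [robot_x, robot_y, lm1_x, lm1_y, ...]; the robot moves by
    # odometry u and measures the (x, y) offset to a known-index landmark.
    import numpy as np

    def predict(x, P, u, Q):
        """Move the robot by odometry u = (dx, dy); landmarks are static."""
        x = x.copy()
        x[0:2] += u
        P = P.copy()
        P[0:2, 0:2] += Q              # only the robot block gains motion noise
        return x, P

    def update(x, P, z, lm, R):
        """Fuse one (x, y) offset measurement z of landmark number lm."""
        i = 2 + 2 * lm
        H = np.zeros((2, len(x)))
        H[:, 0:2] = -np.eye(2)        # measurement model: landmark - robot
        H[:, i:i + 2] = np.eye(2)
        innovation = z - (x[i:i + 2] - x[0:2])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ innovation, (np.eye(len(x)) - K @ H) @ P

The hard parts of CML are exactly what this sketch glosses over: data association, long-term consistency, and the quadratic growth of the covariance matrix, which is what submap approaches address.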



Robot Exhibition

ArmHandOne
Alcor Group, Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza"

M. Cialente, A. Finzi, I. Mentuccia, F. Pirri, M. Pirrone, M. Romano, F. Savelli, K. Vona

Our exhibition is the following. There is a re-configurable maze, made of panels that can be suitably arranged. Inside the maze we can position several road signs indicating whether a road is one-way or no-entry, arrows pointing toward the exit, and so on. The road signs can be placed anywhere in the maze; that is, the agent's performance should not depend on where the signs are. Furthermore, there is a place in the maze where we can locate a treasure, consisting of a set of colored blocks suitably arranged, e.g., forming towers. The task for the agent is to enter the maze, find a particular block (e.g., the red block), pick it up (to achieve this the robot might need to move many other blocks), head toward the exit, and finally leave the maze.

The robot is named ArmHandOne (pronounced "Armandone"), weighs about 4 kg, and stands 40 cm tall. It is equipped with a grabber arm, a pan-tilt binocular head with two cameras, and other sensors. Wireless control has a range of about 1 km. The novelty of our approach lies mainly in the cognitive architecture we have been building. The architecture is defined on three levels: 1. the cognitive level, which monitors sensing and high-level control actions; 2. the global level, which manages the choice of tasks and the control of actions; and 3. the reactive level, which manages navigation and localization.
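
A hypothetical sketch of how such a three-level architecture might be wired together is given below; the class and method names, and the toy decision rules, are our illustrative assumptions, not ArmHandOne's actual code.

    # Sketch of a three-level control architecture: the reactive level runs
    # every cycle, the global level selects tasks, and the cognitive level
    # monitors sensing and can override high-level decisions.
    class ReactiveLevel:
        def step(self, sensors):
            # navigation/localization reflexes, e.g. stop before a panel
            return "stop" if sensors.get("obstacle") else "go"

    class GlobalLevel:
        def choose_task(self, world_model):
            # task selection: follow a road sign if one is visible
            return "follow_sign" if world_model.get("sign_visible") else "explore"

    class CognitiveLevel:
        def monitor(self, task, sensors):
            # high-level oversight: re-plan when percepts contradict the
            # current task (e.g. a no-entry sign on the chosen corridor)
            return "replan" if sensors.get("no_entry") else task

    def control_cycle(cog, glob, react, sensors, world_model):
        motion = react.step(sensors)            # fastest loop
        task = glob.choose_task(world_model)    # intermediate loop
        task = cog.monitor(task, sensors)       # slowest, highest authority
        return task, motion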


Blue Swarm 3
Utah State University

Dan Stormont

Blue Swarm 3 is the next generation of Blue Swarm robots. The Blue Swarm 3 will be built to compete in the Urban Search and Rescue competition in 2003. The current plan is to build robust, legged robots that communicate with each other and with a handheld terminal (PalmPilot) using IR links. The exhibit will demonstrate some prototypes of the robots that will be developed for Blue Swarm 3.


CONRO
USC/Information Sciences Institute

Wei-Min Shen and Behnam Salemi

The CONRO Project has a goal of providing the Warfighter with a miniature reconfigurable robot that can be tasked to perform reconnaissance and search and identification tasks in urban, seashore and other field environments. CONRO will be miniature and is to be made from identical modules that can be programmed to alter its topology in order to respond to environmental challenges such as obstacles. The base topology is simply connected, as in a snake, but the system can reconfigure itself in order to grow a set of legs or other specialized appendages. Each module will consist of a CPU, some memory, a battery, and a micro-motor plus a variety of other sensors and functionality, including vision and wireless connection and docking sensors. Major challenges include packaging, power and cooling as well as the major issue of programming and program control.


Crystal robots
Dartmouth Robotics Laboratory

Robert Fitch and Daniela Rus

A robot designed for a single purpose can perform some specific task very well, but it will perform poorly on a different task or in a different environment. This is acceptable if the environment is structured; if the task is in an unknown environment, however, a robot with the ability to change its shape to suit the environment and the required functionality will be more likely to succeed than a fixed-architecture robot. We wish to create more versatile robots by using self-reconfiguration: hundreds of small modules will autonomously organize and reorganize as geometric structures to best fit the terrain on which the robot has to move, the shape of the object the robot has to manipulate, or the sensing needs of the given task. For example, the robot could synthesize a snake shape to travel through a narrow tunnel, and then morph into a six-legged insect to navigate rough terrain upon exit.

Self-reconfiguring robots are well-suited for tasks in hazardous and remote environments, especially when the environmental model and the task specifications are uncertain. A collection of simple, modular robots endowed with self-reconfiguration capabilities can implement "water-flow"-like locomotion gaits, which allow the robots to move by conforming to the shape of the terrain.

We have designed and built the Crystal robot, which is capable of autonomous shape changing, locomotion by self-reconfiguration, and self-replication of a large robot into smaller robots with the same functionality.


Identity Emulation (IE), Facial Expression Robot
University of Texas at Dallas

David Hanson, Marshall Thompson, Giovanni Pioggia

Our facial expression robot uses biomimetic structures, aesthetic design principles, and recent breakthroughs in elastomer materials science to enact a sizable range of natural, humanlike facial expressions. This application of robotics will rise in relevance as humans and robots begin to have more face-to-face encounters in the coming years. My team and I are also working on imbuing our robot with several forms of interactive intelligence, including human-form and facial-expression recognition and natural language interaction. We anticipate that an integration of mechanics, sociable intelligence, and design aesthetics will yield the most effective human-computer interface robots.


I Comici Roboti
University of Connecticut

Karl R. Wurst

Combining robotics, puppetry, and comedy, our troupe of three robots performs a lazzo from the Commedia dell'Arte.

The Italian Comedies of the 16th and 17th centuries had many improvisational pieces called lazzi. These were comic interludes inserted by a player if a scene started to drag or his eloquence gave out. I Comici Roboti performs the Lazzo of the Statue, in which Arlecchino pretends to be a statue who moves when the backs of the other actors are turned.

Our troupe of three robots performs a short script, each robot executing its own plan and cuing off the others to keep in sync. A human director observing the performance can affect the overall performance or the performance of an individual robot.

The robots themselves consist of Lego bases, each carrying a Handy Board processor, a two-way radio link, and the puppet body.


Kansa, Wichita, Coronado and Pike
Kansas State University

Eric Matson

Robot teams have advantages over individual robots in accomplishing goals that comprise large numbers of tasks. The advantage grows if the robots can interchange roles, share responsibility, and provide some redundant capability. An organization that can continuously evaluate capabilities and role assignments, and reorganize to maximize efficiency, will naturally operate at a higher level. Our research is to create a Cooperative Robotics (CR) Reorganization Model to dynamically evaluate and reorganize the team in the event of a failure or a sub-optimal execution condition. We are currently building a model and system that allow a team of heterogeneous robots to conduct real-time reorganizations while working in a specific task environment.
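
As a rough illustration of the reorganization idea, the sketch below scores each robot's capability for each role and reassigns greedily whenever a robot fails; the data layout and the greedy policy are assumptions for illustration, not the team's CR Reorganization Model.

    # Greedy role reassignment over a capability table (illustrative only).
    # Assumes at least as many robots as roles.
    def reorganize(robots, roles, capability):
        """capability[(robot, role)] -> score; returns {role: robot}."""
        assignment, free = {}, set(robots)
        for role in roles:
            best = max(free, key=lambda r: capability.get((r, role), 0.0))
            assignment[role] = best
            free.discard(best)
        return assignment

    # On a failure, reorganize with the surviving robots:
    # assignment = reorganize([r for r in robots if r != failed], roles, capability)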


Junior
Idaho National Engineering and Environmental Laboratory

David Bruemmer

The INEEL is working to develop robots that can adjust their level of autonomy on the fly, leveraging their own intrinsic intelligence to meet whatever level of control is handed down from the user(s). Currently, we have implemented a control architecture with four different levels of autonomy: teleoperation, safe mode, shared control, and full autonomy. Each level of control encompasses a different role for the operator and makes different demands on the robot and the communication infrastructure. To meet this objective we are working towards the following technical goals:

Through these technical efforts, we will enable remote robotic operations to be accomplished by fewer operators with less training. This work will pave the way for a new class of mixed-initiative robots that work with humans as well as for them, accepting high-level tasks with minimal demands on the user.
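
One simple way to picture the four levels is as an arbitration rule between operator commands and the robot's own behaviors. The sketch below is a hedged reading of the scheme with invented command types, not INEEL's implementation.

    # Illustrative arbitration across four autonomy levels: teleoperation
    # passes the operator's command through; safe mode lets the robot veto
    # unsafe commands; shared control lets the robot take over guarded
    # motion; full autonomy ignores direct driving input entirely.
    from enum import Enum

    class Autonomy(Enum):
        TELEOP = 1
        SAFE = 2
        SHARED = 3
        FULL = 4

    def arbitrate(level, operator_cmd, robot_cmd, unsafe):
        if level is Autonomy.TELEOP:
            return operator_cmd                             # operator in full control
        if level is Autonomy.SAFE:
            return (0.0, 0.0) if unsafe else operator_cmd   # stop rather than crash
        if level is Autonomy.SHARED:
            return robot_cmd if unsafe else operator_cmd    # robot steers clear
        return robot_cmd                                    # FULL: robot drives itself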


Mabel the Mobile Table
University of Rochester

Undergraduate Robot Research Team

We have developed a Java application which we call the "Learn Server". Its purpose is to provide offboard graphical user interfaces and parameter-adjustment modules for robotics applications. It allows communication with programs running in multiple languages and on multiple operating systems; currently it communicates with programs written in C and C++ on Windows 2000, Windows XP, and Linux. We will be demonstrating this application in the context of our robot host system: Mabel the Mobile Table.

Mabel gives an appropriate multi-modal response to people using a combination of speech, food manipulation, and navigation behaviors. We accomplish this using the Sphinx speech recognition system developed by CMU, augmented by a digital filter. We employ a directional speech recognition microphone, which is actively pointed towards the speaker using face tracking and a pan-tilt-zoom camera. To accomplish language understanding, a specially designed grammar-based parsing technique is under development.

The vision component's purpose is to provide the navigation component with real-time visual percepts including:


MinDART (Minnesota Distributed Autonomous Robot Team)
University of Minnesota Computer Science & Engineering

Paul E. Rybski, Maria Gini

The Minnesota Distributed Autonomous Robot Team (MinDART) is a group of simple, low-cost robots used at the University of Minnesota for research into reactive control strategies. We are interested in studying how environmental and control factors affect the performance of a homogeneous multi-robot team doing a search and retrieval task. Several factors that affect the performance of the team are examined. One is the distribution of targets, varied from a uniform distribution to having all of the targets clustered together into one or two small clumps. Another is the size of the team, varying from one to five robots. Finally, the type of search strategy is varied from a completely reactive method to a directed search method that uses the robots' ability to localize themselves. Current work includes giving the LEGO robots the ability to communicate amongst themselves using an RF data link, and determining under what environmental conditions such communication is useful.


Personal Rover
Carnegie Mellon University, Robotics Institute

Illah Nourbakhsh

This vision-based, low-cost robot demonstrates intelligent human-robot interaction, using CMUcam vision technology to perceive landmarks in the world, plus a swinging boom and rocker-bogie chassis to surmount large obstacles up to three times its wheel diameter. An on-board iPAQ processor serves as a real-time control-loop executor and a communication relay to the outside world, from which the human operator and more strategic software provide the robot with motion and navigation requests. In our demonstration, we will show off the Personal Rover's step-climbing ability as well as its vision-based obstacle avoidance and navigation competencies.


RoboCupJunior
Columbia University

Elizabeth Sklar

RoboCupJunior is a project-oriented educational initiative that sponsors local, regional and international robotic events for students. This marks the 3rd year of international competition, with a tournament being held in conjunction with RoboCup 2002. This year's Junior event will include 65 teams of high school and middle school students from 16 countries around the world. Teams build and program autonomous mobile robots to play soccer, perform dances and simulate rescue scenarios.

We have also used the RoboCupJunior motif as the theme for undergraduate classes in AI, robotics and programming.

Our exhibition will introduce RoboCupJunior to the AAAI audience, in search of mentors for teams of young students as well as educators looking for a new twist on the standard undergraduate curriculum.


Rosey
Northwestern University

Christopher Dac Le

The RObot Self-Explains whY (ROSEY) system attempts to demonstrate a behavior-based robot's ability to generate verbal explanations in response to questions about its behavior. Specifically, the robot recognizes a class of "why" questions that seek reasons for its locomotive behavior, such as "Why are you turning?" ROSEY the Robot will be running around the exhibit hall while fielding such questions, which will be typed in at a remote console.

ROSEY the Robot is an instance of what we call self-explanatory robots, which should be able to explain what they are doing and why they are doing it. To build such robots, we are exploring how explanations can be generated from the robot's program as well as from its sensory-motor data.
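
A toy version of this kind of self-explanation could map the currently active behavior and its triggering percept to a sentence. The behavior names, trace format, and templates below are our assumptions, not ROSEY's actual mechanism.

    # Answer "why" questions from the active behavior and its recorded trigger.
    def explain_why(active_behavior, triggers):
        templates = {
            "turn":   "I am turning because {reason}.",
            "stop":   "I stopped because {reason}.",
            "cruise": "I am moving forward because {reason}.",
        }
        template = templates.get(active_behavior, "I am executing " + active_behavior + ".")
        return template.format(reason=triggers.get(active_behavior, "that is my default behavior"))

    print(explain_why("turn", {"turn": "my left sonar detected an obstacle"}))
    # -> I am turning because my left sonar detected an obstacle.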



Robot Host

Borivoj
Kansas State University

Vojtech Derbek, Jan Kraus, Tomas Tichy, David A. Gustafson


Frodo & Gollum
Swarthmore College

Bruce Maxwell

We hope to build some interesting human-robot interactions upon our name-tag reading system that we developed for the 2001 competition.

We also hope to have two moving robots this year so that they can converse when they see one another.

Our overall goal, however, will be to successfully serve hors d'oeuvres in an unobtrusive and effective manner.


Mabel
University of Rochester

David Feil-Seifer, Jonathan Schmid, Ben Atwood, Michael Isman, Thomas Kollar, Eric Meisner, Tori Sweetser, Jenine Turner

Working to create a human interaction agent on a mobile robotic platform, we have divided the task of making an optimal robot into Conversational Speech Interaction, Active Vision, and Autonomous Navigational Control.

Conversational Speech Interaction involves phoneme recognition in a noisy environment and parsing natural language into a concrete set of percepts. This information allows for an appropriate multi-modal response using a combination of speech, food manipulation, and navigation behaviors. We accomplish this using the Sphinx speech recognition system developed by CMU, augmented by a digital filter. We employ a directional speech recognition microphone, which is actively pointed towards the speaker using face tracking and a pan-tilt-zoom camera. To accomplish language understanding, a specially designed grammar-based parsing technique is under development.
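
The pointing loop can be pictured as a simple proportional controller that keeps the detected face centered in the image, so the camera-mounted microphone stays aimed at the speaker. The gains, image size, and servo limits below are assumed values, not Mabel's actual parameters.

    # Keep a detected face centered by nudging the pan-tilt unit.
    IMG_W, IMG_H = 320, 240
    KP = 0.1                                  # degrees per pixel of error (assumed)

    def track_face(face_center, pan_deg, tilt_deg):
        """face_center is an (x, y) pixel, or None when the face is lost."""
        if face_center is None:
            return pan_deg, tilt_deg          # hold position while searching
        pan_deg += KP * (face_center[0] - IMG_W / 2)    # pan toward the face
        tilt_deg -= KP * (face_center[1] - IMG_H / 2)   # image y grows downward
        return max(-90, min(90, pan_deg)), max(-30, min(30, tilt_deg))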

The vision component's purpose is to provide the navigation component with real-time visual percepts including:

We have demonstrated robust and successful implementations for many of the above systems. We have done preliminary work on the features marked "under-development", and expect to include them as part of our final entry.

Autonomous Navigation Control involves creating a robust model for navigating around a crowded room while retaining the ability to return to a base station. We use sonar-based obstacle avoidance for robust navigation. To achieve path planning and execution, we employ a trained waypoint system using wheel counters.
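
A waypoint system driven by wheel counters amounts to dead reckoning: integrate encoder ticks into a pose estimate and steer toward the next trained waypoint. The sketch below invents the tick resolution and wheel base; it shows the idea, not Mabel's code.

    import math

    TICKS_PER_M = 5000.0    # encoder ticks per meter of travel (assumed)
    WHEEL_BASE = 0.4        # meters between the drive wheels (assumed)

    def update_pose(pose, left_ticks, right_ticks):
        """Integrate differential-drive odometry from wheel counter deltas."""
        x, y, theta = pose
        dl, dr = left_ticks / TICKS_PER_M, right_ticks / TICKS_PER_M
        d = (dl + dr) / 2.0
        theta += (dr - dl) / WHEEL_BASE
        return (x + d * math.cos(theta), y + d * math.sin(theta), theta)

    def steering_error(pose, waypoint):
        """Signed heading error (radians) toward the next waypoint."""
        x, y, theta = pose
        desired = math.atan2(waypoint[1] - y, waypoint[0] - x)
        return math.atan2(math.sin(desired - theta), math.cos(desired - theta))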



Robot Rescue

Blue Swarm 2 and Blue Swarm Sentinel
Utah State University

Asti Bhatt, Brandon Boldt, Scott Skousen, and Dan Stormont

The Blue Swarm 2 is made up of six modified remote-control cars. They operate autonomously using a simple subsumption architecture. They will sense the location of victims and send out a signal which can be received by one or more Blue Swarm Sentinels. The Blue Swarm Sentinel is a modified radio-controlled tank that operates either manually or autonomously to locate victims, locate Blue Swarm robots that have located a victim, or locate obstacles. The Sentinel reports information back to the rescuer GUI via a bi-directional RF link. Both types of robots are controlled by Parallax BASIC Stamps.
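
In a subsumption architecture, higher-priority layers override lower ones, which suits hardware as small as a BASIC Stamp. The actual robots are programmed in PBASIC; the Python sketch below, with invented sensor names, merely illustrates the layering.

    # Highest-priority layer that fires wins; lower layers are subsumed.
    def subsumption_step(sensors):
        if sensors.get("victim"):                         # layer 2: signal victim
            return {"motors": (0, 0), "beacon": True}
        if sensors.get("bumper_left"):                    # layer 1: avoid
            return {"motors": (1, -1), "beacon": False}   # spin right
        if sensors.get("bumper_right"):
            return {"motors": (-1, 1), "beacon": False}   # spin left
        return {"motors": (1, 1), "beacon": False}        # layer 0: wander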


Frodo & Gollum
Swarthmore College

Bruce Maxwell

We will be examining issues in semi-autonomous robot systems. Our goal is to permit a single user to successfully manage more than one robot.

We will build upon our system from a year ago that identified victims using vision, built maps with paths to found victims, and enabled both teleoperation and completely autonomous robot functioning.


Georgia Tech Yellow Jackets
Georgia Tech

Tucker Balch

Georgia Tech will compete in the Robot Rescue competition using a cooperative multi-robot system. The robots include: an RWI ATRV Mini equipped with 8 DV cameras arranged to provide omnidirectional vision, and four Sony Aibo legged robots. The Aibos are transported by the ATRV, then released to explore areas where the ATRV cannot reach. The ATRV will provide 3D modeling of the environment as well as localization and tracking of the Aibos.


Hanif Rescue Robot Team: Hanif 1, Hanif 2 (Snake Robot), Hanif 3 (Tracked Robot)
YSC

Navid Ghaffarzadegan

Because of Iran's geographical location, earthquakes are both common and deadly every year. Work on rescue robots that can help rescuers detect victims is therefore essential. Our aim in this project is to design, construct, and control an autonomous robot able to move around in unstructured environments and detect victims in hazardous areas. Participating in the RoboCup Rescue competition is an opportunity to challenge our ideas in this field. Our team is divided into three sub-teams:

Current Research:


Keystone Fire Brigade
University of Manitoba (Winnipeg)

Jacky Baltes

The Keystone Fire Brigade robots are based on the 4 Stooges, a RoboCup small-size league team from the University of Auckland. The robots of the 4 Stooges were designed to be robust and versatile enough to be used in a variety of different ways. This has paid off, since the robots of the Keystone Fire Brigade are identical to those of the 4 Stooges.

The Keystone Fire Brigade uses a small CMOS camera and Thomas Bräunl's Eyebot controller. The Eyebot controller consists of a 35 MHz 68332 processor with 2 MB of static RAM. The design is clearly dated nowadays, but it has the advantages that the controllers are comparatively cheap and that a CMOS camera can be connected directly to the processor. Furthermore, they provide the necessary interfaces to connect motors, servos, gyroscopes, and many other sensors directly to the controller.


Mabel the Mobile Table
University of Rochester

Undergraduate Robot Research Team

We have developed a Java application which we call the "Learn Server". Its purpose is to provide offboard graphical user interfaces and parameter-adjustment modules for robotics applications. It allows communication with programs running in multiple languages and on multiple operating systems; currently it communicates with programs written in C and C++ on Windows 2000, Windows XP, and Linux.

In our entry, a human operator will navigate our Pioneer 2-DX robot through the Yellow Course using the GUI of the "Learn Server" from the Cold Zone.

A map of the environment showing waypoints, a history of robot position, and victim locations will be displayed in the "Learn Server". Wireless headphones will relay an audio signal from the robot's position to the operator. Current frames from the robot's camera will be displayed on the "Learn Server" at whatever framerate the environment's wireless bandwidth allows. Victims will be located autonomously by the robot using vision programs developed for the Robot Host Event.


Moe, Larry, and Curly
The MITRE Corporation

Zach Eyler-Walker and David Smith

We are developing an approach to coordinated search using a team of robots controlled by a single human. The robots are semi-autonomous and able to share information directly with one another, and with a human via a commander console program.

The robots will perform obstacle avoidance, localization, and low-level route planning autonomously. Mapping, target detection, and goal-directed behavior will be performed via coordination between robots and the human commander.

We are currently using three ActivMedia Pioneer 2-AT robots, each equipped with sonar and a single color camera. We expect to eventually integrate other platforms (e.g. iRobot Packbot) and sensors (e.g. laser rangefinder, pyrosensors, and microphones).

This team will be an exhibitor only.


Morph Dragon, Ringo
New Roads High School

The Scarabs, Michael Randall

In 1999, a group of high school and junior high students from Los Angeles took on the enormous challenge of competing against some of the top robotics and artificial intelligence researchers in the world in the RoboCup middle (F2000) league. After over two years of hard work, funded on a shoestring budget (mostly out-of-pocket), the Scarabs robotic team: field-tested a color-tracking system at RoboCup 2000 in Melbourne, Australia; designed and built a prototype vehicle and omnidirectional vision system; and successfully demonstrated this vehicle / vision system combination in the Rescue Robot competition at RoboCup / AAAI 2001 in Seattle, Washington.

The goals of the Scarabs team are: to build viable robots at minimal cost; to learn about math, computer science, electronic engineering, physics, artificial intelligence, system integration, international relations, character development, and teamwork; to have fun (!); and to make a positive difference.

In light of September 11, creating search and rescue robots has taken on added significance and urgency. We are fielding two radically different robots: Ringo, an updated version of the prototype we ran at RoboCup / AAAI 2001; and Morph-Dragon, a sophisticated six-wheeled robot designed to compete on the Robotica television program. Both robots will use the same vision and control systems.

We have upgraded our custom-built omnidirectional vision system with the Axis 2120 Network Camera (www.axis.com). The 2120 features a direct connection to a 10/100 Mbit Ethernet network and a built-in Linux web server. This allows a single Ethernet cable to carry both video and robot control.


Tartan Swarm
Carnegie Mellon University

Carnegie Mellon Robotics Club

Tartan Swarm is a low-cost multi-robot approach to human detection. Each robot is based on a simple modular diff-drive platform, mounted with a heterogeneous array of sensors. Sensor types include both vision and pyroelectric sensing. Successful human detection is communicated through two channels: a low-bandwidth channel to ward off neighboring robots; and a coded radio broadcast, indicating success and believed relative location. This signal is received by a rescue workstation. Individual robots accomplish their tasks autonomously, using distinct search strategies. Collective behavior is observed through simple success-based interactions.

Tartan Swarm is a simple, low-cost, educational project of the undergraduate Carnegie Mellon Robotics Club.




Comments on this web page? Send e-mail to holly@cs.uml.edu.
Last update: 6 August 2002