
Robo Butler with Speech and Gesture Recognition

Ramya Srinatha, Sean Cronin, Zach Jones
May 10, 2012

Attach:rsrinath_scronin_zjones_robobutler.pdf

Overview

The objective is to create a robot that will respond to both voice commands and hand gestures, and will be able to localize itself and navigate to various pre-determined destinations. The ultimate goal is to create a robot that can be told to do various tasks, much like a butler.

Photograph

[Photos: the Bilibot; the pose to recognize user gesture commands; the robot starting to accept hand gestures]

Concepts Demonstrated

  • Particle filter localization is used to determine the state of the robot.
  • Speech recognition is used to give the robot commands.
  • Text-to-speech enables the robot to respond to commands.
  • Gesture recognition is also used as a way to command the robot.
  • The ROS Navigation Stack provides path planning and obstacle avoidance (see the sketch after this list).
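
To make the navigation pieces concrete, here is a minimal sketch, assuming the standard rospy/actionlib interface, of sending one pre-determined destination to the Navigation Stack's move_base action while amcl's particle filter localizes the robot in the map frame. The node name and coordinates are placeholders, not values from our code.

    #!/usr/bin/env python
    # Hypothetical sketch (not our actual node): send one pre-determined
    # destination to the ROS Navigation Stack through the move_base action.
    # amcl's particle filter keeps the robot localized in the 'map' frame.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def go_to(x, y):
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0   # keep the default heading

        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('butler_nav_sketch')
        go_to(3.0, 1.5)   # placeholder coordinates for a stored destination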

Innovation

The RoboButler relies solely on input that it either sees or hears, without requiring a computer interface. By using these natural interfaces, we make it easier for people to interact with the robot. In the future, our project could assist people with limited mobility by providing them with a robotic assistant around the house.

Technology Used Block Diagram

Additional Details

Voice Commands

  • Key command - starts speech recognition
  • Basic navigation commands - GO, STOP
  • Commands to navigate to specific destinations - e.g., 301, Math Room
  • Command to stop speech recognition - KILL (a command-dispatch sketch follows this list)
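
The sketch below shows one plausible way to dispatch these commands, assuming the speech recognizer publishes recognized text on the /recognizer/output topic as the common ROS pocketsphinx package does. The wake word is a placeholder, since the actual key command is not listed on this page.

    #!/usr/bin/env python
    # Hypothetical sketch of the voice-command dispatch. The /recognizer/output
    # topic name follows the common ROS pocketsphinx package; the wake word
    # 'BUTLER' is a placeholder for the unspecified key command.
    import rospy
    from std_msgs.msg import String

    WAKE_WORD = 'BUTLER'      # placeholder key command
    listening = False

    def on_speech(msg):
        global listening
        words = msg.data.upper()
        if WAKE_WORD in words:                 # key command: start speech recognition
            listening = True
        elif not listening:
            return                             # ignore speech until the key command
        elif 'KILL' in words:                  # stop speech recognition
            listening = False
        elif 'STOP' in words:
            rospy.loginfo('stopping the robot')
        elif 'GO' in words:
            rospy.loginfo('resuming motion')
        elif 'MATH ROOM' in words or '301' in words:
            rospy.loginfo('navigating to room 301')

    if __name__ == '__main__':
        rospy.init_node('butler_voice_sketch')
        rospy.Subscriber('/recognizer/output', String, on_speech)
        rospy.spin()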