
The main challenge in integrating computer vision into a robotics application is adjusting the camera frame rate to the sampling and control rates.
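
To make the rate mismatch concrete, here is a minimal sketch (with hypothetical rates and names) of a fast control loop that reuses the most recent vision measurement, which only updates at camera frame rate. In a real system the measurement would come from the camera and feature extractor; here it is simulated so the timing pattern is visible.

```python
CONTROL_HZ = 1000   # sampling/control rate (illustrative)
VISION_HZ = 30      # camera frame rate (illustrative)
GAIN = 0.5          # proportional gain (illustrative)

def run(steps=100):
    """Run `steps` control ticks; return (#vision frames used, last command)."""
    last_feature = 0.0          # zero-order hold of the vision output
    frames = 0
    u = 0.0
    for k in range(steps):
        # A new frame arrives roughly every CONTROL_HZ // VISION_HZ ticks.
        if k % (CONTROL_HZ // VISION_HZ) == 0:
            last_feature = 0.1 * frames   # stand-in for feature extraction
            frames += 1
        # The controller runs every tick, using the held measurement.
        u = -GAIN * last_feature
    return frames, u

frames, u = run()
print(frames)   # only 4 vision frames are used across 100 control ticks
```

The point of the sketch is that the controller runs many times per camera frame, so the design question is how (and how smoothly) to hold or interpolate the visual feedback between frames.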

There is more than one way to achieve a given goal, and sometimes how you achieve it is simply a matter of the "smoothness" and precision of the robot control.

The use of computer vision for robot control is commonly called visual servoing (here is a link to one of the most prominent papers on the subject).

Generally speaking, there are two classical ways to integrate computer vision into robotics:

Image-Based Visual Servoing. In the next image the main system feedback is provided by the Feature extraction box: the camera acquires an image, and the box computes "interest" or "reference" points from which a control action can be computed.
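
A minimal sketch of the image-based idea follows (names and the scenario are illustrative, not the lab's code): the error is formed directly in image space between the current and desired feature positions, and a proportional law drives it to zero. A full IBVS law would map the image error through the pseudo-inverse of the interaction matrix; here a single point feature is moved directly, so that matrix reduces to identity.

```python
LAM = 0.4  # control gain (illustrative)

def ibvs_step(s, s_star):
    """One control step: the error in IMAGE space drives the velocity command."""
    e = [s[0] - s_star[0], s[1] - s_star[1]]   # feature error (pixels)
    v = [-LAM * e[0], -LAM * e[1]]             # commanded velocity
    return v

s = [120.0, 80.0]        # current feature location (pixels)
s_star = [160.0, 120.0]  # desired feature location
for _ in range(20):
    v = ibvs_step(s, s_star)
    s = [s[0] + v[0], s[1] + v[1]]   # simulated effect of the camera motion
print(round(s[0], 1), round(s[1], 1))  # → 160.0 120.0
```

Note that no 3-D pose is ever reconstructed: convergence is defined entirely by the feature error measured in the image.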

Position-Based Visual Servoing. The main difference between the previous diagram and this one is the box that computes the Pose determination algorithm.
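
By contrast with the image-based case, a sketch of the position-based idea follows (the planar (x, y, theta) pose and all names are illustrative): the pose-determination step yields an estimated camera pose, and the control error is formed in Cartesian space rather than in the image.

```python
import math

LAM = 0.3  # control gain (illustrative)

def pbvs_step(pose, pose_star):
    """One control step: the error in POSE space drives the velocity command."""
    ex = pose[0] - pose_star[0]
    ey = pose[1] - pose_star[1]
    # Wrap the angular error into (-pi, pi] so the robot turns the short way.
    eth = math.atan2(math.sin(pose[2] - pose_star[2]),
                     math.cos(pose[2] - pose_star[2]))
    return [-LAM * ex, -LAM * ey, -LAM * eth]

pose = [0.5, -0.2, 1.0]       # pose estimated by vision (m, m, rad)
pose_star = [0.0, 0.0, 0.0]   # desired pose
for _ in range(30):
    v = pbvs_step(pose, pose_star)
    pose = [pose[i] + v[i] for i in range(3)]  # simulated robot motion
# pose now approaches the desired [0.0, 0.0, 0.0]
```

The trade-off is the classical one: the Cartesian trajectory is easy to shape, but the loop now depends on the accuracy of the pose-determination step and of the camera calibration.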

To integrate vision into robotics step by step, we propose that you follow the sequence of labs below.


Important Links

  • One of the most important concerns about the performance of any program is the number of computing operations needed to run the required algorithm. In the section ACHIEVING OPTIMAL PERFORMANCE FROM THE C/C++ SOURCE CODE of the book C/C++ Compiler and Library Manual for Blackfin Processors you will find a very useful list of tips that you have to take into account. Here you have a short version of the same information.

  • Here you will find a very good compendium of computer vision.
  • Here is a review of how to teach computer vision to CS students.
  • The camera images are missing.
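
One guideline that appears in such compiler manuals is hoisting loop-invariant work out of inner loops. The C/C++ advice carries over directly to other languages; the sketch below (with a made-up normalization workload) shows the before-and-after shape of that optimization.

```python
import math

def normalize_slow(values):
    # The scale factor is recomputed on every iteration of the comprehension.
    return [v / math.sqrt(sum(x * x for x in values)) for v in values]

def normalize_fast(values):
    # The loop-invariant scale factor is computed once, outside the loop.
    scale = math.sqrt(sum(x * x for x in values))
    return [v / scale for v in values]

data = [3.0, 4.0]
print(normalize_fast(data))  # → [0.6, 0.8]
```

Both versions return the same result; the fast one simply avoids redoing invariant work, which is exactly the kind of saving the manual's tips target on a DSP like the Blackfin.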
Page last modified on October 28, 2006, at 05:45 PM