Vision.Lab2 History


October 12, 2006, at 08:11 PM by Karl Ostmo - intensite -> intensity
Changed line 12 from:
done by detecting high intensite contrast in a region. In this
to:
done by detecting high intensity contrast in a region. In this
Changed lines 59-61 from:
to:
A very nice explanation of the general top-down/bottom-up approaches for segmentation is found [[http://www.dam.brown.edu/people/eitans/publications/BorensteinSharonUllman-TDBUseg.pdf | here]]

Changed line 60 from:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blakfin]] : Here you will find some ideas on how to optimize a 2D filter.
to:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blackfin]] : Here you will find some ideas on how to optimize a 2D filter.
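
One common trick of this kind, sketched below with made-up buffer names and an assumed 320x240 8-bit frame (none of it taken from the Analog Devices note itself), is to split a filter into two 1D passes when its kernel is separable; a 3x3 box blur is the simplest example:

#define W 320
#define H 240

static unsigned char in_img[H][W];      /* input frame (assumed 8-bit grayscale) */
static unsigned short row_sum[H][W];    /* horizontal 1x3 sums */
static unsigned char out_img[H][W];     /* filtered output */

/* 3x3 box blur done as two 1D passes instead of one 2D pass.
   Borders are simply left untouched for clarity. */
void box3x3_separable(void)
{
    int x, y;

    /* pass 1: sum each pixel with its left and right neighbours */
    for (y = 0; y < H; y++)
        for (x = 1; x < W - 1; x++)
            row_sum[y][x] = in_img[y][x-1] + in_img[y][x] + in_img[y][x+1];

    /* pass 2: sum each row sum with the ones above and below, then normalize */
    for (y = 1; y < H - 1; y++)
        for (x = 1; x < W - 1; x++)
            out_img[y][x] = (unsigned char)
                ((row_sum[y-1][x] + row_sum[y][x] + row_sum[y+1][x]) / 9);
}

The separable form costs a handful of adds per pixel instead of nine multiply-accumulates, which is the kind of arithmetic saving such 2D-filter optimizations aim for.
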
Deleted line 27:
Deleted line 28:
Deleted line 29:
Deleted line 30:
Deleted line 31:
Deleted line 32:
Deleted line 33:
Deleted line 34:
Deleted line 35:
Deleted line 36:
Deleted line 37:
Changed lines 68-69 from:

to:
created by [[http://www.cs.ubc.ca/~lowe/keypoints/ |D. Lowe]].

Changed line 70 from:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blakfin]]
to:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blakfin]] : Here you will find some ideas on how to optimize a 2D filter.
Changed line 70 from:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blakfin]
to:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blakfin]]
Changed lines 70-71 from:

to:
[[http://www.analog.com/UploadedFiles/Technical_Articles/47593453337791VideoFiltering.pdf| Video Filtering in the Blakfin]
Changed line 14 from:
have seen in Lab1, color filtering can be as well used to
to:
have seen in [[ http://www.cs.uml.edu/blackfin/index.php?n=Vision.Lab1|Lab1]], color filtering can also be used to
Changed line 13 from:
case the feature is going to possibly be a ''''border''''. As we
to:
case the feature is going to possibly be a ''border''. As we
Changed line 5 from:
In computer graphics we use some promitives to describe what
to:
In computer graphics we use some primitives to describe what
Changed lines 12-13 from:
done by detecting high intensite contrast in a region. As we
to:
done by detecting high intensite contrast in a region. In this
case the feature is going to possibly be a ''''border''''
. As we
Changed lines 58-59 from:
equally separated and the same radii.
to:
equally separated and the same radii. Even though we are going to see in detail some algorithms for 3D pose computation that require what is known as calibration, there is quite a funny trick one can apply to obtain such a pose without calibration: [[http://research.microsoft.com/~antcrim/papers/Criminisi_dagm2002.pdf | Single View Metrology]] is quite a new and interesting research area.
Changed lines 53-54 from:
Do you remember the principles of '''Pose Determination''' explained in the introduction ? Well, once you have already design algortihms to filter colors and find edges, How can determine the pose of an known object in the image ? Suppose that we like to find the position and orientation of the next brick in the above images.
to:
Do you remember the principles of '''Pose Determination''' explained in the introduction? Well, once you have already designed algorithms to filter colors and find edges, the next question is: how can we determine the pose of a known object in the image? Suppose that we would like to find the position and orientation of the next brick in the above images.
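
As a minimal sketch of one possible answer (not the method the lab asks for), assuming the brick has already been segmented into a binary mask by the color and edge filters above, its 2D position and orientation can be read off the image moments of that mask; the image size and function name below are made up for the example:

#include <math.h>

#define W 320
#define H 240

/* 2D pose (centroid and principal-axis orientation) of a binary mask,
   computed from image moments. The mask is assumed to come from an
   earlier color/edge filtering step; names and sizes are illustrative. */
void pose_from_mask(unsigned char mask[H][W], double *cx, double *cy, double *angle)
{
    double m00 = 0.0, m10 = 0.0, m01 = 0.0;
    double mu20 = 0.0, mu02 = 0.0, mu11 = 0.0;
    int x, y;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            if (mask[y][x]) {
                m00 += 1.0;
                m10 += x;
                m01 += y;
            }

    if (m00 == 0.0) {            /* nothing segmented: no pose to report */
        *cx = *cy = *angle = 0.0;
        return;
    }

    *cx = m10 / m00;             /* centroid = position in the image */
    *cy = m01 / m00;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            if (mask[y][x]) {
                double dx = x - *cx, dy = y - *cy;
                mu20 += dx * dx;
                mu02 += dy * dy;
                mu11 += dx * dy;
            }

    /* orientation of the principal axis, in radians */
    *angle = 0.5 * atan2(2.0 * mu11, mu20 - mu02);
}

Recovering the full 3D pose of the brick needs more than this (a camera model, or features such as the nine holes), which is what the calibration discussion above refers to.
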
Added lines 57-59:
We can visually recognize that such an object of interest has a prominent color and a very specific geometry. It contains 9 holes
equally separated and the same radii.

Changed lines 51-56 from:
to:
''' Mission'''

Do you remember the principles of '''Pose Determination''' explained in the introduction ? Well, once you have already design algortihms to filter colors and find edges, How can determine the pose of an known object in the image ? Suppose that we like to find the position and orientation of the next brick in the above images.

%center%Attach:brickexample.jpg

Changed lines 19-20 from:
'''' The Sobel Filter ''''
to:
''' The Sobel Filter '''
Added lines 19-22:
'''' The Sobel Filter ''''

An example of edge detector is the famous and well known [[http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]

Changed lines 25-26 from:
An example of edge detector is the famous and well know [[http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]
to:
Changed lines 21-22 from:
An example of edge detector is the famous and well know [[ http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]
to:
An example of edge detector is the famous and well know [[http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]
Changed lines 55-59 from:
algorithms to search for distinctive features is the [[ http://en.wikipedia.org/wiki/Scale-invariant_feature_transform | SIFT]



to:
algorithms to search for distinctive features is the [[ http://en.wikipedia.org/wiki/Scale-invariant_feature_transform | SIFT]]



Changed lines 52-55 from:



to:
In recent years the vision community has been developing diverse methods to synthesize images into
relevant information. Particularly, when dealing with sequences of images - video - one is interested
in finding those "interest points" that continuously appear in such a sequence. One of the most famous
algorithms to search for distinctive features is the [[ http://en.wikipedia.org/wiki/Scale-invariant_feature_transform | SIFT]




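SIFT itself is far too long to reproduce here; as a much simpler illustration of what an interest-point detector responds to, the sketch below computes a Harris-style corner score from image gradients. The 3x3 window and k = 0.04 are textbook defaults, the 320x240 8-bit frame is an assumption, and only interior pixels of the score image are filled in:

#define W 320
#define H 240

/* Harris-style corner score: a much simpler relative of the interest-point
   idea behind SIFT. Window size and k are textbook defaults. */
void corner_response(unsigned char img[H][W], float R[H][W])
{
    static float Ix[H][W], Iy[H][W];   /* gradients; borders stay zero */
    const float k = 0.04f;
    int x, y, u, v;

    /* image gradients by central differences */
    for (y = 1; y < H - 1; y++)
        for (x = 1; x < W - 1; x++) {
            Ix[y][x] = (img[y][x+1] - img[y][x-1]) * 0.5f;
            Iy[y][x] = (img[y+1][x] - img[y-1][x]) * 0.5f;
        }

    /* structure tensor summed over a 3x3 window, then the corner score */
    for (y = 1; y < H - 1; y++)
        for (x = 1; x < W - 1; x++) {
            float sxx = 0.0f, syy = 0.0f, sxy = 0.0f;
            for (v = -1; v <= 1; v++)
                for (u = -1; u <= 1; u++) {
                    float gx = Ix[y+v][x+u], gy = Iy[y+v][x+u];
                    sxx += gx * gx;
                    syy += gy * gy;
                    sxy += gx * gy;
                }
            R[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) * (sxx + syy);
        }
}

Pixels where R is large and locally maximal are the "interest points" in this simplified sense; SIFT adds scale invariance and a descriptor on top of this idea.
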
Changed lines 7-8 from:
hand, if what we like to analise a picture, what we like
to identify are those primitives or features that appears
to:
hand, if what we like to analyze is a picture, then we have
to identify those primitives or features that appear
Changed lines 51-55 from:
Another famous Edge detector is the Canny Edge Detector.



to:
Another famous Edge detector is the [[http://en.wikipedia.org/wiki/Canny_edge_detector | Canny Edge Detector]].



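A full Canny detector (smoothing, gradient, non-maximum suppression, hysteresis) is more than a few lines; the sketch below shows only the double-threshold hysteresis step, which is the part that most distinguishes it from a plain Sobel threshold. The gradient-magnitude input, image size, and threshold values are assumptions for illustration:

#define W 320
#define H 240

/* Double-threshold (hysteresis) step of a Canny-style detector. 'mag' is
   assumed to be a gradient-magnitude image (e.g. from a Sobel pass); a real
   Canny detector also smooths the image and applies non-maximum suppression
   before this step. Threshold values are arbitrary illustrative choices. */
void hysteresis(unsigned short mag[H][W], unsigned char edge[H][W])
{
    const unsigned short hi = 120, lo = 50;
    int x, y, changed;

    /* 2 = strong edge, 1 = weak candidate, 0 = not an edge */
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            edge[y][x] = (mag[y][x] >= hi) ? 2 : (mag[y][x] >= lo) ? 1 : 0;

    /* promote weak pixels that touch a strong pixel, until nothing changes */
    do {
        changed = 0;
        for (y = 1; y < H - 1; y++)
            for (x = 1; x < W - 1; x++)
                if (edge[y][x] == 1 &&
                    (edge[y-1][x] == 2 || edge[y+1][x] == 2 ||
                     edge[y][x-1] == 2 || edge[y][x+1] == 2)) {
                    edge[y][x] = 2;
                    changed = 1;
                }
    } while (changed);

    /* keep only confirmed edges */
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++)
            edge[y][x] = (edge[y][x] == 2) ? 255 : 0;
}
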
Added lines 48-55:

''' References '''

Another famous Edge detector is the Canny Edge Detector.



Changed lines 19-20 from:
* Starting code
to:
''' Starting code '''
Added lines 19-20:
* Starting code
Changed lines 19-45 from:
An example of edge detector is the famous and well know [[ http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]
to:
An example of edge detector is the famous and well know [[ http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]

for( y = 0; y < height; y++ )

{

for( x = 0; x < width; x ++)

 {

  Sum_X = Sobel_In[y-1][x-1] + 2 * Sobel_In[y][x-1] +   

          Sobel_In[y+1][x-1] -(Sobel_In[y-1][x+1] + 2 *

          Sobel_In[y][x+1] + Sobel_In[y+1][x+1]);

  Sum_Y = Sobel_In[y-1][x-1] + 2 * Sobel_In[y-1][x] +

          Sobel_In[y-1][x+1] - (Sobel_In[y+1][x-1] + 2 *

          Sobel_In[y+1][x] + Sobel_In[y+1][x+1]);

  Sum = (abs(Sum_X) + abs(Sum_Y));

 }
}

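The fragment above, as quoted, reads outside the image at the borders and never stores its result; a self-contained version of the same 3x3 Sobel magnitude computation might look like the sketch below, where the output buffer, the 320x240 image size, and the 8-bit clamping are assumptions rather than part of the original code:

#include <stdlib.h>

#define W 320
#define H 240

/* Self-contained 3x3 Sobel magnitude. Unlike the quoted fragment, it skips
   the 1-pixel border instead of reading outside the image, and it clamps
   and stores the result. Buffer names and the image size are assumptions. */
void sobel(unsigned char Sobel_In[H][W], unsigned char Sobel_Out[H][W])
{
    int x, y, Sum_X, Sum_Y, Sum;

    for (y = 1; y < H - 1; y++) {
        for (x = 1; x < W - 1; x++) {
            /* horizontal gradient (responds to vertical edges) */
            Sum_X = Sobel_In[y-1][x-1] + 2 * Sobel_In[y][x-1] + Sobel_In[y+1][x-1]
                  - (Sobel_In[y-1][x+1] + 2 * Sobel_In[y][x+1] + Sobel_In[y+1][x+1]);

            /* vertical gradient (responds to horizontal edges) */
            Sum_Y = Sobel_In[y-1][x-1] + 2 * Sobel_In[y-1][x] + Sobel_In[y-1][x+1]
                  - (Sobel_In[y+1][x-1] + 2 * Sobel_In[y+1][x] + Sobel_In[y+1][x+1]);

            /* approximate gradient magnitude, clamped to 8 bits */
            Sum = abs(Sum_X) + abs(Sum_Y);
            Sobel_Out[y][x] = (Sum > 255) ? 255 : (unsigned char)Sum;
        }
    }
}
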
Added lines 18-19:

An example of edge detector is the famous and well know [[ http://en.wikipedia.org/wiki/Sobel | Sobel Filter]]
Changed line 17 from:
%center%Attach:blue_0108-0032.jpg Attach:edges
to:
%center%Attach:blue_0108-0032.jpg Attach:edges_with_sobel_after_blue.jpg
Changed line 17 from:
%center%Attach:blue_0108-0032.jpg Attache:edges
to:
%center%Attach:blue_0108-0032.jpg Attach:edges
Changed line 17 from:
%center%Attach:blue_0108-0032.jpg
to:
%center%Attach:blue_0108-0032.jpg Attache:edges
Changed line 17 from:
%center%Attach:blue_0108-0032.jpg |
to:
%center%Attach:blue_0108-0032.jpg
Changed lines 14-17 from:
determine those regions.
to:
determine those regions.


%center%Attach:blue_0108-0032.jpg |
Changed lines 1-2 from:
Border detection
to:
'''Border detection'''
Changed line 4 from:
related to the possibility to obtain information from data.
to:
related to the possibility to obtain information from pixel data.
Changed line 7 from:
hand, if what we like to analise is a picture, what we like
to:
hand, if what we like to analise a picture, what we like
Changed line 11 from:
On method to extract important features of an image is
to:
One method to extract important features of an image is
Added lines 1-14:
Border detection

One of the most interesting applications of computer vision is
related to the possibility to obtain information from data.
In computer graphics we use some promitives to describe what
it should later be pictured as a virtual object. On the other
hand, if what we like to analise is a picture, what we like
to identify are those primitives or features that appears
in the image.

On method to extract important features of an image is
done by detecting high intensite contrast in a region. As we
have seen in Lab1, color filtering can be as well used to
determine those regions.
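
As a minimal sketch of the kind of color filtering referred to here (in the spirit of Lab1, but with a made-up interleaved-RGB frame layout and hand-picked thresholds), a region of a prominent color can be turned into a binary mask like this:

#define W 320
#define H 240

/* Crude color filter that keeps "mostly blue" pixels. The frame layout
   and the threshold values are assumptions for illustration. */
void blue_mask(unsigned char rgb[H][W][3], unsigned char mask[H][W])
{
    int x, y;

    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            unsigned char r = rgb[y][x][0];
            unsigned char g = rgb[y][x][1];
            unsigned char b = rgb[y][x][2];
            /* keep the pixel if blue is strong and clearly dominates red and green */
            mask[y][x] = (b > 100 && b > r + 30 && b > g + 30) ? 255 : 0;
        }
}

Such a mask is a natural input for the edge-detection and pose sketches above.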