91.548 Lab 7: Robot Vision
due March 29

The objective this week is to write image processing code on the Blackfin using a CMOS digital camera. At present, the cameras do not work on the Blackfin Handy Board, but we have two Blackfin EZ-KITs with AV Extender cards that accept the cameras.

Research/Writing Assignment

Also this week, I would like you to conduct project planning for your main project. Saturday, April 8, at Botfest will be the first public demonstration of your project. For next week, I would like a full-page project plan for what you want to do. The plans:
Some ideas for things to do:
Implementation Project

The sample code we are using configures the Blackfin's PPI port to acquire images from the camera. The data is in the YUV colorspace, which is described on Wikipedia at http://en.wikipedia.org/wiki/YUV. Broadly, the U and V channels define the color (chrominance), while the Y channel is the luminance (brightness). The actual data from the camera is in the byte order VYUY, with the Y channel sampled 2x as often as the U and V. The image viewer in VDSP does not directly support this byte ordering, so a byte-swap operation is run to create an image in the YVYU sequence. To view images, run the code with a breakpoint after the byte-swap, and then refresh the image viewer.

Your task: Get the image acquisition up and running, and then write some kind of image filtering/feature extraction code. Demonstrate your code by having it produce either a new image or an extracted feature set. Document your work on a Wiki page.

For example, consider image segmentation (a.k.a. blob tracking). Phil Thoren has described his algorithm for this (in Phission) as: The blobbing/segmentation code works by:

This is probably more general than you may want, at least for a first pass. For example, you could simplify it with the limitation of exactly one object to be tracked. Other possibilities are edge detection or motion tracking. Use your imagination, and find on-line resources and tutorials.

Note: people can/should collaborate on this. Make sure all contributors are listed as authors on the Wiki project page.

Last modified: Wednesday, 22-Mar-2006 14:34:39 EST by fred_martin@uml.edu
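The byte-swap mentioned above is simple: in VYUY order each pixel pair arrives as V0 Y0 U0 Y1, and exchanging every adjacent byte pair yields Y0 V0 Y1 U0, i.e. the YVYU order the VDSP image viewer expects. A minimal sketch (the function name and buffer layout are illustrative, not the actual lab sample code):

```c
#include <stdint.h>
#include <stddef.h>

/* Swap each adjacent byte pair in place: VYUY -> YVYU.
 * In 4:2:2 data there are two bytes per pixel, so nbytes
 * should be width * height * 2. */
void vyuy_to_yvyu(uint8_t *buf, size_t nbytes)
{
    size_t i;
    for (i = 0; i + 1 < nbytes; i += 2) {
        uint8_t tmp = buf[i];
        buf[i]      = buf[i + 1];
        buf[i + 1]  = tmp;
    }
}
```

On the Blackfin this per-byte loop could be replaced by a 16-bit load/byte-rotate, but the plain C version is the easiest thing to verify at a breakpoint.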
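The "exactly one object" simplification of blob tracking can be as small as this: scan the swapped YVYU frame, mark pixels whose U and V samples fall inside a target color window, and report the centroid of the matches. Everything here (struct name, window bounds, frame geometry) is an illustrative assumption, not Phission's algorithm:

```c
#include <stdint.h>

/* Result of a single-object color track: found flag plus
 * centroid of the matching pixels (in pixel coordinates). */
typedef struct { int found; int cx; int cy; } blob_t;

/* frame holds YVYU data (Y0 V Y1 U per pixel pair, 2 bytes/pixel).
 * width must be even. U and V are shared by each pixel pair, so we
 * test chrominance once per pair. */
blob_t track_one_blob(const uint8_t *frame, int width, int height,
                      uint8_t u_min, uint8_t u_max,
                      uint8_t v_min, uint8_t v_max)
{
    long sum_x = 0, sum_y = 0, count = 0;
    int x, y;
    for (y = 0; y < height; y++) {
        const uint8_t *row = frame + (long)y * width * 2;
        for (x = 0; x + 1 < width; x += 2) {
            uint8_t v = row[x * 2 + 1];   /* V of this pixel pair */
            uint8_t u = row[x * 2 + 3];   /* U of this pixel pair */
            if (u >= u_min && u <= u_max && v >= v_min && v <= v_max) {
                sum_x += x;
                sum_y += y;
                count++;
            }
        }
    }
    {
        blob_t b = { count > 0, 0, 0 };
        if (count > 0) {
            b.cx = (int)(sum_x / count);
            b.cy = (int)(sum_y / count);
        }
        return b;
    }
}
```

Running this once per frame and drawing a marker at (cx, cy) is enough for a first demo; the more general multi-blob segmentation adds connectivity and per-region bookkeeping on top of the same color test.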
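For the edge-detection option, a Sobel filter on the luminance channel alone is a reasonable starting point: in YVYU data every even byte is a Y sample, so the Y plane can be read straight out of the interleaved buffer with a stride of 2. A sketch under those assumptions (function name and output format are made up for illustration):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sobel gradient magnitude on the Y channel of a YVYU frame.
 * out is a width*height grayscale buffer; the one-pixel border
 * is left untouched. */
void sobel_y_plane(const uint8_t *yvyu, uint8_t *out,
                   int width, int height)
{
    /* Y sample at (x, y): even bytes of the interleaved buffer. */
    #define Y(x, y) ((int)yvyu[((y) * width + (x)) * 2])
    int x, y;
    for (y = 1; y < height - 1; y++) {
        for (x = 1; x < width - 1; x++) {
            int gx = -Y(x-1,y-1) + Y(x+1,y-1)
                   - 2*Y(x-1,y)  + 2*Y(x+1,y)
                   -   Y(x-1,y+1) +  Y(x+1,y+1);
            int gy = -Y(x-1,y-1) - 2*Y(x,y-1) - Y(x+1,y-1)
                   +   Y(x-1,y+1) + 2*Y(x,y+1) + Y(x+1,y+1);
            int mag = abs(gx) + abs(gy);  /* cheap |g| approximation */
            out[y * width + x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
    #undef Y
}
```

The |gx| + |gy| approximation avoids a square root, which matters on a fixed-point DSP; thresholding the output buffer then gives a binary edge map you can show in the image viewer.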