Eric McCann
5:16 AM, May 3, 2012

attach: emccann_bilibotdoku.pdf



First, I modified the Bilibot powerboard's firmware so it can set the Bilibot's arm position correctly regardless of whether the arm-position potentiometer or the limit switches are wired in reverse, and contributed the improved firmware code back to the Bilibot project. Since the Bilibot's arm is ideal for any task involving one-degree-of-freedom manipulation, I imagined it raising and lowering a writing utensil. I had already written a human-like, A*-based Sudoku solver in C#, along with about 85% of a managed .NET 4 (C#) port of the ROS communications stack, so it made sense to have the Bilibot (photograph 1) perceive and sufficiently understand a Sudoku puzzle using its Kinect's color camera (photograph 2) plus some OCR wizardry, and pass the puzzle information to my Sudoku solver over ROS#.
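The idea behind the firmware fix can be sketched as a normalization step: map a raw potentiometer reading to a canonical arm position between the two limit switches, so an inverted pot just falls out of the calibration math. This is an illustrative Python sketch, not the actual powerboard firmware; the function and calibration values are hypothetical.

```python
# Hypothetical sketch: normalize a raw potentiometer reading to a
# canonical arm position in [0.0, 1.0], where 0.0 is the low limit
# switch and 1.0 is the high limit switch. If the pot is wired in
# reverse, high_stop < low_stop and the negative span flips the sign,
# so the caller never needs to know about the inversion.

def normalize_arm_position(raw, low_stop, high_stop):
    """Return the arm position in [0.0, 1.0] given the raw ADC reading
    and the calibrated readings at each limit switch."""
    span = high_stop - low_stop
    if span == 0:
        raise ValueError("calibration stops must differ")
    pos = (raw - low_stop) / span  # negative span handles an inverted pot
    return min(max(pos, 0.0), 1.0)  # clamp to the physical range
```

For example, with a normally wired pot calibrated as `low_stop=100, high_stop=900`, a reading of 300 maps to 0.25; with an inverted pot (`low_stop=900, high_stop=100`), a reading of 500 still maps to 0.5.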

A Bilibot's arm

The robot "squinting" at a Sudoku puzzle... I think it might need glasses.

A screenshot of my A* solver in action

Videos: the current state of the OCR and Sudoku grid detection components; a visualization of the project's Git repository using Gource

Concepts Demonstrated

  • Computer vision techniques were used to "read" the puzzle. The grid-detection algorithm is based on the one used by a webcam Sudoku solver from CodeProject (URL in Additional Remarks).
  • "Green" Software Architecture was used in designing the Sudoku solver. ("recycling")
  • Energy drinks were used to hack for hours on the semi-functional Sudoku OCR.
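One step of that grid-reading pipeline can be sketched simply: once the Sudoku grid has been detected and warped to an axis-aligned square image, split it into 81 cell sub-images for per-digit OCR. This is a hedged Python stand-in for the CodeProject-derived C# detection code, using plain lists instead of a real image type; the function name is illustrative.

```python
# Sketch: slice a rectified (square, axis-aligned) grid image into a
# 9x9 array of cell sub-images, one per Sudoku cell, ready for OCR.
# "image" is a 2D list of pixel values whose side is divisible by 9.

def split_into_cells(image):
    """Return a 9x9 grid of cell sub-images (each a 2D list)."""
    side = len(image)
    if side % 9 != 0:
        raise ValueError("warped grid side must be divisible by 9")
    cell = side // 9
    return [[[row[c * cell:(c + 1) * cell]           # crop columns
              for row in image[r * cell:(r + 1) * cell]]  # crop rows
             for c in range(9)]
            for r in range(9)]
```

Each `cells[r][c]` can then be fed to the OCR stage independently, which also makes it easy to visualize exactly which cells the recognizer is misreading.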


  • Writing a Sudoku reader and solver for ROS (which might someday be able to write its answers into the empty spaces on the grid... with a more dexterous robot arm and a level of motion planning far beyond my abilities)
  • Creation of a managed .NET ROS communications stack (started before class began, and still ongoing)
  • Reading and solving a Sudoku puzzle with a robot instead of a smartphone's camera
  • General improvement of the human condition
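The solver side can be sketched compactly. The project's actual solver is a "human-like", A*-based C# program; this simplified Python stand-in shows only the core search idea: repeatedly fill the most constrained empty cell (the one with the fewest legal candidates) and backtrack on dead ends.

```python
# Simplified Sudoku solver sketch (not the project's A*-based C# code).
# 0 marks an empty cell; the grid is a 9x9 list of lists of ints.

def candidates(grid, r, c):
    """Digits 1-9 not already used in cell (r, c)'s row, column, or box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Solve the grid in place; return True on success."""
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True
    # Most-constrained-cell heuristic: branch where choices are fewest.
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for d in candidates(grid, r, c):
        grid[r][c] = d
        if solve(grid):
            return True
        grid[r][c] = 0  # backtrack
    return False
```

The most-constrained-cell heuristic is what keeps the search small on human-solvable puzzles: cells with exactly one candidate are filled without branching at all, much like a human scanning for forced moves.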

Technology Used Block Diagram

Additional Remarks

Code snapshot: