Keith Fader
December 10, 2010 Attach:kcf_writeup.doc


This project is intended to find an optimal solution to a scrambled Rubik's Cube. Though solving a Rubik's Cube is not always difficult, finding an optimal solution is much more so.



Concepts Demonstrated

  • Uninformed search techniques (DFS, BFS, UCS) are used to find optimal solutions to the cube
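To illustrate the idea, here is a simplified sketch (not the project's actual code) of uniform-cost search over cube-like states: each state is a tuple, each move is a permutation applied at unit cost, and with unit costs UCS returns a shortest, i.e. optimal, move sequence. The two moves below are hypothetical permutations on 8 positions, not real cube turns:

```python
import heapq

# Toy sketch only: each "move" is a hypothetical permutation of
# positions -- not a real cube turn table.
MOVES = {
    "R": (1, 2, 3, 0, 4, 5, 6, 7),
    "U": (0, 1, 2, 3, 5, 6, 7, 4),
}

def apply_move(state, perm):
    return tuple(state[i] for i in perm)

def ucs(start, goal):
    """Uniform-cost search; with unit move costs this finds a shortest solution."""
    frontier = [(0, start, [])]
    seen = {start}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for name, perm in MOVES.items():
            nxt = apply_move(state, perm)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost + 1, nxt, path + [name]))
    return None

solved = tuple(range(8))
scrambled = apply_move(apply_move(solved, MOVES["R"]), MOVES["U"])
solution = ucs(scrambled, solved)  # shortest move list back to solved
```

Replaying `solution` from the scrambled state reaches the solved state; the real solver works the same way, just with the full 54-sticker state and 18 move permutations.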


It is known, thanks to 35 CPU-years of computing time donated by Google, that any Rubik's Cube position can be solved in 20 moves or fewer. Many human-usable solving techniques exist, and solvers that implement them are common; however, those techniques are far from optimal. To my knowledge, this is the only cube solver that attempts to find an optimal solution using general search techniques (all other "optimal" solvers use mathematical models).

Technology Used Block Diagram


Evaluation of Results

This project is fully capable of finding an optimal solution to a scrambled cube in principle; in practice it usually cannot, simply because of the speed limitations inherent to Python. The solver is tested and fully working, but takes a prohibitive amount of time to find any solution longer than about 4 moves. Additionally, I was unable to come up with an admissible heuristic to enable A*, so the fastest search actually available is UCS.

Additional Remarks

Two major roadblocks were encountered in completing this project. The first is that Python itself is not especially fast. Initially, my code took about 2 seconds to expand 1000 search nodes. That was too slow (BFS to depth 3 would take roughly 11 seconds), and profiling revealed that the bottleneck was the function I was using to manipulate the cube. Because Python is interpreted, the relative time costs of various statements are, to say the least, surprising (for instance, map() can be dramatically faster than an equivalent for/in loop). I was able to rewrite the code to achieve more than a sixfold speedup, expanding 1000 nodes in 0.3 seconds. Further profiling showed that the bulk of the remaining time was spent in Python built-ins.
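For concreteness, here is a sketch of the kind of rewrite involved (the permutation here is made up, not one of the project's actual move tables): replacing a per-sticker interpreted loop with a single C-level call such as operator.itemgetter. Exact timings vary by machine and Python version, so no specific speedup is claimed:

```python
import timeit
from operator import itemgetter

PERM = (2, 0, 3, 1, 6, 4, 7, 5)  # hypothetical move permutation
STATE = tuple(range(8))

def turn_loop(state):
    # Naive version: one interpreted bytecode iteration per sticker.
    out = []
    for i in PERM:
        out.append(state[i])
    return tuple(out)

_getter = itemgetter(*PERM)

def turn_fast(state):
    # itemgetter performs all the indexing in C in a single call
    # and returns a tuple directly.
    return _getter(state)

assert turn_loop(STATE) == turn_fast(STATE)
t_loop = timeit.timeit(lambda: turn_loop(STATE), number=100_000)
t_fast = timeit.timeit(lambda: turn_fast(STATE), number=100_000)
```

On typical CPython builds the itemgetter version comes out well ahead, for the same reason map() beat for/in in my profiling: the per-element work moves out of the bytecode interpreter.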

The second roadblock was finding an admissible heuristic. It perhaps says something that the only other project using search techniques didn't even try to find a heuristic, instead using advanced set theory to prune the search space down to something conquerable by BFS. Unfortunately, finding an admissible heuristic proved to be beyond my capabilities.
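For reference (this was not implemented in the project), one textbook way to obtain a weak but admissible heuristic is to bound how much a single move can accomplish: a quarter turn relocates at most 20 stickers (8 on the turned face plus 12 on its sides), so the misplaced-sticker count divided by 20, rounded up, can never overestimate the remaining move count. In practice this bound is far too loose to make A* competitive, which is consistent with my experience:

```python
import math

# A quarter turn moves at most 20 stickers: 8 non-center stickers on
# the turned face plus a row of 3 on each of the 4 adjacent faces.
STICKERS_PER_TURN = 20

def misplaced_heuristic(state, goal):
    """Admissible: each move fixes at most STICKERS_PER_TURN stickers,
    so this never overestimates the number of moves remaining."""
    misplaced = sum(a != b for a, b in zip(state, goal))
    return math.ceil(misplaced / STICKERS_PER_TURN)
```

Because any scramble misplaces at most 48 stickers, this heuristic never exceeds 3, so it barely prunes anything against solutions up to 20 moves deep.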