Markov chains are used to model all kinds of different phenomena
in nature; they're also intrinsically interesting and fun. One
simple example of a Markov chain is a random process that generates
a string of 1's, 2's, and 3's, where the only constraint is that no
two consecutive digits can be equal;
after a 1, we're equally likely to see a 2 or a 3,
and after a 2, we're equally likely to see a 1 or a 3,
and after a 3, we're equally likely to see a 1 or a 2.
One way to generate such a string is to use three coins:
a penny (which we use after we see a 1; it has two sides, marked 2 and 3),
a nickel (which we use after we see a 2; it has two sides, marked 1 and 3),
and a dime (which we use after we see a 3; it has two sides, marked 1 and 2).
If we generate such a sequence using coin-flips to decide at each
stage which of the two possibilities to append to the string, we
get a sequence in which 1's, 2's, and 3's occur about equally often.
In particular, if we generate N terms, the number of 1's, 2's, and 3's
will each be about N/3, with an error roughly on the order of sqrt(N).
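Here's a quick Python sketch of the coin-flipping version (the
code and its names are my own, purely for illustration):

    import random

    def random_no_repeat_sequence(n, seed=None):
        """Generate n terms from {1,2,3}, no two consecutive terms
        equal, choosing each successor by a fair coin-flip (the
        three-coin process)."""
        rng = random.Random(seed)
        seq = [1]                        # arbitrary starting digit
        while len(seq) < n:
            prev = seq[-1]
            seq.append(rng.choice([d for d in (1, 2, 3) if d != prev]))
        return seq

    n = 90_000
    seq = random_no_repeat_sequence(n, seed=1)
    for d in (1, 2, 3):
        # deviations fluctuate on the scale of sqrt(n) (= 300 here)
        print(d, seq.count(d) - n / 3)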
To quasirandomize this procedure, rig the coins: if the LAST time
you saw a 1 in the sequence it got followed by a 2, then the NEXT
time you see a 1, follow it by a 3 instead! And vice versa. And
likewise for the other cases. That is, instead of flipping the coin
in the usual sense, just turn the coin over.
If you do this, you get an infinite sequence like 1,2,3,2,1,3,1,2,3,2,1,...
that just consists of a finite string (in this case, 1,2,3,2,1,3)
repeated over and over (possibly with some initial terms at the
beginning that don't fit the pattern).
(If you're confused about the rule, here's an illustration of how
we apply it: To figure out what comes next after that final "1",
take a look back in the sequence until you find the PREVIOUS 1.
Since that previous 1 was followed by a 2, the current 1 must be
followed by a 3.)
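In code, "turning the coin over" just means remembering, for each
digit, which of its two possible successors is due next. Here's a
minimal Python sketch (again my own illustrative code), with the
coins' starting faces chosen so that the sequence comes out as
1,2,3,2,1,3 repeating from the very first term:

    def rotor_sequence(n):
        """Quasirandomized version: each 'coin' is turned over
        rather than flipped, so the successor of each digit
        alternates between its two allowed values."""
        next_choice = {1: 2, 2: 3, 3: 2}   # starting faces (arbitrary)
        other = {1: {2: 3, 3: 2}, 2: {1: 3, 3: 1}, 3: {1: 2, 2: 1}}
        seq = [1]
        for _ in range(n - 1):
            d = seq[-1]
            nxt = next_choice[d]
            next_choice[d] = other[d][nxt]  # turn the coin over
            seq.append(nxt)
        return seq

    print(rotor_sequence(12))   # [1, 2, 3, 2, 1, 3, 1, 2, 3, 2, 1, 3]

    # Counts now stay within a bounded distance of n/3 for EVERY prefix:
    n = 90_000
    seq = rotor_sequence(n)
    counts = {1: 0, 2: 0, 3: 0}
    worst = 0.0
    for i, d in enumerate(seq, start=1):
        counts[d] += 1
        worst = max(worst, max(abs(counts[v] - i / 3) for v in (1, 2, 3)))
    print(worst)   # 2/3 -- nowhere near sqrt(n)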
Note that the infinite string 1,2,3,2,1,3,1,2,3,2,1,3,1,2,3,2,1,...
has the property that if you look at just the part that repeats
(1,2,3,2,1,3), you'll find that 1's and 2's and 3's don't just occur
about equally often; they occur exactly equally often.
So, if you stopped the game after N terms, you might not get each
of the three values to occur exactly N/3 times, but the error will
be bounded (instead of growing with N the way sqrt(N) does).
That's a typical sort of situation for quasirandomization: if you
take out the randomness and replace it by something that's non-random
but "fair", then you get a non-random system that reproduces some
(though not all!) of the features of the random system that you're
trying to study, often in a more exact way.
The $100,000 question is, does this extend to the sorts of random
processes that people other than probability theorists care about?
(That dollar amount isn't random, by the way; it's the total budget
for my National Science
Foundation grant to study quasirandomness.)
A much more interesting example (and one that researchers have
barely begun to understand) is quasirandom aggregation, which
gives cool pictures like this:

[picture: a quasirandom aggregation cluster]

You can learn where that picture comes from by reading Michael Kleber's
article "Goldbug Variations".
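For the curious, here's a bare-bones Python sketch of rotor-router
aggregation, the model Kleber's article describes; the code (and its
choice of rotor convention) is my own stripped-down illustration,
not the program that produced the picture:

    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # N, E, S, W, in rotor order

    def rotor_router_aggregate(num_particles):
        """Each particle starts at the origin; at every occupied site
        it turns that site's rotor a quarter-turn and steps the way
        the rotor points; it settles at the first vacant site."""
        rotor = {(0, 0): 0}        # occupied sites -> rotor direction
        for _ in range(num_particles):
            x, y = 0, 0
            while (x, y) in rotor:
                r = (rotor[(x, y)] + 1) % 4
                rotor[(x, y)] = r
                dx, dy = DIRS[r]
                x, y = x + dx, y + dy
            rotor[(x, y)] = 0      # settle here, with a fresh rotor
        return set(rotor)

    # Crude text rendering; the true cluster hugs a disk remarkably well.
    cluster = rotor_router_aggregate(500)
    xs = [x for x, y in cluster]
    ys = [y for x, y in cluster]
    for row in range(max(ys), min(ys) - 1, -1):
        print("".join("#" if (col, row) in cluster else " "
                      for col in range(min(xs), max(xs) + 1)))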
The study of quasirandomness pretty much runs the gamut between
these two examples: one nearly trivial, the other very deep.
In my Fall 2007 course I'll start students off on the shallow
end; by the time EQL finishes its work in summer 2009, I hope
we'll have made some progress towards the deep end.
If you're interested in finding out more, send me email!
Write to Jim_Propp@uml.edu.ignorethis (remove everything
from my email address after the "edu"). You can also
browse through what's at my web site.