Markov Chain
If last night's draw is a coin flip for tonight's, every row of the transition matrix should look the same. If the lottery has memory, some rows pull toward their diagonal. Here are three different readings of 'draw state,' each with its own independence test.
State = how many of the five whites are odd. Six states (0 through 5). A fair lottery's rows should all look like the marginal distribution — the state you land in shouldn't depend on the state you came from.
How it works
The state. Each draw collapses into a single coarse feature — how many odd whites, how many high whites, or which sum quintile it landed in. A single integer per draw, from a small alphabet.
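The collapse from draw to state can be sketched in a few lines. This is an illustrative sketch, not the page's actual code: the function names are mine, and the `cutoff=35` midpoint assumes a Powerball-style pool of whites numbered 1 to 69.

```python
def odd_count_state(whites):
    """State = how many of the five whites are odd (0 through 5)."""
    return sum(1 for w in whites if w % 2 == 1)

def high_count_state(whites, cutoff=35):
    """State = how many whites land above the midpoint (cutoff is an assumption)."""
    return sum(1 for w in whites if w > cutoff)

draw = [3, 14, 27, 41, 58]
print(odd_count_state(draw))   # 3, 27, 41 are odd -> state 3
print(high_count_state(draw))  # 41, 58 exceed 35 -> state 2
```

The sum-quintile state works the same way, except the bin edges come from quantiles of the historical sums rather than a fixed cutoff.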
The transitions. Walk the history in order and tally each (previous, current) pair into a matrix cell. The row is the state you came from; the column is the state you ended up in.
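The tally step, sketched with NumPy (an assumption; any 2-D array of counters works):

```python
import numpy as np

def transition_matrix(states, k=6):
    """Tally each (previous, current) pair: row = state you came from,
    column = state you ended up in."""
    T = np.zeros((k, k), dtype=int)
    for prev, cur in zip(states, states[1:]):
        T[prev, cur] += 1
    return T

# Toy history with a 3-letter alphabet: pairs are (0,1), (1,1), (1,2), (2,0), (0,1).
T = transition_matrix([0, 1, 1, 2, 0, 1], k=3)
print(T)
```

A history of n draws yields n − 1 transitions, so the matrix total is one less than the draw count.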
The independence test. Under the null hypothesis (no memory), the expected count in cell (i, j) is (row i total) × (column j total) / grand total — exactly the pattern you'd get if the current state were chosen independently of the previous one. Compare observed to expected with a Pearson chi-square; the per-cell standardized residual is z = (observed − expected) / √expected. Cells with |z| ≥ 2 are highlighted warm (excess) or cool (deficit).
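The expected-count and residual arithmetic can be sketched as follows. NumPy is an assumption, and the function name is mine; for a p-value on the chi-square statistic you could hand it to something like `scipy.stats.chi2.sf` with (k − 1)² degrees of freedom.

```python
import numpy as np

def independence_test(T):
    """Expected counts under the no-memory null, per-cell Pearson
    residuals z = (O - E) / sqrt(E), and the chi-square statistic."""
    T = np.asarray(T, dtype=float)
    total = T.sum()
    # E[i, j] = (row i total) * (column j total) / grand total
    E = np.outer(T.sum(axis=1), T.sum(axis=0)) / total
    # Guard empty rows/columns: a zero expected count contributes z = 0.
    z = np.where(E > 0, (T - E) / np.sqrt(np.where(E > 0, E, 1.0)), 0.0)
    chi2 = float((z ** 2).sum())
    return E, z, chi2

# A perfectly memoryless-looking table: every residual is zero.
E, z, chi2 = independence_test([[10, 10], [10, 10]])
print(chi2)  # 0.0

# A sticky table: mass on the diagonal, large |z| everywhere.
E2, z2, chi2_sticky = independence_test([[20, 0], [0, 20]])
```

Cells where |z| ≥ 2 are the ones the visualization colors warm or cool.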
How it compares to autocorrelation. A linear autocorrelation catches drift toward or away from the mean. A Markov table catches patterns that aren't linear — for example, a system that pinballs between the middle and the extremes (middle→low or middle→high at random, either extreme→middle) has essentially zero autocorrelation at lag 1, because every lag-1 product contains the middle value, yet its transition matrix is maximally lopsided.
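One concrete case where lag-1 autocorrelation is blind but the transition matrix is not: a chain that pinballs between the middle and the extremes. A minimal simulation, assuming NumPy (the chain and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pinball chain: from the middle (0) jump to -1 or +1 at random;
# from either extreme, always return to the middle.
x = [0]
for _ in range(50_000):
    x.append(int(rng.choice([-1, 1])) if x[-1] == 0 else 0)
x = np.array(x, dtype=float)

# Lag-1 autocorrelation: near zero, because every lag-1 product
# (x_t - mean)(x_{t+1} - mean) contains the middle value.
xc = x - x.mean()
r1 = float((xc[:-1] * xc[1:]).mean() / xc.var())

# Transition structure: maximally lopsided — the middle never follows
# the middle, and an extreme never follows an extreme.
middle_repeats = sum(1 for a, b in zip(x, x[1:]) if a == 0 and b == 0)
extreme_repeats = sum(1 for a, b in zip(x, x[1:]) if a != 0 and b != 0)
```

A linear test sees noise; the (previous, current) table sees a rule with no exceptions.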