The Lab

Reverse Engineering

If a deterministic RNG were hiding inside the lottery, we could find it. Lattice-attack an LCG, fit a Markov model, reconstruct a Mersenne Twister state — these are real cryptographic techniques. Here we point two of them at Powerball and watch them come back empty-handed. The failure is the finding.

Rolling Markov prediction accuracy

For each feature, we walk history in order and predict every draw's state from the transition counts accumulated so far, then compare against “always predict the mode,” the null-hypothesis strategy.

| Feature | State definition | States | Predictions | Mode baseline | Markov accuracy | Lift |
| --- | --- | --- | --- | --- | --- | --- |
| Odd white count | how many of the five whites are odd (0–5); the stationary distribution peaks at 2 or 3 | 6 | 1,288 | 31.44% | 32.69% | +1.24 pp (±2.54 pp) |
| High white count (≥35) | how many whites sit in the upper half (35–69); similar shape to odd count | 6 | 1,288 | 34.94% | 33.93% | -1.01 pp (±2.60 pp) |
| Sum quintile | which fifth of the empirical sum distribution the draw falls into; five equally likely states by construction | 5 | 1,288 | 17.47% | 18.63% | +1.16 pp (±2.07 pp) |
| Powerball parity | the Powerball number's parity (0 = even, 1 = odd); mode baseline is the more common of even/odd | 2 | 1,288 | 50.00% | 48.91% | -1.09 pp (±2.73 pp) |

± values are the 95% margin of error at each row's sample size. Lifts inside that band are indistinguishable from the mode baseline.
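Those bands are consistent with the standard normal-approximation binomial margin of error, 1.96·√(p(1−p)/n), evaluated at each row's mode-baseline rate p with n = 1,288 predictions. A quick check in Python (the function name is ours, not from the site's code):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% normal-approximation binomial margin, in proportion units."""
    return z * math.sqrt(p * (1 - p) / n)

# Mode-baseline rates from the table, n = 1,288 predictions each.
for p in (0.3144, 0.3494, 0.1747, 0.5000):
    print(f"p = {p:.4f} -> +/- {100 * margin_of_error(p, 1288):.2f} pp")
# 2.54, 2.60, 2.07, 2.73 pp: the bands in the Lift column.
```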

Berlekamp-Massey linear complexity

Bit sequence: parity of each draw's white-ball sum. The plot shows the shortest LFSR length L needed to reproduce the first n bits; for truly random data this tracks n/2.

[Plot: linear complexity profile, L (LFSR length) vs. prefix length in bits, against the n/2 reference]

Bit length: 1,339
LFSR length L: 670 (50.0% of n)
Random expectation: 670 (n/2, a noisy staircase)
Verdict: consistent with random
Dataset: 1,339 post-2015 draws

How it works

Rolling Markov prediction. For each of four coarse per-draw features, we walk history in order. At every step we use the transition counts accumulated from prior draws to guess the next state, then update the counts. The predictor never sees the draw it's predicting — honest cross-validation. We compare the Markov accuracy to “always predict the most common state so far,” which is what a null-hypothesis predictor would do.
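A minimal sketch of that loop, in Python; the function name, the skip-until-seen warm-up rule, and the `draws` layout in the comments are our assumptions, and the exact warm-up handling behind the table's 1,288-prediction count may differ:

```python
from collections import defaultdict

def rolling_markov_accuracy(states):
    """Walk a state sequence in order; at each step, predict the next state
    from transition counts accumulated on strictly earlier draws, then update.
    Returns (markov_accuracy, mode_baseline_accuracy)."""
    trans = defaultdict(lambda: defaultdict(int))  # trans[prev][nxt] = count
    marginal = defaultdict(int)                    # running state frequencies
    markov_hits = mode_hits = preds = 0
    prev = states[0]
    marginal[prev] += 1
    for cur in states[1:]:
        if trans[prev]:  # predict only once `prev` has known successors
            markov_guess = max(trans[prev], key=trans[prev].get)
            mode_guess = max(marginal, key=marginal.get)
            markov_hits += (markov_guess == cur)
            mode_hits += (mode_guess == cur)
            preds += 1
        trans[prev][cur] += 1  # update only after predicting: no peeking
        marginal[cur] += 1
        prev = cur
    return markov_hits / preds, mode_hits / preds

# Example: odd-white-count states, one label per draw
# (assuming `draws` holds 5-tuples of white balls).
# states = [sum(n % 2 for n in whites) for whites in draws]
# markov, mode = rolling_markov_accuracy(states)
```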

Berlekamp-Massey. We collapse every draw to a single bit, the parity of its white-ball sum, giving a bit sequence as long as the draw history itself. The Berlekamp-Massey algorithm finds the shortest linear feedback shift register (LFSR) that reproduces that sequence; its length L is the “linear complexity” of the stream. For truly random bits, L grows like a noisy staircase around n/2. A plateau, or a slope meaningfully below 1/2, would mean a short LFSR could actually generate the stream.
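For reference, the GF(2) version of the algorithm fits in a few lines; a self-contained sketch, where the parity-bit extraction in the trailing comment assumes draws are stored as tuples of five white balls (our guess at the layout):

```python
def berlekamp_massey(bits):
    """Return L, the length of the shortest LFSR over GF(2) that
    reproduces `bits` (a list of 0/1 values)."""
    n = len(bits)
    c, b = [0] * n, [0] * n   # current / previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1              # LFSR length; position of the last length change
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i correctly?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                 # it doesn't: fold in the previous polynomial
            t = c[:]
            s = i - m
            for j in range(n - s):
                c[j + s] ^= b[j]
            if 2 * L <= i:    # the register must grow
                L, m, b = i + 1 - L, i, t
    return L

# bits = [sum(whites) & 1 for whites in draws]  # parity of each white-ball sum
# berlekamp_massey(bits)  # ~len(bits) / 2 for random data; 670 of 1,339 here
```

Computing `berlekamp_massey(bits[:k])` for increasing k traces the staircase in the plot above; it is quadratic per prefix, which is slow but fine at 1,339 bits.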

What both tests are really saying. Neither technique can prove the lottery is random. What they can do is close doors: no short LFSR fits the parity stream, and no first-order Markov chain predicts better than guessing the mode. Any conspiracy theory about “the formula” has to survive both of these filters. Most don't.

DISCLAIMER: Balliqa is an entertainment product. Every Powerball drawing is an independent random event. Pattern analysis of historical draws does not predict or influence future outcomes. The odds of winning the Powerball jackpot are 1 in 292,201,338.
