A nice application of the golden ratio in programming

A pretty common programming application - hashing a sequence of possibly serial numbers into a random-looking sequence - has a straightforward solution with some interesting number theory, and a nice application of the golden ratio, behind it.

Steven Stadnicki, Blogger

April 23, 2018

4 Min Read

Recently at work I got to bust out some unusually deep number theory for a pretty prosaic programming application. It’s a task that comes up surprisingly often, and it’s a neat application of the golden ratio, so I figured I’d share it.

 

The core problem is ‘hashing’ a sequence of seed numbers - often, but not always, serial - into a sequence of numbers in a much smaller range, such that the result doesn’t exhibit any clear pattern. For instance, you might want to roll a series of dice such that the nth roll is generated independently of the ones before it and can be extracted without having to generate all the others; this can be useful in procedural games where you want two players to see the same layout in a given location without needing to store too much of the layout or to have gotten there by exactly the same path.

Usually, this problem reduces to hashing your seed into a number between zero and one, then binning that number into one of a few bins. For instance, to flip coins you can hash to a number in 0..1, and then return ‘heads’ if that number is greater than 1/2 or ‘tails’ if it’s smaller; to roll a die, you can hash your seed to a number in 0..1, and then return 1 if it’s between 0 and 1/6th, 2 if it’s between 1/6th and 1/3rd, and so on.
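The binning step can be sketched in a few lines (the function name here is mine, not from the article): take a hashed value u in [0, 1) and turn it into a roll from 1 to `sides`.

```python
def roll_die(u, sides=6):
    """Bin a value u in [0, 1) into a die roll 1..sides."""
    return int(u * sides) + 1

# 0.05 lies between 0 and 1/6, 0.40 between 1/3 and 1/2, 0.95 between 5/6 and 1:
print(roll_die(0.05), roll_die(0.40), roll_die(0.95))  # 1 3 6
```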

 

So, how to do this? Well, one easy way is to multiply your seed by some real number α and take the fractional part. It’s been proven that as long as α is irrational, the fractional parts of its multiples - we usually write this as {nα} - will be uniformly distributed between 0 and 1, so there won’t be any bias in the results; every bin will show up equally often. And while of course we don’t have ‘real’ irrational numbers on a computer, you can also show that the period of the sequence for a rational α is the denominator of α in reduced form, so for a 32-bit fixed-point value this could easily be 2^32. But that doesn’t mean that some numbers α aren’t better than others. For instance, imagine taking α = 1/2 + 1/1000 = 0.501 and using it to flip coins - in other words, look at {nα} and return heads or tails depending on which side of 1/2 the result is on. The sequence of coin flips will be ‘balanced’ - as many heads as tails, plus or minus - and it’ll have a long period before it actually repeats itself. But look at the first few multiples of this α: they’re 0.501, 1.002, 1.503, 2.004, so the fractional parts are .501, .002, .503, .004, etc. The first several hundred coin flips will alternate exactly heads, tails, heads, tails, ...! This sequence may be ‘complicated’ in the long run, but in the short term it’s got a very obvious pattern.
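You can reproduce the bad behavior of this multiplier in a couple of lines - a quick sketch, with helper names of my own choosing:

```python
def frac(x):
    return x - int(x)  # fractional part (for x >= 0)

ALPHA = 0.501  # = 1/2 + 1/1000, the deliberately bad multiplier from the text
flips = ''.join('H' if frac(n * ALPHA) > 0.5 else 'T' for n in range(1, 11))
print(flips)  # HTHTHTHTHT - a perfectly alternating pattern
```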

 

Looking at it more closely, you can see that the problem is that 2α is so close to a whole number (in other words, its fractional part is so close to zero) that every entry of the sequence is nearly equal to the entry two before it. The same thing would happen if you wanted to, e.g., roll a die and you picked α = 1/3 + 1/1000 = 0.3343...; there, the sequence would go 3, 5, 1, 3, 5, 1, ..., for several hundred iterations until you got close enough to the next bin to shift into a 4, 6, 2, 4, 6, 2, ... cycle. This time, the problem is that 3α is too close to an integer.
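The same experiment with six bins makes the three-term cycle visible (again just a sketch, reusing the binning idea from above):

```python
def frac(x):
    return x - int(x)  # fractional part (for x >= 0)

ALPHA = 1 / 3 + 1 / 1000  # 3 * ALPHA = 1.003, uncomfortably close to an integer
rolls = [int(frac(n * ALPHA) * 6) + 1 for n in range(1, 10)]
print(rolls)  # [3, 5, 1, 3, 5, 1, 3, 5, 1]
```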

 

So for our sequence to be ‘random enough’ - in other words, to not just have a long period but to have as little local repetition as possible - we want integer multiples of α to be as far from integers as possible; to put it another way, we want α to be ‘as irrational as possible’. Now, it turns out that there’s a well-defined notion of just _how_ irrational a number is, in this sense. If you look at the fractional parts of the first n multiples of α - {α}, {2α}, {3α}, all the way up to {nα} - then at least one of them will be within 1/n of an integer (this is Dirichlet’s approximation theorem). Phrased in terms of rational approximation, this means every irrational α has infinitely many fractions p/q with |α - p/q| < C/q^2, for some constant C. Now, we want our number to be _badly_ approximated, so we want to find as bad a constant C as possible - and it turns out that the golden ratio φ is the ‘worst approximated’ number in this sense! You can find infinitely many p/q with |φ - p/q| < 1/(√5·q^2), but if you replace √5 with any larger constant then you’ll only find finitely many p/q that approximate φ that closely. What’s more, for any number that doesn’t ‘look like’ φ (in a technical sense) you can use a better constant - at least √8! So φ tends to be the ideal multiplier for a hash like this.
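In integer arithmetic this becomes what’s often called multiplicative or Fibonacci hashing: multiply by floor(2^32/φ) and keep the low 32 bits, which computes {n·φ} in 32-bit fixed point. A minimal sketch, with function names of my own invention:

```python
PHI_32 = 2654435769  # floor(2^32 / 1.618...), i.e. the golden ratio in 32-bit fixed point

def golden_hash01(seed):
    """Map an integer seed to a value in [0, 1) - effectively {seed * phi}."""
    return ((seed * PHI_32) & 0xFFFFFFFF) / 2**32

def golden_die(seed, sides=6):
    """Hash a seed straight to a die roll 1..sides."""
    return int(golden_hash01(seed) * sides) + 1

print([golden_die(n) for n in range(1, 11)])  # consecutive seeds, scrambled-looking rolls
```

Note that the nth roll is computed directly from n, with no state carried between calls - exactly the property the procedural-generation use case needs.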

 
