It’s 3/14, and that means it’s International Pi Day! A day when we rejoice over the transcendental number that seems to be everywhere.
So, why am I writing about pi on the Iron.io blog? It turns out pi is the best (read: the absolute best!) way to test out computers. It’s sufficiently random, demands large amounts of memory and CPU, and is easy to check.
I first learned about this aspect of pi while reading the book Here’s Looking at Euclid. There, I also learned that pi beyond 40 digits or so isn’t all that useful. So, why do we know pi into the billions of digits? To quote the many-time world record holder,
“I have no interest as a hobby for extending the known value of pi itself. I have a major interest for improving the performance of computation. [..] Mathematical constants like the square root of 2, e, and gamma are some of the candidates, but pi is the most effective.”
How To Make Pi
I’m on board! I want to make Pi, myself. If Pi is a great way to test any computer, why not use it to test first-class distributed computing solutions, like IronWorker?
Humans have known about Pi for a while. Which is part of what makes it a great computation. We have multiple recipes for baking the same dish. That means it’s easy to check our work (by comparing two algorithms).
So, what goes into pi? How can I cook this dish? Let’s check out a few of the best recipes.
They say Archimedes built mirrors capable of focusing enough light to set ships ablaze during the Siege of Syracuse. That probably makes him the coolest of all the ancient Greeks. He also happens to be the inventor of a very clever method for calculating pi.
Ancient civilizations knew about pi before Archimedes, but obtaining an exact value was rather difficult. Most western societies were still working with Roman numerals at this time. Anything more than addition or subtraction was a nightmare. Tough times when you’re trying to calculate pi.
Pi is defined as the ratio of a circle’s circumference to its diameter. Easy enough to measure as an ancient Greek: just stretch a length of string along the diameter, then wrap the string around the circle.
Archimedes’ innovation was to trap the circle between shapes we already know how to measure. Do we know how to calculate the perimeter of a hexagon? Yes? Great: inscribe a hexagon inside a circle (and circumscribe one outside it), and the two perimeters pin pi down from both sides.
Are You Ceulen Me Names?
Archimedes’ method was the standard for centuries. In 1596, Ludolph van Ceulen used it to calculate pi to 20 decimal places by means of a polygon with 60 × 2^29 sides. Yipes! That’s a lot of sides for just 20 digits of pi.
Leibniz And The Calculus
The world would need better tools to reach the five-trillion-some digits we know today. Better computers help, but so do better algorithms.
Leibniz and company introduced the world to the infinite series, and with it came a better method for calculating pi. Pi can be summed up as:

pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...
Simple, beautiful, and far more convenient than a polygon with billions of sides. The Leibniz formula is what we’ll use for our own pi calculation. For the truly brave, there are other methods that zero in on pi at a much faster rate.
Pre-Heating the Oven
Now that we have our formula, it’s time to sketch out our plan for distributing it across hundreds of IronWorkers. No problem: just pick a favorite language, call `math.Pi`, and … Oh.
So close. To operate across distributed infrastructure, we’ll need to chop the calculation up into more manageable pieces. Our workflow roughly follows: chunk => calculate => reduce.
Chunking is relatively easy with pi. We can split the terms of the infinite series into chunks of any size we like. For happy workers, I’ve chosen chunks of 1,000 terms.
Calculating pi requires a bit more thinking. Pi is transcendental, so its decimal expansion runs on forever without repeating. Tough noogies if we were hoping to just stuff the intermediate values into a float; we’d lose valuable precision! Thankfully, Go (and ostensibly other cool programming languages) offers a handy library, `math/big`, for handling arbitrary-precision rational numbers.
Reducing is where we bring it all back together. Collect all of the pieces, and reduce them down to a single value.
Make the Crust
When baking edible pie, friends tell me it’s best to start with the crust. In the case of pi, I think the calculation step is the easiest to begin with. If we can’t get our implementation right on just one machine, then we’re certainly not going to succeed when trying the distributed approach.
A naive implementation looks something like:
The output of the above is:
```
val: -0.3333333333333333
val: 0.2
val: -0.14285714285714285
val: 0.1111111111111111
val: -0.09090909090909091
val: 0.07692307692307693
val: -0.06666666666666667
val: 0.058823529411764705
val: -0.05263157894736842
val: 0.047619047619047616
```
This clearly demonstrates the need for rational numbers (or bigger floats). Let’s add those in, and do some housekeeping.
Printing the run number and running sum at each step, the above produces:
```
run number is: 1
sum is: -1/3
run number is: 2
sum is: -2/15
run number is: 3
sum is: -29/105
run number is: 4
sum is: -52/315
run number is: 5
sum is: -887/3465
run number is: 6
sum is: -8066/45045
run number is: 7
sum is: -11069/45045
run number is: 8
sum is: -143128/765765
run number is: 9
sum is: -3485197/14549535
run number is: 10
sum is: -2792362/14549535
```
Looking good! Now it’s time to start worker-izing things. We’ll need to teach our program to read its inputs from the payload (JSON) file that IronWorker hands to each task.
With the input handled, we just need to figure out where to send the output. In my case, I’ve chosen IronMQ. Pushing every intermediate result onto a queue lets me store all of the partial sums and debug them easily.
Prepare the Filling
I found the next easiest bit to do was the chunking. Go is a great language, but when it comes to JSON, some of the dynamic languages have an edge.
For chunking, I chose Ruby. This was the easiest piece, since IronWorker has a well-documented Ruby client. Queueing work for the workers is simple, too: I just add tasks, and hundreds of workers whir to life to get the job done.
This could be an initialization script I run from my local machine. But making it a worker means starting a new job is as easy as a single mouse click in Iron.io’s HUD.
Mix and Pop It In the Oven
The last step is reducing the output to a single value. I’ve chosen to simplify my pipeline by storing output in an MQ. The init script communicates the expected number of messages, and my pi-reduce worker begins number crunching once the queue reaches that expected size.
I’ve set this worker to poll the queue every 12 seconds. This kind of infinite loop might ordinarily be dangerous, but using IronWorker is kind of like bowling with bumpers in the lane: if any job takes longer than 60 minutes, Iron.io will terminate it automatically.
The pi-reduce code can be found here:
And, it produces output that looks something like this:
```
MQ size is: 0 . Waiting for expected: 100
MQ size is: 54 . Waiting for expected: 100
MQ size is: 63 . Waiting for expected: 100
MQ size is: 84 . Waiting for expected: 100
MQ size is: 91 . Waiting for expected: 100
MQ size is: 98 . Waiting for expected: 100
MQ size is: 99 . Waiting for expected: 100
MQ size is: 99 . Waiting for expected: 100
The approximate infinite series is: -0.2145993366275515
Pi is approx: 3.141602653489794
```
Check For Taste
The above code should produce more accurate approximations of pi as I increase the number of iterations. When it doesn’t (oops!), that tells me my setup has a leak somewhere.
The nice thing about pi is that it’s easy to check. We’re in the right neighborhood with 3.141... If I fail to gain more correct digits, I’ll know immediately, because it’s easy to compare against any known expansion of pi.
For bonus points, say we really go the distance and launch beyond a million digits of pi. There are tools like the BBP formula, which lets us calculate the nth digit of pi in base 16 without computing any of the digits before it.
Notes From the Chef
With any distributed system, there are a lot of ways to optimize. One classic concern is operating at scale: the more processes we have running, the more likely it is that one of them produces an error. Automating error recovery sounds dreamy.
It would also be nice to add real-time processing, continuous calculation (that is, not starting over every time), and automatic validity checks. This is all great! But they’re nice-to-haves. Before any of that is possible, we just need to get workable code running in production.
Replace Pi With (almost) Anything
Pi is just an example of a complex calculation that can be run at scale. With a little imagination, it’s easy to plug in a lot of other challenging computations in its place.
For instance, imagine processing video. To do it at scale, we’d chunk each video into smaller pieces => process each piece => recombine them into the original video. Think this sounds a bit crazy?
During @SCALE2015’s keynote speech, Jay Parikh noted that this is exactly how Facebook speeds up their video processing.
The cloud makes distributed computing cheap and easy. So experiment with it. Try speeding up your own code by distributing the load.