y-cruncher - A Multi-Threaded Pi-Program

From a high-school project that went a little too far...

By Alexander J. Yee

(Last updated: November 24, 2015)




The first scalable multi-threaded Pi-benchmark for multi-core systems...


How fast can your computer compute Pi?


y-cruncher is a program that can compute Pi and other constants to trillions of digits.

It is the first of its kind that is multi-threaded and scalable to multi-core systems. Ever since its launch in 2009, it has become a common benchmarking and stress-testing application for overclockers and hardware enthusiasts.


y-cruncher has been used to set several world records for the most digits of Pi ever computed.


Current Release:

Windows: Version 0.6.8 Build 9461 (Released: May 7, 2015)

Linux      : Version 0.6.8 Build 9461 (Released: May 7, 2015)


Official Xtremesystems Forums thread.




Performance Announcement for Ubuntu 15.10:


This applies to Ubuntu 15.10, but may also apply to other Linux distributions with the same kernel version.


When performing swap mode computations on Ubuntu 15.10, the OS has been observed to do excessive swap file access when y-cruncher is performing heavy I/O. This swapping is so severe that the OS becomes unresponsive and the computation stalls. It has not been observed in Ubuntu 15.04.


So far, the only solution that seems to work reliably is to completely disable the swap partition, and then reduce the amount of memory that y-cruncher uses so that the system doesn't run out. Another possible solution is to set the "swappiness" value to zero, but this is untested.
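

For reference, both can be done with standard Linux commands (nothing y-cruncher specific):

Disable swap for the current session:  sudo swapoff -a
Set the swappiness to zero:  sudo sysctl vm.swappiness=0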


The next version (v0.6.9) will have a lower default memory setting for swap mode.



200 billion digits of Catalan's Constant: (June 8, 2015)


I'm pleased to announce that after running for more than half a year, Robert Setti has computed Catalan's Constant to 200 billion digits.

This is very impressive because Catalan's Constant is one of the slowest to compute among the popular constants that can be computed in quasi-linear time.



Older News


Records Set by y-cruncher:

y-cruncher has been used to set a number of world-record-size computations.


Blue: Current World Record

Green: Former World Record

Red: Unverified computation. Does not qualify as a world record until verified using an alternate formula.

Each entry lists: Date Announced | Date Completed | Who | Constant | Decimal Digits,
followed by the compute/verify times and the computer(s) used.

November 22, 2015 | November 14, 2015 | Ron Watkins | Lemniscate | 125,000,000,000
    Compute: 217 hours  /  Verify: 251 hours
    4 x Xeon X6550 @ 2 GHz - 256 GB

August 11, 2015 | August 7, 2015 | Ron Watkins | Zeta(3) - Apery's Constant | 250,000,000,000
    Compute: 152 hours  /  Verify: 162 hours
    4 x Xeon X6550 @ 2 GHz - 512 GB

July 24, 2015 | July 22 and 23, 2015 | Ron Watkins and Dustin Kirkland (Source) | Golden Ratio | 2,000,000,000,000
    Ron Watkins:     Compute: 77.3 hours  /  Verify: 76.33 hours  -  4 x Xeon X6550 @ 2 GHz - 512 GB
    Dustin Kirkland: Compute: 79.3 hours  /  Verify: 80.8 hours  -  Xeon E5-2676 v3 @ 2.4 GHz - 64 GB

July 13, 2015 | July 12, 2015 | Ron Watkins | Log(2) | 250,000,000,000
    Compute: 126 hours  /  Verify: 140 hours
    4 x Xeon X6550 @ 2 GHz - 512 GB

June 26, 2015 | June 24, 2015 | Matthew Hebert | e | 1,400,000,000,000
    Compute: 15 days  /  Verify: 22 days
    FX-8370 @ 4.0 GHz - 8 GB
    FX-8370 @ 4.0 GHz - 16 GB

June 8, 2015 | June 7, 2015 | Robert Setti | Catalan's Constant | 200,000,001,100
    Compute: 185 days  /  Verify: 72 days
    Validations: 1, 2
    Intel Core i7 4820K @ 4.5 GHz - 24 GB DDR3

October 8, 2014 | October 7, 2014 | "houkouonchi" | Pi | 13,300,000,000,000
    Compute: 208 days  /  Verify: 182 hours
    Validation File
    2 x Xeon E5-4650L @ 2.6 GHz - 192 GB DDR3 @ 1333 MHz - 24 x 4 TB + 30 x 3 TB

March 24, 2014 | March 10, 2014 | Shigeru Kondo | Log(10) | 200,000,000,050
    Compute: 44.4 hours  /  Verify: 49.7 hours
    Validations: 1, 2
    2 x Xeon E5-2690 @ 3.3 GHz - 256 GB DDR3 @ 1600 MHz - 12 x 3 TB

December 28, 2013 | December 28, 2013 | Shigeru Kondo (Source) | Pi | 12,100,000,000,050
    Compute: 94 days  /  Verify: 46 hours
    2 x Xeon E5-2690 @ 2.9 GHz - 128 GB DDR3 @ 1600 MHz - 24 x 3 TB

December 22, 2013 | December 22, 2013 | Alexander Yee | Euler-Mascheroni Constant | 119,377,958,182
    Compute: 50 days  /  Verify: 38 days
    Validations: 1, 2
    2 x Intel Xeon X5482 @ 3.2 GHz
    64 GB SSD (Boot) + 2 TB (Data)
    8 x 2 TB (Computation)

February 9, 2012 | February 9, 2012 | Alexander Yee | Square Root of 2 | 2,000,000,000,050
    Compute: 110 hours  /  Verify: 119 hours
    2 x Xeon X5482 @ 3.2 GHz - 64 GB - 8 x 2 TB
    Core i7 2600K @ 4.4 GHz - 16 GB - 5 x 1 TB + 5 x 2 TB

See the complete list including other notably large computations.



Aside from computing Pi and other constants, y-cruncher is great for stress-testing 64-bit systems with lots of RAM.




Sample Screenshot: 100 billion digits of Pi

Core i7 4770K @ 4.0 GHz - 32 GB DDR3 @ 1866 MHz - 16 HDs


Latest Release: (May 7, 2015)

Windows: y-cruncher v0.6.8.9461.zip (9.91 MB)
Linux      : y-cruncher v0.6.8.9461.tar.gz (8.42 MB)


System Requirements:



All Systems:

Very old systems that don't meet these requirements may be able to run older versions of y-cruncher. Support goes all the way back to Windows XP and x86 without SSE.


Version History:

Main Page: y-cruncher - Version History


Other Downloads (for C++ programmers):


Advanced Documentation:






Known Issues:


Functionality Issues:


Performance Issues:



Comparison Chart: (Last updated: November 1, 2015)


Computations of Pi to various sizes. All times in seconds.

The timings include the time needed to convert the digits to decimal representation, but not the time needed to write out the digits to disk.



Single-Processor Desktops:

Processor(s):    Core 2 Quad Q6600 | Core i7 920 | Core i7 3630QM | FX-8350 | Core i7 4770K | Core i7 5960X | Core i7 6700K*
Generation:      Intel Core | Intel Nehalem | Intel Ivy Bridge | AMD Piledriver | Intel Haswell | Intel Haswell | Intel Skylake
Cores/Threads:   4/4 | 4/8 | 4/8 | 8/8 | 4/8 | 8/16 | 4/8
Processor Speed: 2.4 GHz | 3.5 GHz (OC) | 2.4 GHz (3.2 GHz turbo) | 4.0 GHz (4.2 GHz turbo) | 4.0 GHz (OC) | 4.0 GHz (OC) | 4.0 GHz (4.2 GHz turbo)
Memory:          6 GB - 800 MHz | 12 GB - 1333 MHz | 8 GB - 1600 MHz | 16 GB - 1333 MHz | 32 GB - 1866 MHz | 64 GB - 2666 MHz | 16 GB - 3200 MHz
Version:         v0.6.3 - SSE3 | v0.6.3 - SSE4.1 | v0.6.7 - AVX | v0.6.7 - XOP | v0.6.7 - AVX2 | v0.6.7 - AVX2 | v0.6.8 - AVX2
25,000,000       12.925 | 6.852 | 5.302 | 6.188 | 2.180 | 1.502 | 1.548
50,000,000       27.713 | 14.272 | 11.239 | 11.629 | 4.733 | 2.929 | 3.482
100,000,000      59.752 | 30.910 | 24.130 | 23.839 | 10.206 | 5.822 | 7.614
250,000,000      171.932 | 86.899 | 69.358 | 63.987 | 28.675 | 15.593 | 21.878
500,000,000      388.090 | 191.235 | 154.446 | 142.572 | 63.602 | 34.570 | 49.696
1,000,000,000    862.522 | 429.040 | 358.208 | 307.381 | 139.011 | 74.362 | 109.294
2,500,000,000    - | - | - | 869.671 | 398.734 | 212.347 | 315.167
5,000,000,000    - | - | - | - | 863.474 | 450.680 | -
10,000,000,000   - | - | - | - | - | 976.559 | -

(A dash marks sizes that were not run on that machine.)

*Credit to Dan Durden.



Multi-Processor Workstation/Servers:

Processor(s):    2 x Xeon X5482 | 2 x Xeon E5-2690*
Generation:      Intel Penryn | Intel Sandy Bridge
Cores/Threads:   8/8 | 16/32
Processor Speed: 3.2 GHz | 3.5 GHz
Memory:          64 GB - 800 MHz | 256 GB - ???
Version:         v0.6.3 - SSE4.1 | v0.6.2/3 - AVX
25,000,000       6.923 | 2.283
50,000,000       14.386 | 4.295
100,000,000      28.242 | 8.167
250,000,000      76.197 | 20.765
500,000,000      157.537 | 42.394
1,000,000,000    346.963 | 89.920
2,500,000,000    964.038 | 239.154
5,000,000,000    2123.981 | 520.977
10,000,000,000   4633.681 | 1131.809
25,000,000,000   - | 3341.281
50,000,000,000   - | 7355.076

*Credit to Shigeru Kondo.



Fastest Times:

The full chart of rankings for each size can be found here:

These fastest times may include unreleased betas.

Got a faster time? Let me know: a-yee@u.northwestern.edu

Note that I usually don't respond to these emails. I simply put them into the charts which I update periodically.

Algorithms and Developments:



Q:  Where can I download the digits for Pi and other constants that are featured in the various record computations?

A:  There is currently no reliable place to get the digits. Due to the large sizes of the data, it simply isn't feasible to move them around.


Personally, I have an archive of some of the digits, including the first 12.1 trillion digits of Pi. But because I'm on a consumer internet connection with rate limits and possibly data caps, it simply isn't possible for me to upload them. When I was still in school, I was able to seed some torrents since university connections are very fast. But that isn't possible anymore.


To answer the question directly, your best bet at getting the digits is to:

  1. Compute them locally using y-cruncher if you have the resources to do so.
  2. Contact the person who ran the computation and see if they still have the digits.

Under extreme circumstances (if you're a famous professor trying to do research), I may make special arrangements to run research code locally on my machine. But this has happened only once, and I was dealing with some pretty amazing professors whom I didn't want to let down.



Q:  Is there a version that can use the GPU?

A:  This is still a no-go for current generation GPUs. But things may get more interesting with Xeon Phi.

  1. As of 2015, most GPUs are optimized for single-precision performance. Their double-precision and 64-bit integer throughput is far from impressive. (with notable exceptions being the Nvidia Tesla and Titan Black cards)

    The problem is that every single large integer multiplication algorithm uses either double-precision, 64-bit integer multiply, or carry-propagation. All of these operations are inefficient on current GPUs. And no, single-precision cannot be used because it imposes size limits that make the algorithms useless.

  2. GPUs require massive vectorization. Large number arithmetic is difficult to vectorize due to carry-propagation, since each word of a result depends on the carry out of the previous word. (See the sketch after this list.) The speedups that are currently achieved with CPU vectorization (SSE, AVX) are only modest at best.

  3. Large computations of Pi and other constants are not limited by computing power. The bottleneck is in the data communication. (memory bandwidth, disk I/O, etc...) So throwing GPUs at the problem (even if they could be utilized) would not help much.
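
To make the carry-propagation point in #2 concrete, here is a minimal, illustrative C++ sketch (not y-cruncher's actual code) of adding two big integers stored as 64-bit limbs. The carry out of each limb feeds into the next one, forming a serial dependency chain that SIMD lanes cannot split cleanly:

    #include <cstdint>
    #include <vector>

    // Add two equally-sized big integers stored little-endian as 64-bit limbs.
    std::vector<uint64_t> add_bignum(const std::vector<uint64_t>& a,
                                     const std::vector<uint64_t>& b) {
        std::vector<uint64_t> out(a.size() + 1, 0);
        uint64_t carry = 0;
        for (size_t i = 0; i < a.size(); i++) {
            uint64_t s = a[i] + carry;
            carry = (s < carry);      // overflow from adding the incoming carry
            uint64_t t = s + b[i];
            carry += (t < s);         // overflow from adding b[i]
            out[i] = t;               // limb i is final only after limb i-1's carry is known
        }
        out[a.size()] = carry;
        return out;
    }

Multiplication has the same issue once the partial products are reduced back into limbs with carries.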

Fundamental issues aside, the biggest practical barrier would be the need to rewrite the entire program using GPU programming paradigms.


It is worth mentioning the Xeon Phi co-processor line. Programming for these does not require a change of programming paradigm. Furthermore, the ISA convergence to AVX-512 between Skylake and Knights Landing makes Xeon Phi even more attractive.


In the end, GPUs are still limited by PCIe bandwidth. Furthermore, they do nothing to solve the disk I/O bottleneck. So while they may be interesting for small benchmarks, they won't offer much in terms of breaking world records.



Q:  Why can't you use distributed computing to set records?

A:  No, for more or less the same reasons that GPUs aren't useful.

  1. Just as with GPUs, computational power is not the bottleneck. It is the data communication. For this to be feasible as of 2015, everyone would need to have an internet connection speed of more than 1 GB/s. Anything slower than that and it's faster to do it on a single computer.

  2. Computing a lot of digits requires a lot of memory which would need to be distributed among all the participants. But there is no tolerance for data loss and distributed computing means that participants can freely join or leave the network at any time. Therefore, a tremendous amount of redundancy will be needed to ensure that no data is lost when participants leave.


Q:  Is there a distributed version that performs better on NUMA and HPC clusters?

A:  Not specifically. y-cruncher is still a shared memory program, so it inherently will not scale well into large networks.


However, there is one way that y-cruncher could (theoretically) scale. Most of the algorithms involved are cache-oblivious divide-and-conquer, which means they should be able to scale given a hierarchical cache structure. However, this requires the OS to provide NUMA placement policies that keep each group of threads and the memory it touches together within the node hierarchy.

These policies effectively emulate the hierarchical cache that is necessary for cache-oblivious algorithms to run efficiently. Without them, the computation will thrash the interconnect. That said, I'm not sure if any operating systems provide this sort of NUMA policy.


Thanks to Hyperthreading, y-cruncher is not overly sensitive to memory latency. It's the bandwidth that matters. So it may be beneficial to interleave memory across all the nodes to utilize all the bandwidth even if it hurts locality a bit.

On Linux this can be done using: numactl --interleave=all "./y-cruncher.out"



Q:  Is there a publicly available library for the multi-threaded arithmetic that y-cruncher uses?

A:  This was something I tried back in 2012, but it didn't work out. The problem is that the interface changes far too quickly for it to be maintainable.


Currently, there's some work being done to revive it within another spin-off project. But there's no telling whether it will suffer the same fate.



Q:  What's the difference between "Total Computation Time" and "Total Time"? Which is relevant for benchmarks?

A:  "Total Computation Time" is the total time required to compute the constant. It does not include the time needed to write the digits to disk nor does it include the time needed to verify the base conversion. "Total Time" is the end-to-end time of the entire computation which includes everything.


The CPU utilization measurements cover the same span as the "Total Computation Time". They do not include the digit output or the base conversion verify.


For benchmarking, it's better to use the "Total Computation Time". A slow disk that takes a long time to write out the digits will affect neither the computation time nor the CPU utilization measurements. Most other Pi-programs measure time the same way, so y-cruncher does the same for better comparability. All the benchmark charts on this website as well as any forum threads run by myself will use the "Total Computation Time".


For world record size computations, we use the "Total Time" since everything is relevant - including down time. If the computation was done in several phases, the run-time that is put in the charts is the difference between the start and end dates.


There's currently no "standard" for extremely long-running computations that are neither benchmarks nor world record sized.



Q:  What's the deal with the privilege elevation? Why does y-cruncher need administrator privileges in Windows?

A:  Privilege elevation is needed to work around a security feature that would otherwise hurt performance.


In Swap Mode, y-cruncher creates large files and writes to them non-sequentially. When you create a new file and write to offset X, the OS will zero the file from the start to X. This zeroing is done for security reasons to prevent the program from reading data that has been leftover from files that have been deleted.


The problem is that this zeroing incurs a huge performance hit - especially when these swap files can be terabytes in size. The only way to avoid this zeroing is to use the SetFileValidData() function, which requires privilege elevation.
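

For illustration only, here is a rough sketch of the Windows API sequence involved - enable SeManageVolumePrivilege, extend the file, then mark the whole range as valid so that later non-sequential writes skip the zero-fill. This shows the general technique, not y-cruncher's actual code, and real code would check every return value:

    #include <windows.h>

    bool extend_without_zeroing(const wchar_t* path, LONGLONG bytes) {
        // Enable SeManageVolumePrivilege on the current process token.
        // This only succeeds in an elevated process.
        HANDLE token;
        OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token);
        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        LookupPrivilegeValueW(NULL, L"SeManageVolumePrivilege", &tp.Privileges[0].Luid);
        AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
        CloseHandle(token);

        HANDLE file = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return false;

        LARGE_INTEGER size;
        size.QuadPart = bytes;
        SetFilePointerEx(file, size, NULL, FILE_BEGIN);
        SetEndOfFile(file);                         // allocate the logical file size

        // Declare the entire range "valid" so the OS never zero-fills it.
        bool ok = (SetFileValidData(file, bytes) != 0);
        CloseHandle(file);
        return ok;
    }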


Linux doesn't have this problem since it implicitly uses sparse files.
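

The rough Linux equivalent (again, illustrative only): simply seeking past end-of-file and writing creates a sparse file, and the "hole" reads back as zeros without any physical zero-fill:

    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int fd = open("swapfile.bin", O_CREAT | O_RDWR, 0644);
        if (fd < 0) return 1;
        lseek(fd, (off_t)1 << 40, SEEK_SET);   // jump 1 TiB into the file (64-bit Linux)
        write(fd, "x", 1);                     // only the touched blocks get allocated
        close(fd);
        return 0;
    }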



Q:  Why is the performance so poor for small computations? The program only gets xx% CPU utilization on my xx core machine for small sizes!!!

A:  For small computations, there isn't much that can be parallelized. In fact, spawning N threads for an N core machine may actually take longer than the computation itself! In these cases, the program will decide not to use all available cores. Therefore, parallelism is really only helpful when there is a lot of work to be done.


For those who prefer academic terminology, y-cruncher has weak scalability, but not strong scalability. For a fixed computation size, it is not possible to sustain a fixed efficiency while increasing the number of processors. But it is possible if you increase the computation size as well.
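

As a rough worked example (illustrative numbers only): if spawning and synchronizing threads costs a fixed ~0.1 seconds, that overhead is about 7% of a 1.5-second run of 25 million digits, but well under 0.1% of a 1000-second run of a few billion digits. Growing the problem along with the core count keeps the overhead proportionally small; shrinking it does the opposite.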



Q:  Is y-cruncher open-sourced?

A:  In short, no. But roughly 5% of the code (mostly involving the Digit Viewer) is open-sourced.



Here are some interesting sites dedicated to the computation of Pi and other constants:


Questions or Comments

Contact me via e-mail. I'm pretty good about responding unless it gets caught in my school's junk mail filter.