(Last updated: August 2, 2014)
Libraries and Dependencies:
Other Internal Requirements:
y-cruncher's design is split into 3 major layers. As of v0.6.5:
Instruction Set Management:
y-cruncher has a compile-time framework for selecting and dispatching all the different instruction set extensions that it uses. This makes it easy to take on new instruction sets without breaking backwards compatibility for older processors.
As of v0.6.5, y-cruncher uses the following instruction sets:
| Binary | Target Processor(s) | Instruction Set Extensions |
|---|---|---|
| x64 AVX2 ~ Airi | Intel Haswell | x64, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX, ABM, FMA3, AVX2, BMI2 |
| x64 XOP ~ Miyu | AMD Bulldozer | x64, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX, ABM, FMA4, XOP |
| x64 AVX ~ Hina | Intel Sandy Bridge | x64, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX |
| x64 SSE4.1 ~ Ushio | Intel Nehalem | x64, SSE, SSE2, SSE3, SSSE3, SSE4.1 |
| x64 SSE4.1 ~ Nagisa | Intel Core 2 Penryn | x64, SSE, SSE2, SSE3, SSSE3, SSE4.1 |
| x64 SSE3 ~ Kasumi | AMD K10 | x64, SSE, SSE2, SSE3 |
| | | SSE, SSE2, SSE3 |
The extensions in this table are explicitly used in the source code, either via compiler intrinsics or inline assembly. But the compiler may also generate instructions that are not listed here yet are implied by enabling a higher instruction set. (e.g. SSE implies MMX, and BMI2 implies BMI1.)
y-cruncher has two algorithms for each major constant that it can compute - one for computation, and one for verification.
All complexities shown assume multiplication to be O(n log(n)). The true complexity is slightly higher (e.g. O(n log(n) log(log(n))) for Schönhage-Strassen), but for all practical purposes, O(n log(n)) is close enough.
Both of these formulas were pulled from: http://numbers.computation.free.fr/Constants/PiProgram/userconstants.html
Note that the AGM-based algorithms are probably faster. But y-cruncher currently uses these series-based formulas because:
Prior to v0.6.1, only log(2) and log(10) were supported using hard-coded machin-like formulas. A generic log(n) was needed for Ramanujan's formula for Catalan's constant. That was implemented using Arithmetic-Geometric Mean (AGM).
In v0.6.1, Ramanujan's formula for Catalan's constant was removed - thereby removing the need for a generic log(n). Instead, v0.6.1 supports the computation of log(n) for any small integer n. This is done using a formula generator that generates (at run-time) machin-like formulas for arbitrary small integer n.
Generation of machin-like formulas for log(n) is done using table-lookup along with a branch-and-bound search on several argument reduction formulas.
Series summation is done using standard Binary Splitting techniques with the following catches:
This series summation scheme (including the skewed splitting and backwards summing) has been the same in all versions of y-cruncher to date. All of this is expected to change when GCD factorization is to be incorporated.
A few years ago, y-cruncher was pretty heavily developed. But I've since left school and found a full-time job.
While y-cruncher will probably remain a side-hobby for the near future, it won't get as much attention as it used to get.
At the very least, I'll continue to fix whatever bugs are discovered. And I'll make an effort to keep the program up-to-date with the latest instruction set extensions and development toolchains. But for the most part, the project is done.
Nevertheless, it's a never-ending project. So there are things on the to-do list. But it may be a long time before anything gets done.
AVX2 support for Haswell processors.
Command Line Options:
Run the program directly from the command line without needing to enter input.
All the major features in y-cruncher should be accessible via command line.
By sheer popular demand...
Reduced Memory Mode:
For Pi Chudnovsky and Ramanujan, add a mode that will allow a computation to be done using less memory/disk at the cost of slower performance.
For a typical computation, most of the work requires very little memory. It's the occasional memory spike that causes y-cruncher to have such a high memory requirement.
There are about 4 large memory spikes in a Pi computation. In approximate descending order of size, they are:
These spikes can be flattened via space-time trade-offs in the respective algorithms. Since the trade-off only needs to be done at the spikes, the overall performance hit should be reasonably small.
This feature is in progress and is currently able to suppress the 1st spike. Since the 2nd spike isn't much smaller than the 1st, the drop in memory usage is less than 10%, making it almost pointless. Nevertheless, we used it in the 12.1 trillion digit computation of Pi.
The feature is usable. But since it is far from complete, it is currently disabled in the publicly released versions of the program.
Implement the optimization described here. At minimum, do it for Pi Chudnovsky and Ramanujan. See if it can also be done for Zeta(3) and Catalan.
Because of the way that y-cruncher micro-manages its memory, this isn't going to be easy to do. Furthermore, y-cruncher lacks frameworks for integer division and segmented sieve.
Small Primes NTT:
It seems that y-cruncher is the only serious Pi program that doesn't use this algorithm. But that may change in the future.
While a 64-bit small primes NTT isn't expected to be faster than the existing large-multiply algorithms, it is capable of going above 90 trillion digits.
At those sizes, the computation will probably be completely I/O-bound anyway.