y-cruncher - Language and Algorithms

By Alexander J. Yee

 

(Last updated: July 29, 2014)

 

 


Implementation (as of v0.6.5):

  - General Information
  - Libraries and Dependencies
  - Compilers
  - Other Internal Requirements

Overall Design:

y-cruncher's design is split into 3 major layers. As of v0.6.5:

 

Instruction Set Management:

y-cruncher has a compile-time framework for selecting and dispatching all the different instruction set extensions that it uses. This makes it easy to take on new instruction sets without breaking backwards compatibility for older processors.
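The gist of the approach, as a minimal sketch (not y-cruncher's actual source; the x_ISA_* macros are made up for illustration): each binary in the table below is compiled with a different target macro, so the dispatch costs nothing at run-time.

    //  Hypothetical compile-time ISA dispatch.
    #include <immintrin.h>
    #include <cstddef>

    void add_vectors(double* c, const double* a, const double* b, size_t len){
    #if defined(x_ISA_AVX)
        //  AVX path: 4 doubles per iteration.
        size_t k = 0;
        for (; k + 4 <= len; k += 4){
            __m256d x = _mm256_loadu_pd(a + k);
            __m256d y = _mm256_loadu_pd(b + k);
            _mm256_storeu_pd(c + k, _mm256_add_pd(x, y));
        }
        for (; k < len; k++) c[k] = a[k] + b[k];
    #elif defined(x_ISA_SSE2)
        //  SSE2 path: 2 doubles per iteration.
        size_t k = 0;
        for (; k + 2 <= len; k += 2){
            __m128d x = _mm_loadu_pd(a + k);
            __m128d y = _mm_loadu_pd(b + k);
            _mm_storeu_pd(c + k, _mm_add_pd(x, y));
        }
        for (; k < len; k++) c[k] = a[k] + b[k];
    #else
        //  Scalar fallback for the plain x86 target.
        for (size_t k = 0; k < len; k++) c[k] = a[k] + b[k];
    #endif
    }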

 

As of v0.6.5, y-cruncher uses the following instruction sets:

Binary               Target Processor(s)    Instruction Set Extensions
x64 AVX2 ~ Airi      Intel Haswell          x64, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX, ABM, FMA3, AVX2, BMI2
x64 XOP ~ Miyu       AMD Bulldozer          x64, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX, ABM, FMA4, XOP
x64 AVX ~ Hina       Intel Sandy Bridge     x64, SSE, SSE2, SSE3, SSSE3, SSE4.1, AVX
x64 SSE4.1 ~ Ushio   Intel Nehalem          x64, SSE, SSE2, SSE3, SSSE3, SSE4.1
x64 SSE4.1 ~ Nagisa  Intel Core 2 Penryn    x64, SSE, SSE2, SSE3, SSSE3, SSE4.1
x64 SSE3 ~ Kasumi    AMD K10                x64, SSE, SSE2, SSE3
x86 SSE3             -                      SSE, SSE2, SSE3
x86                  -                      none

The extensions in this table are explicitly used in the source code, either via compiler intrinsics or inline assembly. However, the compiler may also generate instructions that are not listed here but are implied by enabling a higher instruction set (e.g., SSE implies MMX, and BMI2 implies BMI1).

 

 

Formulas:

y-cruncher has two algorithms for each major constant that it can compute - one for computation, and one for verification.

 

All complexities shown assume multiplication to be O(n log(n)). It is slightly higher than that, but for all practical purposes, O(n log(n)) is close enough.

 

Square Root of n and Golden Ratio

1st order Newton's Method - Runtime Complexity: O(n log(n))
Note that the final radix conversion from binary to decimal has a complexity of O(n log(n)^2).
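As a sketch of the standard approach (the exact iteration used internally may differ), Newton's method is applied to the inverse square root, which needs no divisions and doubles the number of correct digits per step:

    x_{k+1} = x_k + \frac{x_k}{2}\left(1 - N x_k^2\right), \qquad
    x_k \to \frac{1}{\sqrt{N}}, \qquad
    \sqrt{N} = N \cdot \frac{1}{\sqrt{N}}

Since the working precision can be doubled at each step, the total cost is dominated by the last few iterations: a constant number of full-precision multiplies, hence O(n log(n)). The golden ratio then follows as (1 + sqrt(5))/2.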
e

Taylor Series of exp(1): O(n log(n)^2)

Taylor Series of exp(-1): O(n log(n)^2)


Pi

Chudnovsky Formula: O(n log(n)^3)

Ramanujan's Formula: O(n log(n)^3)
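For reference, the two series as commonly published:

    \frac{1}{\pi} = 12 \sum_{k=0}^{\infty}
        \frac{(-1)^k (6k)!\,(13591409 + 545140134k)}{(3k)!\,(k!)^3\,640320^{3k+3/2}}

    \frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^{\infty}
        \frac{(4k)!\,(1103 + 26390k)}{(k!)^4\,396^{4k}}

The Chudnovsky series adds roughly 14 digits per term, Ramanujan's roughly 8; both are summed with binary splitting, which is where the O(n log(n)^3) bound comes from.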

Log(n)

Machin-Like Formulas: O(n log(n)^3)



Starting from v0.6.1, y-cruncher is able to generate and utilize machin-like formulas for the log of any small integer.
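A machin-like formula for a logarithm is an integer combination of atanh terms, each of which converges geometrically. As a standard example (independent of what the generator actually emits):

    \ln(2) = 2\,\operatorname{atanh}\!\left(\frac{1}{3}\right), \qquad
    \operatorname{atanh}\!\left(\frac{1}{m}\right) =
        \sum_{k=0}^{\infty} \frac{1}{(2k+1)\,m^{2k+1}}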
 
Zeta(3) - Apery's Constant

Amdeberhan-Zeilberger Formula 2: O(n log(n)^3)

Amdeberhan-Zeilberger Formula 1: O(n log(n)^3)
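For reference, the shorter of Amdeberhan and Zeilberger's published accelerated series (the "Formula 1"/"Formula 2" numbering above is y-cruncher's own):

    \zeta(3) = \frac{1}{64} \sum_{k=0}^{\infty} (-1)^k
               \frac{(205k^2 + 250k + 77)\,(k!)^{10}}{((2k+1)!)^5}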


Lemniscate

Gauss Formula: O(n log(n)^3)

Sebah's Formula: O(n log(n)^3)


Both of these formulas were pulled from: http://numbers.computation.free.fr/Constants/PiProgram/userconstants.html

 

Note that the AGM-based algorithms are probably faster. But y-cruncher currently uses these series-based formulas because:

  1. It lacks an implementation of sqrt(N) for arbitrary N. It only supports N being a small integer.
  2. There already is a series summation framework that can be easily applied to the ArcSinlemn() function.

 

Catalan's Constant

Lupas Formula: O(n log(n)^3)

Huvent's Formula: O(n log(n)^3)



y-cruncher uses the following rearrangement of Huvent's formula:

 
Euler-Mascheroni Constant

Brent-McMillan with Refinement: O(n log(n)^3)

Brent-McMillan (alone): O(n log(n)^3)

Note that both of these formulas are essentially the same.
Therefore, in order for two computations to be independent enough to qualify as a verified record, they MUST be done using different n.

For simplicity of implementation, y-cruncher only uses n that are powers of two. This serves a dual purpose: it allows the use of the easily computed Log(2), and it lends itself to shift optimizations.
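For reference, the sums behind both variants, as commonly published (the exact arrangement used internally may differ):

    A(n) = \sum_{k=0}^{\infty} \left(\frac{n^k}{k!}\right)^2 H_k, \qquad
    B(n) = \sum_{k=0}^{\infty} \left(\frac{n^k}{k!}\right)^2, \qquad
    H_k = \sum_{j=1}^{k} \frac{1}{j}

    \gamma \approx \frac{A(n)}{B(n)} - \ln(n)
        \qquad \text{(alone, error roughly } O(e^{-4n}) \text{)}

    \gamma \approx \frac{A(n)}{B(n)} - \frac{C(n)}{B(n)^2} - \ln(n), \qquad
    C(n) = \frac{1}{4n} \sum_{k=0}^{2n} \frac{((2k)!)^3}{(k!)^4 (16n)^{2k}}
        \qquad \text{(refined, error roughly } O(e^{-8n}) \text{)}

With n a power of two, the Log(n) term reduces to an integer multiple of Log(2).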

 

Arithmetic Algorithms:

 



 

 

 

Multiplication:

Like most bignum programs, y-cruncher uses different multiplication algorithms for different sizes:

 

As of v0.6.3, y-cruncher uses:

Standard libraries like GMP avoid the use of floating-point FFTs due to the difficulty of proving bounds on round-off error. But with modern SIMD, the performance of floating-point is through the roof. So rather than throwing all this computational power away, y-cruncher utilizes it to the maximum extent.

 

The FFT, Hybrid NTT, and VST algorithms all use floating-point in a way that (due to round-off error) has not been proven to be mathematically correct.

But y-cruncher ensures their correctness by means of conservative parameter selection and algorithmic error-detection. Barring any programming bugs, the probability that y-cruncher's multiplication makes an undetected error is less than the probability of a silent hardware error. So from an engineering standpoint, y-cruncher is actually more reliable than other implementations that lack algorithmic error-detection.
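As a minimal sketch of the error-detection idea (hypothetical code, not y-cruncher's), a bignum product can be validated modulo a small prime; a failed check proves an error, while a passing check bounds the probability of an undetected one at roughly 1/p.

    #include <cstdint>
    #include <vector>

    //  Residue of a little-endian, base-2^32 bignum modulo a small prime p,
    //  by Horner's rule in base 2^32.
    static uint64_t residue_mod_p(const std::vector<uint32_t>& x, uint64_t p){
        uint64_t r = 0;
        for (size_t i = x.size(); i-- > 0;){
            r = ((r << 32) | x[i]) % p;
        }
        return r;
    }

    //  Returns true if c passes the check c == a*b (mod p).
    bool check_multiply(const std::vector<uint32_t>& a,
                        const std::vector<uint32_t>& b,
                        const std::vector<uint32_t>& c){
        const uint64_t p = 2147483647;   //  2^31 - 1: products fit in 64 bits
        return (residue_mod_p(a, p) * residue_mod_p(b, p)) % p
               == residue_mod_p(c, p);
    }

Checking against several independent primes shrinks the failure probability further, to the point where silent hardware errors dominate.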

 

The last two of these algorithms (Hybrid NTT and VST) are homebrew algorithms that were developed between 2008 and 2012. Both remain unpublished.

 

 

y-cruncher currently does not use any of the following algorithms:

Toom-Cook is completely outclassed by the floating-point FFT. Schönhage-Strassen (SSA) doesn't stand much of a chance either, at least as long as the FFT fits in CPU cache.

Outside of the processor cache, y-cruncher's current large-multiply code seems to be better. Furthermore, neither Toom-Cook nor SSA is particularly vectorizable.

 

In the past, the NTT over small primes was blown off as not feasible. But recent advances in the multiply-mod algorithm, along with better hardware support for the 64-bit integer multiply, are making things more interesting. Nevertheless, preliminary findings on a Haswell processor suggest that it is unlikely to be faster than what y-cruncher currently has.
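The primitive that dominates a small-primes NTT is that 64-bit multiply-mod. As a minimal sketch (hypothetical code), modern x64 exposes the full 128-bit product cheaply, but a naive reduction still pays for a hardware division:

    #include <cstdint>

    //  (a * b) mod m for 64-bit operands, using the 128-bit integer type
    //  provided as a GCC/Clang extension. The '%' compiles to a division;
    //  faster multiply-mod algorithms replace it with Montgomery or Barrett
    //  reduction, which use only multiplications by a precomputed inverse.
    static inline uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m){
        return (uint64_t)(((unsigned __int128)a * b) % m);
    }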

 

 

Log(n):

Prior to v0.6.1, only log(2) and log(10) were supported using hard-coded machin-like formulas. A generic log(n) was needed for Ramanujan's formula for Catalan's constant. That was implemented using the Arithmetic-Geometric Mean (AGM).

 

In v0.6.1, Ramanujan's formula for Catalan's constant was removed - thereby removing the need for a generic log(n). Instead, v0.6.1 supports the computation of log(n) for any small integer n. This is done using a formula generator that generates (at run-time) machin-like formulas for arbitrary small integer n.

 

Generation of machin-like formulas for log(n) is done using table-lookup along with a branch-and-bound search on several argument reduction formulas.
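For illustration (a standard reduction, not necessarily one of the exact rules in the generator's table), relations of the following form let the search trade log(n) for a fast-converging atanh term plus logs of smaller integers:

    \ln(n) = \operatorname{atanh}\!\left(\frac{n^2 - m}{n^2 + m}\right)
             + \frac{1}{2}\ln(m),
    \qquad \text{from} \quad
    \operatorname{atanh}\!\left(\frac{a-b}{a+b}\right) = \frac{1}{2}\ln\frac{a}{b}

Choosing m close to n^2 keeps the atanh argument small; for example, m = n^2 - 1 = (n-1)(n+1) reduces log(n) to a single atanh term plus logs of smaller integers.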

 

 

 

Series Summation:

Series summation is done using standard Binary Splitting techniques with the following catches:

This series summation scheme (including the skewed splitting and backwards summing) has been the same in all versions of y-cruncher to date. All of this is expected to change when GCD factorization is incorporated.
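As a minimal sketch of plain binary splitting (using GMP's C++ interface for brevity; the production version adds the skewed splitting and backwards summing mentioned above, plus y-cruncher's own memory management), applied to the Taylor series of exp(1):

    #include <gmpxx.h>
    #include <iostream>

    struct PQ { mpz_class P, Q; };

    //  Binary splitting over [a, b): returns P, Q such that
    //      sum_{k=a}^{b-1}  1/(a*(a+1)*...*k)  ==  P/Q
    static PQ split(unsigned long a, unsigned long b){
        if (b - a == 1) return { 1, mpz_class(a) };   // single term: 1/a
        unsigned long m = a + (b - a) / 2;
        PQ L = split(a, m), R = split(m, b);
        //  Merge step: two big multiplies and one add per node.
        return { L.P * R.Q + R.P, L.Q * R.Q };
    }

    int main(){
        const unsigned long terms = 64;   // tail < 1/64! ~ 10^-89
        PQ s = split(1, terms);
        //  e = 1 + sum_{k=1}^{63} 1/k! = 1 + P/Q. Print floor(e * 10^50).
        mpz_class scale;
        mpz_ui_pow_ui(scale.get_mpz_t(), 10, 50);
        std::cout << (s.P + s.Q) * scale / s.Q << "\n";
    }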

 

 

 

Developments:

 

A few years ago, y-cruncher was under pretty heavy development. But I've since left school and found a full-time job.

While y-cruncher will probably remain a side-hobby for the near future, it won't get as much attention as it used to.

 

At the very least, I'll continue to fix whatever bugs are discovered. And I'll make an effort to keep the program up-to-date with the latest instruction set extensions and development toolchains. But for the most part, the project is done.

 

Nevertheless, it's a never-ending project. So there are things on the to-do list. But it may be a long time before anything gets done.

 

AVX2:

AVX2 support for Haswell processors. (Done: v0.6.5)

Command Line Options:

Run the program directly from the command line without needing to enter interactive input. All the major features in y-cruncher should be accessible via the command line.

By sheer popular demand...

Reduced Memory Mode:

For Pi Chudnovsky and Ramanujan, add a mode that allows a computation to be done using less memory/disk at the cost of slower performance.

 

For a typical computation, most of the work requires very little memory. It's the occasional memory spike that causes y-cruncher to have such a high memory requirement.

 

There are about 4 large memory spikes in a Pi computation. In approximate descending order of size, they are:

  1. The final Binary Splitting merge in the series summation.

  2. The Final Multiply.

  3. The radix conversion verification.

  4. The first split of the radix conversion.

These spikes can be flattened via space-time trade-offs in the respective algorithms. Since the trade-off only needs to be done at the spikes, the overall performance hit should be reasonably small.

 

This feature is in progress and is able to suppress the 1st spike. Since the 2nd spike isn't much smaller than the 1st, the drop in memory usage is less than 10% - making it almost pointless. Nevertheless, we used it in the 12.1 trillion digit computation of Pi.

The feature is usable. But since it is far from complete, it is currently disabled in the publicly released versions of the program.

GCD Factorization:

Implement the optimization described here. At minimum, do it for Pi Chudnovsky + Ramanujan. See if it can also be applied to Zeta(3) and Catalan.

 

Because of the way that y-cruncher micro-manages its memory, this isn't going to be easy to do. Furthermore, y-cruncher lacks frameworks for integer division and segmented sieve.

 

Small Primes NTT:

It seems that y-cruncher is the only serious Pi-program that doesn't use this algorithm. But that may change in the future.

 

While a 64-bit small primes NTT isn't expected to be faster than the existing large-multiply algorithms, it is capable of going above 90 trillion digits.

 

At those sizes, the computation will probably be completely I/O-bound anyway.