(Last updated: January 29, 2017)
Implementation (as of v0.7.1):
General Information:
Libraries and Dependencies:
y-cruncher has no other non-system dependencies. No Boost. No GMP. Pretty much everything that isn't provided by C++ is built from the ground up.
Furthermore, the Cilk Plus dependency is not a hard dependency. It can be trivially removed without affecting the core functionality of the program.
Compilers:
Other Internal Requirements:
Code Organization:
y-cruncher's root source tree is (roughly) broken up into the following subdirectories:
| Module | Files | Lines of Code | Open Sourced? | Description |
|---|---|---|---|---|
| Public Libs | 58 | 5,738 | Yes | The public portion of the common support library. It provides a common interface for stuff like time, file I/O, and colored console output. |
| Private Libs | 125 | 15,487 | No | The private portion of the common support library. It has stuff like the memory manager, the parallel framework, and the multi-layer RAID-file. |
| Digit Viewer | 87 | 9,866 | Yes | The bundled Digit Viewer. |
| BBPv2 | 32 | 4,321 | No | The bundled BBP digit-extraction app for Pi. |
| Objects | 39 | 9,509 | Partial | Large number objects. (BigInt, BigFloat, etc.) |
| Functions | 25 | 3,443 | No | Non-trivial math functions. (division, square root, etc.) |
| YMP Library | 14 | 2,116 | Headers Only | A public interface to the internal large number library. |
| Number Factory | 31 | 3,388 | Yes | Research infrastructure and test app for the YMP library. |
| Launcher | 9 | 651 | No | The CPU dispatcher that picks the optimal binary to run. It's the module that builds the y-cruncher(.exe) binary. |
| y-cruncher | 265 | 37,761 | No | y-cruncher itself. This has most of the console UI and the implementations for all the constants. |
| Modules | 1,407 | 246,508 | No | All the low-level arbitrary-precision arithmetic. |
| Experimental | 21 | 1,848 | No | Sandboxes for experimental code. |
| Misc. | 7 | 972 | No | Settings and versioning. |
| Total: | 2,120 | 341,608 | | Software bloat anyone? |
Notes:
y-cruncher didn't become this bloated and slow to compile until around 2014, when it started using template metaprogramming. But for now, the compilation problem can still be solved by throwing bigger hardware at it. As of 2016, y-cruncher can still be comfortably compiled on the highest-end of laptops.
Like most other programs, y-cruncher has theoretical limits. Most of these limits are enforced by the program.
| Limit | Value | Comments |
|---|---|---|
| RAM Usage | ~1.8 GiB (32-bit), ~1 EiB = 10^18 bytes (64-bit) | Limited by memory address space. |
| Disk Usage | ~1 EiB | Limited by 64-bit address space. |
| Task Decomposition | 65,536 | Arbitrary limit. |
| RAID - Level 1 | 8 paths | |
| RAID - Level 2 | 64 x Level 1 RAID groups | Limited by the # of bits in the largest integer. Will likely be increased in the future. |
| Largest Multiplication | (2.02 x 10^18) x (2.02 x 10^18) bits, or (6.7 x 10^17) x (6.7 x 10^17) decimal digits | Limit of the Small Primes Number-Theoretic Transform. |
| Convolution Length | 4.03 x 10^18 bits, or 1.34 x 10^18 decimal digits | Limit of the Small Primes Number-Theoretic Transform. |
| Computation Size (for all constants) | 10^15 decimal digits | Limited by double-precision floating-point.* |
| BBP Hexadecimal Offset | 2^46 - 1 | Implementation-specific limitation. |
*y-cruncher uses double-precision floating-point for things such as:
The results of these calculations are generally rounded to integers and must be accurate to +/- 1 for the program to operate correctly. The problem is that double-precision floating-point has only 53 bits of precision, which runs out at around 9 x 10^15. Since there is round-off error, the actual limit will certainly be lower. The exact limit is unknown and will vary between the different constants. Therefore y-cruncher arbitrarily caps it at 10^15 decimal digits. Colloquially, I call this the "float-indexing limit".
There are currently no plans to raise this limit since it is already well beyond the capability of current hardware (as of 2015).
It is worth mentioning that the float-indexing limit is the only thing left that prevents y-cruncher from going all the way up to the 64-bit limit. Without it, it should be possible to reach 6.7 x 10^17 decimal digits (the limit of the Small Primes NTT).
Getting rid of the float-indexing limit will require a floating-point type with at least a 64-bit mantissa. A viable option is to use 80-bit extended-precision via the x87 FPU, although some compilers don't support it. But since "float indexing" isn't exactly a performance bottleneck, any sort of software emulation will probably work as well.