(Last updated: March 24, 2019)
This is the function reference for the Custom Formula feature in y-cruncher.
Support: Integer, Scope
Basic Arithmetic: LinearCombination, Multiply, Divide, Shift, Integer Power
Functions: Invsqrt(x), Sqrt(x), AGM(1,x) and AGM(a,b), Log(x), Log(x) with precomputed constants, ArcCoth(x) for integer x, ArcSinlemn(x) for rational x
Built-in Constants: GoldenRatio, E, Pi, Zeta3, Catalan, Lemniscate, EulerGamma
Binary Splitting Series: SeriesHyperdescent, SeriesHypergeometric, SeriesBinaryBBP
Function Templates: ArcTan(1/x) for integer x - Taylor Series, ArcTan(1/x) for integer x - Euler Series, ArcTanh(x) for real x, ArcCoth(x) for real x, ArcSinh(x) for real x, ArcCosh(x) for real x
The functionality here isn't meant to be user-friendly or intuitive. Instead, these functions should be seen as portals into y-cruncher's internal functionality with little to no overhead. Writing a custom formula is therefore akin to writing assembly language where the "instructions" are calls to y-cruncher's native internal routines.
General support.
{ Integer : 123 }
Purpose:
Convert an integer into a large number.
This function is generally not needed since you can simply inline the integer instead of using the full function form. Some functions will recognize an inlined integer and switch to a faster method.
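As a sketch, the following two forms are equivalent; the second inlines the integer (Multiply is described below):

{ Multiply : [ { Integer : 3 } { Pi : {} } ] }
{ Multiply : [ 3 { Pi : {} } ] }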
where:
Performance Metrics:
{ Scope : { Locals : [ {var0 : {...}} {var1 : {...}} ] Formula : {...} } }
{ Scope : { Locals : [ {var0 : {...}} {var1 : {...}} ] Formula : { Sqrt : "var1" } } }
Purpose:
Defines one or more local variables that can be referenced inside a scope. The sole purpose of this is to allow subexpressions to be reused.
The sample formula, "Universal Parabolic Constant.cfg" uses this to avoid computing Sqrt(2) twice.
Variables are defined inside the "Locals" node. Once defined, the variable can be referenced anywhere inside the "Formula" node including all sub-scopes.
Variable names must be alpha-numeric. Underscores are also allowed.
Variables of the same name in the same scope are disallowed. (The config parser will reject it anyway.)
Variables of the same name in nested scopes will shadow: the inner scope's definition hides the outer scope's.
Expressions inside the "Locals" node can reference variables defined before it in the same scope.
The syntax of this function has changed slightly since v0.7.7.9498 in order to preserve the ordering of the local variables. The old syntax will still be accepted for the short term.
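As a sketch of the idea, the universal parabolic constant is Sqrt(2) + Log(1 + Sqrt(2)); defining Sqrt(2) once as a local lets both terms reference it (the variable name is arbitrary, and the actual sample file may differ):

{ Scope : {
    Locals : [
        {sqrt2 : { Sqrt : 2 }}
    ]
    Formula : {
        LinearCombination : [
            [1 "sqrt2"]
            [1 { Log : { LinearCombination : [[1 "sqrt2"] [1 1]] } }]
        ]
    }
} }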
Tips:
Performance Metrics:
The current implementation has a limitation that forces it to copy the variable each time it is loaded - thus incurring an O(n) run-time cost. This applies to both Ram Only and Swap Mode. So loading a variable in Swap Mode will incur a disk copy. This is slated for improvement in the future.
The reason for this limitation has to do with return value optimization and mutability of function return values. The current internal API is that all functions return a mutable large number into a buffer provided by the caller. This allows the caller to overwrite the number if needed to reduce memory/storage consumption. However, this design is incompatible with variables since they are immutable. The current work-around is to simply copy the immutable variable into the mutable buffer.
Basic arithmetic.
{ LinearCombination : [ [121 {...}] [-10 {...}] [654 {...}] ] }
This is a variable-input function that will accept any number of operands.
Computes:
a_{0}*x_{0} + a_{1}*x_{1} + a_{2}*x_{2} + ...
where:
Tips:
Performance Metrics:
Metric | Addition | Multiply by Coefficient | Multiply by 1 or -1 |
Computational Cost | O(n) | O(n) | O(1) |
Auxiliary Memory | O(n) | O(1) | O(1) |
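For example, a sketch computing 2*Pi - Log(2):

{ LinearCombination : [
    [2 { Pi : {} }]
    [-1 { Log : 2 }]
] }

Coefficients of 1 or -1 avoid the coefficient multiply entirely.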
{ Multiply : [ {...} {...} {...} ] }
This is a variable-input function that will accept any number of operands.
Computes:
x_{0} * x_{1} * x_{2} * x_{3} ...
where:
Tips:
Performance Metrics:
Metric | Large Multiply |
Computational Cost | M(n) = O(n log(n)) |
Auxiliary Memory | O(n) |
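For example, a sketch computing Pi * Log(2):

{ Multiply : [ { Pi : {} } { Log : 2 } ] }

To square a single subexpression, { Power : [ {...} 2 ] } is cheaper (0.667 M(n)) than building the operand twice and multiplying.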
{ Divide : [ {...} 123 ] }
{ Divide : [ {...} {...} ] }
Computes:
x_{0} / x_{1}
where:
Tips:
Performance Metrics:
Metric | Divide by Integer | Divide by Large Number |
Computational Cost | O(n) | 2.167 M(n) = O(n log(n)) |
Auxiliary Memory | O(n) | O(n) |
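For example, Pi / 3 using the integer form of the divisor:

{ Divide : [ { Pi : {} } 3 ] }

If the divisor is a power of two, use Shift instead; it is cheaper.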
{ Shift : [ {...} -12 ] }
Multiply the 1st operand by a power-of-two indicated by the 2nd operand.
Computes:
arg[0] * 2^{arg[1]}
where:
Tips:
Performance Metrics:
Metric | Power not divisible by word size | Power divisible by word size |
Computational Cost | O(n) | O(1) |
Auxiliary Memory | O(1) | O(1) |
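For example, Pi / 2 as a shift by -1:

{ Shift : [ { Pi : {} } -1 ] }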
{ Power : [ {...} -3 ] }
Computes:
arg[0]^{arg[1]}
where:
Tips:
Performance Metrics:
Metric | Power = 2 (Square) | Power = -1 (Reciprocal) | Positive Power | Negative Power |
Computational Cost | 0.667 M(n) = O(n log(n)) | 1.667 M(n) = O(n log(n)) | varies, O(n log(n) log(|power|)) | varies, O(n log(n) log(|power|)) |
Auxiliary Memory | O(n) | O(n) | O(n) | O(n) |
The current implementation of this function is straightforward binary exponentiation. If the binary representation of the exponent has a lot of 1 bits, it may be faster to use alternate representations of the power - such as factoring the exponent and doing nested calls.
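For example, 15 is 1111 in binary (four 1 bits); a sketch of the nested-call trick computes x^{15} as (x^{3})^{5} instead:

{ Power : [ { Power : [ {...} 3 ] } 5 ] }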
Elementary and transcendental functions.
{ Invsqrt : 23 }
{ Invsqrt : {...} }
Computes:
1 / sqrt(x)
where:
Tips:
Performance Metrics:
Metric | Integer Input | Large Input |
Computational Cost | 1.333 M(n) = O(n log(n)) | 2.833 M(n) = O(n log(n)) |
Auxiliary Memory | O(n) | O(n) |
y-cruncher's implementation for large inverse square roots is not well optimized.
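For example, 1 / Sqrt(2) using the cheaper integer form (1.333 M(n) instead of 2.833 M(n)):

{ Invsqrt : 2 }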
{ Sqrt : 23 }
{ Sqrt : {...} }
Computes:
sqrt(x)
where:
Tips:
Performance Metrics:
Metric | Integer Input | Large Input |
Computational Cost | 1.333 M(n) = O(n log(n)) | 3.833 M(n) = O(n log(n)) |
Auxiliary Memory | O(n) | O(n) |
y-cruncher's implementation for large square roots is not well optimized.
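For example, the Golden Ratio is (1 + Sqrt(5)) / 2; a sketch combining Sqrt, LinearCombination, and Shift (in practice the built-in GoldenRatio function below is faster):

{ Shift : [
    { LinearCombination : [ [1 { Sqrt : 5 }] [1 1] ] }
    -1
] }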
{ AGM : {...} }
{ AGM : [ {...} {...} ] }
Computes:
The Arithmetic-Geometric Mean (AGM).
- The 1st version computes AGM(1, x).
- The 2nd version computes AGM(a, b).
where:
Tips:
Performance Metrics:
Be aware that the AGM has a very large Big-O constant and is extremely memory intensive. So it is very slow on bandwidth-constrained systems and has terrible performance in swap mode. For constants that can be computed with both the AGM and a hypergeometric series, it is often the case that the AGM implementation will be faster in memory, but slower than the hypergeometric series in swap mode.
Metric | Complexity |
Computational Cost | 4.833 lg(n) M(n)* = O(n log(n)^{2}) |
Auxiliary Memory | O(n) |
*This assumes that the two inputs are roughly the same in magnitude. The cost will go up if there is a large difference in magnitude.
y-cruncher's implementation for the AGM is not well optimized. Part of it is inherited from the unoptimized square root implementation.
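For example, Gauss's constant is 1 / AGM(1, Sqrt(2)) ≈ 0.83462684; a sketch using the 1st version:

{ Power : [ { AGM : { Sqrt : 2 } } -1 ] }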
{ Log : 10 }
{ Log : {...} }
Computes:
Natural logarithm of x.
where:
Tips:
Performance Metrics:
Metric | Complexity |
Computational Cost | 9.666 lg(n) M(n) + cost(Pi) + cost(log(2)) = O(n log(n)^{3}) |
Auxiliary Memory | O(n) |
{ Log-AGM : { Pi : "pi" Log2 : "log2" Argument: {...} } }
Computes:
Natural logarithm of x using the AGM algorithm. This version of the Log function requires you to provide the constants Pi and Log(2) as variables.
Both constants Pi and Log(2) must already be defined in an enclosing scope using the specified names. Likewise, both constants must actually be the correct values of Pi and Log2 out to the full precision. Otherwise, the output will be incorrect.
The purpose of this function is to let you reuse the constants Pi and Log(2) so that they don't need to be recomputed if they are also needed elsewhere.
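A minimal sketch of the required setup, with Pi and Log(2) defined as locals in an enclosing scope (the variable names are arbitrary as long as the references match):

{ Scope : {
    Locals : [
        {pi : { Pi : {} }}
        {log2 : { Log : 2 }}
    ]
    Formula : {
        Log-AGM : { Pi : "pi" Log2 : "log2" Argument : {...} }
    }
} }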
The following sample formulas use this function:
where:
Tips:
Performance Metrics:
Metric | Complexity |
Computational Cost | 9.666 lg(n) M(n) = O(n log(n)^{2}) |
Auxiliary Memory | O(n) |
{ ArcCoth : { Coefficient : 18 x : 26 } }
Computes:
Coefficient * ArcCoth(x)
where:
Tips:
Performance Metrics:
Metric | Complexity |
Computational Cost | O(n log(n)^{3}) |
Auxiliary Memory | O(n) |
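For example, ArcCoth(3) = Log(2) / 2, so the following computes Log(2):

{ ArcCoth : { Coefficient : 2 x : 3 } }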
{ ArcSinlemn : { Coefficient : 4 x : 7 y : 23 } }
Computes:
Coefficient * ArcSinlemn(x / y)
where:
Tips:
Performance Metrics:
Metric | Complexity |
Computational Cost | O(n log(n)^{3}) |
Auxiliary Memory | O(n) |
These are y-cruncher's built-in implementations for all the constants. Some of these also expose the secondary algorithms. This is to help construct computationally independent formulas for the usual compute+verify process.
{ GoldenRatio : {} }
{ GoldenRatio : { Power : -1 } }
Computes:
φ^{power}
φ = 1.61803398874989484820458683436563811772030917980576...
where:
Performance Metrics:
Metric | Complexity |
Computational Cost | 1.333 M(n) |
O(n log(n)) | |
Auxiliary Memory | O(n) |
{ E : {} }
{ E : { Power : -1 } }
Computes:
e^{power}
e = 2.71828182845904523536028747135266249775724709369995...
where:
There's no option here to select the algorithm. Internally, it uses:
In other words, it uses the alternating series when power is -1.
Performance Metrics:
Metric | Complexity |
Computational Cost | O(n log(n)^{2}) |
Auxiliary Memory | O(n) |
{ Pi : {} }
{ Pi : { Power : -1 Algorithm : "chudnovsky" } }
Computes:
Pi^{power}
Pi = 3.14159265358979323846264338327950288419716939937510...
where:
Performance Metrics:
Metric | Chudnovsky | Ramanujan |
Computational Cost | O(n log(n)^{3}) | O(n log(n)^{3}) |
Auxiliary Memory | O(n) | O(n) |
{ Zeta3 : {} }
{ Zeta3 : { Power : -1 Algorithm : "wedeniwski" } }
Computes:
ζ(3)^{power}
ζ(3) = 1.20205690315959428539973816151144999076498629234049...
where:
Performance Metrics:
Metric | Wedeniwski | Amdeberhan-Zeilberger |
Computational Cost | O(n log(n)^{3}) | O(n log(n)^{3}) |
Auxiliary Memory | O(n) | O(n) |
{ Catalan : {} }
{ Catalan : { Power : -1 Algorithm : "pilehrood-short" } }
Computes:
Catalan^{power}
Catalan = 0.91596559417721901505460351493238411077414937428167...
where:
There are no plans to expose the remaining built-in algorithms since they will probably be removed in the future anyway.
Performance Metrics:
Metric | Pilehrood (short) | Pilehrood (long) |
Computational Cost | O(n log(n)^{3}) | O(n log(n)^{3}) |
Auxiliary Memory | O(n) | O(n) |
{ Lemniscate : {} }
{ Lemniscate : { Power : -1 Algorithm : "agm-pi" } }
Computes:
Lemniscate^{power}
Lemniscate Constant = 5.24411510858423962092967917978223882736550990286324...
where:
The default algorithm (agm-pi) is computationally ~2x faster than Gauss' formula. But it also requires about twice the amount of disk access. For this reason, Gauss' formula tends to be faster in Swap Mode if the machine has slow disk access.
Performance Metrics:
Metric | AGM-Pi | Gauss | Sebah |
Computational Cost | O(n log(n)^{3}) | O(n log(n)^{3}) | O(n log(n)^{3}) |
Auxiliary Memory | O(n) | O(n) | O(n) |
{ EulerGamma : {} }
{ EulerGamma : { Algorithm : "brent-refined" } }
Computes:
Euler-Mascheroni Constant = 0.57721566490153286060651209008240243104215933593992...
where:
For a fixed # of digits, the two algorithms will always pick a different n parameter. This ensures that they are computationally independent and can be used for compute+verify computation pairs.
Performance Metrics:
Metric | Brent-McMillan (Refined) | Brent-McMillan (Original) |
Computational Cost | O(n log(n)^{3}) | O(n log(n)^{3}) |
Auxiliary Memory | O(n) | O(n) |
This function is the only way to access the Euler-Mascheroni Constant. As of 2018, there are no known formulas for this constant that can be expressed as a finite combination of the other functionality in the custom formulas feature.
This section covers the various Binary Splitting routines that y-cruncher supports.
{ SeriesHyperdescent : { Power : -1
CoefficientP : 1
CoefficientQ : 1
CoefficientD : 1
PolynomialP : [1]
PolynomialQ : [0 -2 -4]
} }
This provides access to y-cruncher's Hyperdescent Binary Splitting framework. This is an optimized special case of SeriesHypergeometric where R(x) is 1 or -1.
This series type is really only useful for computing e, but it can also be used to efficiently compute silly stuff like:
Computes:
Given:
- coefP = CoefficientP
- coefQ = CoefficientQ
- coefD = CoefficientD
- P(x) = PolynomialP
- Q(x) = PolynomialQ
- R(x) = PolynomialR
The polynomials are arrays of the coefficients in increasing order of degree. The 1st element is the constant.
Basic Restrictions:
Polynomial Restrictions:
Tips:
Performance Metrics:
Metric | Complexity |
Computational Cost | O(n log(n)^{2}) |
Auxiliary Memory | O(n) |
Notes:
{ SeriesHypergeometric : { Power : -1
CoefficientP : 1
CoefficientQ : 2
CoefficientD : 2
PolynomialP : [0 0 0 2 3]
PolynomialQ : [-1 -6 -12 -8]
PolynomialR : [0 0 0 1]
} }
This provides access to y-cruncher's CommonP2B3 Binary Splitting framework. This is a generic series type that allows summation of any hypergeometric series of rationals - subject to restrictions such as convergence and implementation limits.
Computes:
Given:
- coefP = CoefficientP
- coefQ = CoefficientQ
- coefD = CoefficientD
- P(x) = PolynomialP
- Q(x) = PolynomialQ
- R(x) = PolynomialR
The polynomials are arrays of the coefficients in increasing order of degree. The 1st element is the constant.
Admittedly, this formula may be inconvenient since it does not match the conventional definition of the hypergeometric function. But it was chosen because it matches the implementation. So it is close to the metal, and any optimization done to the formula will mirror an actual performance improvement. This representation is also slightly more general in that you can use irreducible polynomials that will not factor into a normal rational hypergeometric function.
Basic Restrictions:
Polynomial Restrictions:
The last restriction enforces a minimum rate of convergence on the series. This restriction is rather broad as it will exclude a large number of valid and convergent series. But unfortunately, it is needed for y-cruncher to work correctly.
For example, y-cruncher will reject the following convergent series:
This series diverges for the first 1,000,000 terms before it starts to converge. Furthermore, it exhibits destructive cancellation. This is quite typical of Confluent Hypergeometric series for large inputs. But as of version 0.7.7, y-cruncher cannot handle them and will reject all such series.
Tips:
Performance Metrics:
Metric | R(x) and Q(x) are same degree | R(x) is lower degree than Q(x) | R(x) is higher degree than Q(x) |
Computational Cost | O(n log(n)^{3}) | O(n log(n)^{2}) | Series is Divergent |
Auxiliary Memory | O(n) | O(n) |
{ SeriesBinaryBBP : { Power : 1
CoefficientP : 1
CoefficientQ : 0
CoefficientD : 1
Alternating : "true"
PowerCoef : -10
PowerShift : 9
PolynomialP : [1]
PolynomialQ : [-3 4]
} }
This provides access to y-cruncher's BinaryBBP Binary Splitting framework. This is an optimized special case of SeriesHypergeometric where Q(x) is a power-of-two multiple of R(x). All BBP-type formulas with a base 2 radix fall into this series type.
Computes:
Given:
- coefP = CoefficientP
- coefQ = CoefficientQ
- coefD = CoefficientD
- r = PowerCoef
- s = PowerShift
- P(x) = PolynomialP
- Q(x) = PolynomialQ
The polynomials are arrays of the coefficients in increasing order of degree. The 1st element is the constant.
Computes one of the following depending on whether "Alternating" is true:
As with SeriesHypergeometric, this formula is slightly unconventional.
Basic Restrictions:
Polynomial Restrictions:
Unlike the generic SeriesHypergeometric, the BinaryBBP series has no issues with irregular convergence or destructive cancellation.
Tips:
Performance Metrics:
Metric | Complexity |
Computational Cost | O(n log(n)^{3}) |
Auxiliary Memory | O(n) |
These are not built-in functions, but are templates for various functions.
These are useful for building Machin-like ArcTan() formulas for Pi.
Template using SeriesHypergeometric:
coefP = 1
coefQ = 1
coefD = x
P(k) = 1
Q(k) = -(2k+1) x^{2}
R(k) = 2k+1
{ SeriesHypergeometric : {
CoefficientP : 1
CoefficientQ : 1
CoefficientD : x
PolynomialP : [1]
PolynomialQ : [-x^{2} -2x^{2}]
PolynomialR : [1 2]
} }
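For example, substituting x = 5 into the template gives the Taylor series for ArcTan(1/5):

{ SeriesHypergeometric : {
    CoefficientP : 1
    CoefficientQ : 1
    CoefficientD : 5
    PolynomialP : [1]
    PolynomialQ : [-25 -50]
    PolynomialR : [1 2]
} }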
Unlike ArcCoth(x), y-cruncher has no native implementation of ArcTan(). It isn't needed, since nothing to date comes close to the Chudnovsky and Ramanujan formulas for Pi.
Euler's series for ArcTan() is slightly faster than the Taylor series.
Template using SeriesHypergeometric:
coefP = x
coefQ = x
coefD = 1+x^{2}
P(k) = 2k
Q(k) = (2k+1)(1+x^{2})
R(k) = 2k
If x is an odd number, a factor of 2 can be removed from P(k), Q(k), and R(k).
{ SeriesHypergeometric : {
CoefficientP : x
CoefficientQ : x
CoefficientD : 1+x^{2}
PolynomialP : [0 2]
PolynomialQ : [-1-x^{2} -2-2x^{2}]
PolynomialR : [0 2]
} }
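For example, substituting x = 5 into the template (without removing the common factor of 2 that an odd x allows) gives the Euler series for ArcTan(1/5):

{ SeriesHypergeometric : {
    CoefficientP : 5
    CoefficientQ : 5
    CoefficientD : 26
    PolynomialP : [0 2]
    PolynomialQ : [-26 -52]
    PolynomialR : [0 2]
} }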
See Pi - Machin.cfg for an example.
The following computes ArcTanh(x) for any real x ∈ (-1, 1):
{
Shift : [
{Log : {Scope : {
Locals : {
arg : {...} // <-- Argument goes here.
}
Formula : {
Divide : [
{LinearCombination : [[1 "arg"][1 1]]}
{LinearCombination : [[-1 "arg"][1 1]]}
]
}
}}}
-1
]
}
If you need to reference Pi or log(2) in your argument, the following version is more efficient.
{
Shift : [
{Scope : {
Locals : {
pi : {Pi : {}}
log2 : {Log : 2}
arg : {...} // <-- Argument goes here. You can reference "pi" and "log2".
}
Formula : {
Log-AGM : {
Pi : "pi"
Log2 : "log2"
Argument : {
Divide : [
{LinearCombination : [[1 "arg"][1 1]]}
{LinearCombination : [[-1 "arg"][1 1]]}
]
}
}
}
}}
-1
]
}
The following computes ArcCoth(x) for any real x ∈ (-∞, -1) or (1, ∞):
{
Shift : [
{Log : {Scope : {
Locals : {
arg : {...} // <-- Argument goes here.
}
Formula : {
Divide : [
{LinearCombination : [[1 "arg"][1 1]]}
{LinearCombination : [[1 "arg"][1 -1]]}
]
}
}}}
-1
]
}
If you need to reference Pi or log(2) in your argument, the following version is more efficient.
{
Shift : [
{Scope : {
Locals : {
pi : {Pi : {}}
log2 : {Log : 2}
arg : {...} // <-- Argument goes here. You can reference "pi" and "log2".
}
Formula : {
Log-AGM : {
Pi : "pi"
Log2 : "log2"
Argument : {
Divide : [
{LinearCombination : [[1 "arg"][1 1]]}
{LinearCombination : [[1 "arg"][1 -1]]}
]
}
}
}
}}
-1
]
}
The following computes ArcSinh(x) for any real x:
{
Log : {Scope : {
Locals : {
arg : {...} // <-- Argument goes here.
}
Formula : {
LinearCombination : [
[1 "arg"]
[1 {Sqrt : {
LinearCombination : [
[1 {Power : ["arg" 2]}]
[1 1]
]
}}]
]
}
}}
}
If you need to reference Pi or log(2) in your argument, the following version is more efficient.
{
Scope : {
Locals : {
pi : {Pi : {}}
log2 : {Log : 2}
arg : {...} // <-- Argument goes here. You can reference "pi" and "log2".
}
Formula : {
Log-AGM : {
Pi : "pi"
Log2 : "log2"
Argument : {
LinearCombination : [
[1 "arg"]
[1 {Sqrt : {
LinearCombination : [
[1 {Power : ["arg" 2]}]
[1 1]
]
}}]
]
}
}
}
}
}
The following computes ArcCosh(x) for any real x:
{
Log : {Scope : {
Locals : {
arg : {...} // <-- Argument goes here.
}
Formula : {
LinearCombination : [
[1 "arg"]
[1 {Sqrt : {
LinearCombination : [
[1 {Power : ["arg" 2]}]
[1 -1]
]
}}]
]
}
}}
}
If you need to reference Pi or log(2) in your argument, the following version is more efficient.
{
Scope : {
Locals : {
pi : {Pi : {}}
log2 : {Log : 2}
arg : {...} // <-- Argument goes here. You can reference "pi" and "log2".
}
Formula : {
Log-AGM : {
Pi : "pi"
Log2 : "log2"
Argument : {
LinearCombination : [
[1 "arg"]
[1 {Sqrt : {
LinearCombination : [
[1 {Power : ["arg" 2]}]
[1 -1]
]
}}]
]
}
}
}
}
}