High precision calculator in Windows 7
HP Museum Forums, Old HP Forum Archives (https://archived.hpcalc.org/museumforum), thread /thread154780.html
High precision calculator in Windows 7 - jotne - 08-20-2009

If you need a calculator with lots of digits, try out the calc in Windows 7 and set it to scientific mode.
Pi = 3.1415926535897932384626433832795 Not sure if we do need it :)
Re: Higher precision calculator in OS/X and Linux - Egan Ford - 08-20-2009

$ bc -l
If you must have RPN, then install calc:

$ calc -d1000 pi | fold -w68

Re: High precision calculator on the HP200LX - Gerson W. Barbosa - 08-20-2009

In the 200LX set the calculator to the proper modes, that is, Radians and RPN, then
PI

Re: Higher precision calculator in OS/X and Linux - Les Wright - 08-21-2009

Egan, this is cool! I wasn't aware of the existence of bc. I am assuming that a() is the arctangent function. Where does one find a listing of functions for bc?
Re: Higher precision calculator in OS/X and Linux - Les Wright - 08-21-2009

Cancel that. Google is my friend.

Re: Higher precision calculator in OS/X and Linux - Egan Ford - 08-21-2009

man bc also works.
One feature of bc that I like is ibase/obase. It makes working with bases very easy. Edited: 21 Aug 2009, 10:43 a.m.
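For comparison, the ibase/obase round trip can be sketched outside bc; here is a minimal Python analogue (my illustration, not bc itself), with int() playing the role of ibase and format() the role of obase:

```python
# Parse a hex string (ibase=16 in bc terms).
x = int("ff", 16)

# Print it in binary (obase=2 in bc terms), and in the default decimal.
print(format(x, "b"))   # -> 11111111
print(x)                # -> 255
```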
Re: Higher precision calculator in OS/X and Linux - Jonathan Eisch - 08-21-2009

Quote: Real men use dc.

Jonathan
(For that matter, even bc traditionally uses dc to perform the calculations.) Edited: 21 Aug 2009, 12:20 p.m.
Re: Higher precision calculator in OS/X and Linux - Howard Owen - 08-21-2009

Both bc and dc have long histories. According to Wikipedia, "[dc] is one of the oldest Unix utilities, predating even the invention of the C programming language." bc was traditionally implemented on top of dc, simplifying the latter's terse syntax. The bc page at Wikipedia references a manual page for bc in the Unix v7 manual set. A POSIX draft standard specified bc in 1991. The current bc in most places is GNU bc, a superset of POSIX bc. I'm sure you were all dying to know that. :)
Regards,

Re: High precision calculator in Windows 7 - jotne - 08-21-2009

New record for PI

Re: Higher precision calculator in OS/X and Linux - Egan Ford - 08-21-2009

Quote: I guess real men do not need trig, log, and other transcendental functions either. :)
I use calc over dc because I want more math functions with a CLI calculator.
Re: High precision calculator in Windows 7 - Gerson W. Barbosa - 08-21-2009

Don't be misled by the website address: it's only 2.5 trillion digits :)
Re: High precision calculator in Windows 7 - Eric Smith - 08-21-2009

What's a discrepancy of only 22.5 trillion digits between friends?
Re: Higher precision calculator in OS/X and Linux - Les Wright - 08-22-2009

bc has a very limited repertoire of built-in math functions. One needs to know a few trig and inverse trig identities to compute arcsine, arccosine, or even tangent. The scripting language seems really straightforward and I don't think it is too hard to build routines to compute more complicated functions. I may work on something to compute Gamma, a longtime pet project of mine :)
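The identities in question are the standard ones, e.g. arcsin(x) = arctan(x/sqrt(1-x^2)) and arccos(x) = pi/2 - arcsin(x). A small Python sketch, with math.atan and math.sqrt standing in for bc's a() and sqrt():

```python
import math

def asin_from_atan(x):
    # arcsin(x) = arctan(x / sqrt(1 - x^2)), valid for |x| < 1
    return math.atan(x / math.sqrt(1.0 - x * x))

def acos_from_atan(x):
    # arccos(x) = pi/2 - arcsin(x)
    return math.pi / 2.0 - asin_from_atan(x)
```

The same two lines translate almost verbatim into a bc function built on a().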
Re: Higher precision calculator in OS/X and Linux - Egan Ford - 08-22-2009

Quote: This is true if you invoke 'bc -l'. dc on the other hand has no such set of functions (although with effort one could write them).
bc -l math functions:

s(x)   The sine of x; x is in radians.
c(x)   The cosine of x; x is in radians.
a(x)   The arctangent of x; the result is in radians.
l(x)   The natural logarithm of x.
e(x)   The exponential of x.
j(n,x) The Bessel function of integer order n of x.

Quote: I'd like to see that. Especially for any scale, e.g. gamma(pi) to 1000 digits :)

Re: Higher precision calculator in OS/X and Linux - Eric Smith - 08-22-2009

The MPFR arbitrary-precision floating-point library (based on the GMP arbitrary-precision integer library) includes the gamma function, as well as ln(gamma(x)) and ln(abs(gamma(x))) functions.
Re: Higher precision calculator in OS/X and Linux - Eric Smith - 08-22-2009

Real men can compute transcendental functions using only addition and shifting. :) I'm giving a presentation this afternoon to SVFIG on computing exponential and logarithmic functions without multiplication. It does require having a small table of precomputed constants, though. This is how HP calculators do it, though they use decimal while my presentation only covers binary.
The algorithm is close to one invented by Henry Briggs in 1624, only about 10 years after Napier invented logarithms. There's an explanation and an analysis of the HP-35 firmware on Jacques Laporte's site: http://www.jacques-laporte.org/digit_by_digit.htm
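To make the shift-and-add idea concrete, here is a rough Python sketch (my own illustration, not the HP-35 firmware, which works in decimal): build x as a product of factors (1 + 2^-k), each application of which costs only a shift and an add in binary, and accumulate the precomputed logarithms of the factors actually used.

```python
import math

# The only place logarithms are needed: a small precomputed table of
# ln(1 + 2^-k), analogous to the constants stored in calculator ROM.
TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 45)]

def briggs_log(x):
    # For x in [1, 2): greedily build a product p of (1 + 2^-k) factors
    # up to x.  Note p * (1 + 2^-k) = p + p/2^k, a shift and an add.
    assert 1.0 <= x < 2.0
    p, result = 1.0, 0.0
    for k in range(1, 45):
        candidate = p + p / 2.0 ** k
        if candidate <= x:
            p = candidate
            result += TABLE[k - 1]
    # Tiny final correction: ln(x/p) ~ (x - p)/p for x/p near 1.
    return result + (x - p) / p
```

Each factor is used at most once; after the loop the residual x - p is below p * 2^-44, so the linear correction term is accurate to well beyond double precision.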
Re: Higher precision calculator in OS/X and Linux - Egan Ford - 08-22-2009

Quote: I was waiting for someone to step up and state that. Yesterday I started writing trig functions for dc. I stopped after 15 minutes to ask myself, why? Perhaps I'm not a real man, just a lazy one, and I'm ok with that. :)

Re: Higher precision calculator in OS/X and Linux - Les Wright - 08-22-2009

Quote: Now THAT'S a challenge. The MPFR library that Eric referred me to seems to do it by using the Stirling approximation, predetermining how many terms to go out to given the desired precision and the argument. It then calculates the Bernoulli numbers on the fly. I have a little routine that does it using Stephen Moshier's qfloat library, but I used a fixed number of terms in the Stirling series for larger arguments, do Pochhammer shifting for smaller ones, and I precompute the Bernoulli coefficients and put them in an array. A qfloat is about 104 or 105 digits, and I get about 102 or 103 of those right. But that is fixed multiple precision, not arbitrary precision. May look at the Spouge method; not quite as good as Lanczos, but the coefficients are much easier to compute and one can predetermine the number of terms required thanks to a fairly straightforward error bound estimate.
Les
Re: Higher precision calculator in OS/X and Linux - Paul Dale - 08-22-2009

Quote: Go to Wolfram Alpha and see this.
- Pauli
Gamma to Arbitrary Precision - Les Wright - 08-23-2009

Here is a simple approach that just came to me. This is NOT how Gamma is typically computed in software, because it is slower than the Stirling series and related approaches. However, it is easier to code, and does not require any advance fiddling to figure out how many terms one needs for a desired number of digits.

1. Take your desired argument a.
2. Compute the left-sided incomplete gamma by its series for (a, a+1).
3. Compute the right-sided incomplete gamma by continued fraction for (a, a+1).
4. Add 'em together.

The only thing one needs to have the code do is derive on the fly a desired convergence epsilon given the desired precision (or scale, as it is called in bc). Code for the incomplete gamma functions is given in Numerical Recipes, and indeed this is what I have ported (indeed, shamelessly plagiarized) in my various implementations of the incomplete gamma functions and their specialized offspring (like the cumulative normal distribution, error functions, and cumulative chi-square distributions). Some time ago when I was playing around with Stephen Moshier's high precision qfloat library (about 105 decimal digits), this was my adaptation of the Numerical Recipes code:
#include <qfloat.h>

Really should be no trouble to port something this straightforward to a bc script. Another little challenge!
Les
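The four steps above can be sketched in plain double precision Python, loosely after the Numerical Recipes gser/gcf routines the post mentions; the split point x = a+1 and the tolerances are my own assumptions, not taken from the post:

```python
import math

def lower_incomplete_series(a, x, eps=1e-15):
    # Series for the lower incomplete gamma:
    #   gamma(a, x) = x^a e^-x * sum_{n>=0} x^n / (a (a+1) ... (a+n))
    term = 1.0 / a
    total = term
    denom = a
    while True:
        denom += 1.0
        term *= x / denom
        total += term
        if term < total * eps:
            break
    return total * math.exp(-x + a * math.log(x))

def upper_incomplete_cf(a, x, eps=1e-15, itmax=300):
    # Modified Lentz continued fraction for the upper incomplete gamma,
    # in the style of Numerical Recipes' gcf().
    tiny = 1e-300
    b = x + 1.0 - a
    c = 1.0 / tiny
    d = 1.0 / b
    h = d
    for i in range(1, itmax):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        if abs(d) < tiny:
            d = tiny
        c = b + an / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return math.exp(-x + a * math.log(x)) * h

def gamma_via_split(a):
    # Steps 1-4: split at x = a + 1, then add the two halves.
    x = a + 1.0
    return lower_incomplete_series(a, x) + upper_incomplete_cf(a, x)
```

In an arbitrary precision setting the fixed eps would become the scale-derived epsilon the post talks about.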
Re: Higher precision calculator in OS/X and Linux - Egan Ford - 08-23-2009

Quote: Yes, that is how we'll check the results. For Les, it's about the journey, not the destination.

Re: Higher precision calculator in OS/X and Linux - Eric Smith - 08-23-2009

MPFR gives results that match Wolfram Alpha. Here's the trivial main program:
#include <stdio.h>

Arbitrary Precision Gamma for bc - Les Wright - 08-23-2009

My preliminary efforts go sluggishly. In the gcf half of the code, I am finding I need to precompute an epsilon that is several orders of magnitude larger than what I would expect for a given scale in order to get convergence. I am also trying to figure out how, if I compute a result to an increased scale internally, I can output it at a lower scale so that the extra guard digits I have kept are not displayed. As it stands now, a computed result shows all the extra digits, even if I want to reformat the output to show my final result with a different, slightly smaller scale. For example, say my desired scale for display is 100. If my code temporarily increases this internally to 110, the output has a scale of 110, even if I set the scale back to the original 100 before returning the result. There must be some way in bc to truncate a result from a higher scale to a lower scale.... Also, the approach is slow for high precisions, as I thought it would be. There is a reason why Stirling, or other approaches derived from or inspired by it, are used to this day, and my little idea is not!
Les
Re: Arbitrary Precision Gamma for bc - Les Wright - 08-24-2009

I may get more luck implementing [link:http://en.literateprograms.org/Gamma_function_with_Spouge's_formula_(Mathematica)]this[/link]. Spouge's formula is a good option since one can easily set a conservative error bound a priori. For 1000 digits (and a relative precision of 1e-1000), about 1260 terms in the series need to be computed. Thankfully, this can be done recursively: each coefficient is a multiplicative factor of the previous one. Will give it a go and see what I come up with.

Les
Edited: 24 Aug 2009, 11:18 p.m.
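For the curious, Spouge's formula can be sketched with Python's decimal module (my own construction, not the literateprograms code): the coefficients c_k = (-1)^(k-1)/(k-1)! * (a-k)^(k-1/2) * e^(a-k) are accumulated directly, with a taken as roughly 1.26 times the digit count per the generous bound above, plus some guard digits. The dec_pi helper is the pi recipe from the Python decimal documentation.

```python
from decimal import Decimal, getcontext

def dec_pi():
    # Arbitrary precision pi, per the recipe in the Python 'decimal' docs.
    getcontext().prec += 2
    lasts, t, s, n, na, d, da = 0, Decimal(3), 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s

def spouge_gamma(z, digits=30):
    # Gamma(z) via Spouge's formula in decimal arithmetic.  The extra
    # guard digits absorb the cancellation discussed later in the thread.
    getcontext().prec = digits + digits // 4 + 10
    a = int(1.26 * digits) + 1
    zz = Decimal(str(z)) - 1            # Spouge gives Gamma(z+1), so shift
    total = (2 * dec_pi()).sqrt()       # c0 = sqrt(2*pi)
    fact = Decimal(1)                   # running (k-1)!
    for k in range(1, a):
        if k > 1:
            fact *= k - 1
        ck = (Decimal(a - k) ** (Decimal(k) - Decimal("0.5"))
              * Decimal(a - k).exp() / fact)
        total += (ck if k % 2 == 1 else -ck) / (zz + k)
    return (zz + a) ** (zz + Decimal("0.5")) * (-(zz + a)).exp() * total
```

As the post says, each c_k could instead be obtained from c_(k-1) by a multiplicative update; the direct form above is just easier to read.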
Re: Arbitrary Precision Gamma for bc - Paul Dale - 08-25-2009

Interesting, this is exactly the approximation for (real and complex) Gamma I'm using in the 20b scientific firmware :)
- Pauli
Re: Arbitrary Precision Gamma for bc - Les Wright - 08-25-2009

The Spouge series is quite simple to implement, and I think easy to implement in arbitrary precision, but I think it can be quite slow compared to a well-crafted and properly terminated Stirling series, which to this day seems to be the first choice in the software libraries. That said, I have used Nosy on the HP49G+ and Gamma there looks like either a polynomial or CF approximation of sorts. I have coded the Spouge series in Mathematica just as I would in C++ or bc, but the bc port is giving me headaches, since bc lacks built-in exponentiation for non-integers, and the routine I have doesn't like it when the number to be raised to a power is 0 (which it often is with Spouge). I am not good at error trapping either! That said, bc is a lot of fun BECAUSE it is so stripped down. A challenge. Like trying to program the HP-65 or 33C with so few steps available :)
Les
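bc's ^ operator is indeed integer-only; the usual workaround is x^y = e^(y ln x), which blows up at x = 0 exactly as described. A guarded sketch in Python's decimal module (my own, not the bc routine in question):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def dpow(x, y):
    # Non-integer power via exp(y * ln x).  ln(0) is undefined, so the
    # x == 0 case (common in the Spouge sum) is handled up front.
    x, y = Decimal(x), Decimal(y)
    if x == 0:
        return Decimal(1) if y == 0 else Decimal(0)   # assumes y >= 0
    return (y * x.ln()).exp()
```

The same guard, written as an if/else around an e(y*l(x)) call, is what the bc version needs.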
Re: Arbitrary Precision Gamma for bc - Les Wright - 08-25-2009

BTW, for anyone who speaks Mathematica, here is the routine I am trying to port to bc:
gamma[zin_, n_] :=
I coded it more like C and relied little on Mathematica's built-in capabilities (like summing over an index) to make porting easier later. It is several times slower than the built-in Gamma[] at a given argument and precision.
Re: Arbitrary Precision Gamma for bc - Les Wright - 08-26-2009

Well, having just ported my little Mathematica routine to C++ and Moshier's 105-digit qfloat library, I have just discovered the Achilles heel of the Spouge series in a fixed precision system, albeit one so extended. Computing the coefficients on the fly with so many evaluations of exp() and log() seems to be contributing to digit loss, whereas my trusty Stirling routine, where I have the Bernoulli numerators and denominators stored in a table (I computed them with Maple and just pasted them over), manages to attain 103 or 104 digit accuracy on average. I am perplexed as to the issue; Moshier's qfloat exp() and log() overloaded functions are well written and seem to give full 105 digit accuracy. Must be something fishy in my code that loses sometimes as many as twenty digits for some arguments. Still can't get the thing to work in bc either! No clue here.... In the 20b you may want to just precompute and store the coefficients rather than compute them on the fly each time. For 15 digits of internal precision, that is between 12 and 19 terms, depending on how conservative your error estimate is, etc. In an arbitrary precision environment like Mathematica, Spouge works great, but it is slower than the built-in function. hth,
Les Edited: 26 Aug 2009, 3:49 a.m.
Re: Arbitrary Precision Gamma for bc - Paul Dale - 08-26-2009

Quote: I do precompute the coefficients. Precision on the stack is 16 digits, although internally I'm computing to 39. Not all functions are evaluated to the full 39 digits though.
Re: Arbitrary Precision Gamma for bc - Steve Perkins - 08-26-2009

I've been very intrigued by this thread. I've been trying to code up a usable arbitrary precision gamma function for a long time. I've been somewhat successful using the incomplete Gamma, extending the tail as far as needed until I get convergence, following an example by Victor Toth. I didn't, however, think about doing the two halves with different implementations to get better convergence. I'll have to try that! Here's my attempt to simplify Spouge for C. It works well if you keep enough guard digits for values less than 10 (like pi). Note that the powers in the inner loop are all integer, with a square root added. The final computation requires non-integer powers, but that could be done with the exp() and log() functions. I hope this helps someone. I compiled this with my multiprecision package of choice, MAPM. I just define double to be the MAPM extended precision type. I was able to get 100 digits of precision in less than a second. 1000 digits took about 6 minutes on my PC, so that seems too long. I have to define guard digits with 20% more places than my desired output. I'm still not sure why it takes so many guard digits. Usually I can get away with 8 extra guard digits, but not here.
double gamma(double z, int decplaces) {
Edited: 26 Aug 2009, 12:35 p.m.
Re: Arbitrary Precision Gamma for bc - Les Wright - 08-27-2009

Since the terms of the series alternate in sign, I am assuming digit loss due to subtraction? That is the only thing I can think of. In my C code, Moshier's qfloat exp, log, and sqrt give full 105 digits almost all the time in the ranges I am interested in. I also modified the qfloat.h header file to include an inline qfloat atn() function, which I use to compute pi to full 105 digit precision with 4*atn(1). I like your vastly simplified code; looks like you took the time to reduce the coefficient formulae to an even simpler recursion than what I came up with. Why don't we try this: run two sums, sum1 and sum2, the first one for odd k, the second for even, and do only one big subtraction at the end? Also, in diagnostics you can return sum1 and sum2 as well as the final output, to see if there is anything about the relative magnitude of the numbers that contributes to digit loss when subtracted. When I find time I will port your code to a qfloat version and see what I get. I like your version a lot better so far. Adding together the right and left incomplete gammas is pretty good, but for lots of digits it can be slow, and I am finding it a big challenge with bc to define an epsilon for the continued fraction half that will provide desired convergence for a given arbitrary precision. But with fixed multiple precision qfloat I am pleased. Still, the approach seems inefficient, and I can see why most libraries seem to want to stick with trusty Stirling, especially if they are able to efficiently generate the Bernoulli numbers on the fly for any given precision. I know that my own qfloat gamma, with the Bernoulli numerators and denominators in a table, does really decently. Arbitrarily extending the series of the left-sided incomplete gamma works fine on fixed precision calculators, but in arbitrary precision settings I don't know how far out you need to go to get desired precision for a given argument.
Also, I think if you go too far, "noise" gets introduced and precision degrades. You may want to look up Glen Pugh, the author of this excellent thesis. He is also approachable; a couple of years ago he was very helpful to me when I had questions about his approach to Lanczos, and he even sent me expanded coefficient and error tables (long ago lost when my inbox was corrupted a while back). Here is his webpage. The 1.26*decplaces estimate of a really is overkill. The wiki page I referred to above uses it, but admits that it is a very generous overestimate, as a key multiplicative factor that can greatly reduce the relative error by a few orders of magnitude is dropped from Spouge's original estimate so that the inequality is easier to solve for a given desired epsilon. I don't have much time to come back to this for a few days, but I would be interested to know what you find. This is a great little programming problem. FWIW, Free42 uses Hugh Steers' BCD20, which implements Lanczos with a table of precomputed coefficients. Les
PS: A question: I am not familiar with the "k & 1" instruction. Is this a simple way to test for oddness or evenness? Also, it looks like your routine actually returns gamma(z+1) = z!, which is how Spouge originally wrote it. My routine rewrites the formulae, taking the shifting into account at the beginning. This may overly complicate things in my code.
Digit loss mystery solved - Les Wright - 08-27-2009

Steve, I ported your routine to Mathematica and set it up so that the argument was handled not as an exact number but as an arbitrary precision decimal number to the desired number of digits. I worked with 100 just for the sake of speed. Guess what? If we call "sum" the sum of all the positive terms and "sumneg" the sum of all the negative terms, these numbers happen to be almost identical; with 100 digits of arbitrary precision the first 16 to 20 or so digits often look identical, depending on the argument. For illustration, here is my Mathematica code. Even if you don't speak Mathematica, it is pretty clear:
gamma_steve[zin_, n_] := Module[

Here is output of sum, sumneg, and the final result for Pi and 100 digits. BTW, your code actually computes Gamma[z+1], hence the shift:
In[102]:= gamma_steve[Pi - 1, 100]

Mathematica has a built-in function that estimates the precision of the final result:
In[103]:= Precision[%]

The greater the desired precision, the closer together sum and sumneg are, and the more leading digits get lost to subtraction. Hence the need for about 20% more guard digits. I am thinking Spouge ain't all that good for this application unless one is on a fast machine and can pay the price for so many more guard digits. FYI, for those who know Mathematica: if I change the definition of z to z=Rationalize[zin] and pass it to the routine as an exact number, Mathematica uses it as such and carries out the computations exactly before converting to a 100 digit decimal number at the end, maintaining full precision. The N[] command converts it to an approximate 100 digit number, and a cost is paid for that. Well, at least we now know we aren't going crazy! Hope you find this helpful. If you have access to Mathematica, please email me and I will send the notebooks.
Les
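The mechanism here is ordinary catastrophic cancellation; a toy decimal illustration (the numbers are invented for the example, not taken from the gamma computation):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30

# Two 30-digit values that agree in their first 23 significant digits,
# like Les's "sum" and "sumneg": subtracting them leaves only 7 digits.
sum_pos = Decimal("8.71209346522057193346823917401")
sum_neg = Decimal("8.71209346522057193346821104523")
diff = sum_pos - sum_neg
print(diff)   # only the trailing 7 significant digits survive
```

The subtraction itself is exact; the information was simply never there, which is why more guard digits in the sums are the only cure.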
More bad news - Les Wright - 08-27-2009

The bigger the argument, the worse the digit loss. For 100! and 200! I am seeing digit loss due to subtraction on the order of 30 or 40 percent when we take a = desired precision.
No mystery - Les Wright - 08-27-2009

The digit loss caveat is actually covered in the pithy Wikipedia entry on the approximation [link:http://en.wikipedia.org/wiki/Spouge's_approximation]here[/link].
Bugger :(
Re: Arbitrary Precision Gamma for bc - Les Wright - 08-27-2009

See the rambling below. I don't think that precomputing the coefficients will save you from the evils of digit loss due to subtracting large numbers that are so close together. For 16 digits of stack precision, you will need every one of your internal digits for large arguments. But if Pi, log, exp, and your power function are full precision internally, your interim results should be okay. Just keep in mind that the digit loss is due to subtraction loss within the alternating series, not the imprecision of the functions used to compute the terms. Bummer.
Les
Re: Arbitrary Precision Gamma for bc - Paul Dale - 08-27-2009

Quote: This is a test for oddness. '&' is a bitwise AND operation.
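Concretely, the expression works the same way in C and Python: k & 1 masks off everything but the least significant bit of k, which is 1 exactly when k is odd.

```python
# k & 1 keeps only the lowest bit of k: 1 for odd k, 0 for even k.
parities = [k & 1 for k in range(6)]
print(parities)   # -> [0, 1, 0, 1, 0, 1]
```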
Re: Arbitrary Precision Gamma for bc - Paul Dale - 08-27-2009

Quote: Yes, these functions are full precision. I'm using IBM's decNumber library, which is a very nice BCD floating point implementation including all the nice extras: infinities, NaNs, subnormals, packed decimal, etc. However, I might revisit the implementation eventually; this approximation was quick and easy and I [i]knew[/i] it would be accurate enough over the entire domain. I'm very much aware of the risks of loss of precision, having done a fair amount of numerical analysis over the years.
Re: Digit loss mystery solved - Steve Perkins - 08-27-2009

A very nice and helpful analysis! Spouge's method is the fastest way I've found in my environment to calculate the gamma function to arbitrary precision, even though I have to keep more guard digits around to do it. I haven't tackled the problem of evaluating the function for larger arguments, but they could be handled with range reduction.
I'm glad you realized my function was actually calculating the factorial, or Gamma(z+1). I forgot I subtracted 1 before calling it in my main routine.
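The range reduction mentioned above is the standard recurrence Gamma(z+1) = z * Gamma(z): fold a large argument down into a base interval, evaluate the core approximation there, and multiply the pulled-out factors back. A Python sketch, using math.gamma as a stand-in for whatever core approximation (Spouge, Stirling, Lanczos) is in play:

```python
import math

def gamma_range_reduced(z):
    # Pull z down into [1, 2) via Gamma(z) = (z-1) * Gamma(z-1),
    # accumulating the factors, then evaluate the core routine once.
    acc = 1.0
    while z >= 2.0:
        z -= 1.0
        acc *= z
    return acc * math.gamma(z)
```

In arbitrary precision the accumulated product costs almost nothing compared to the series evaluation, which is why libraries routinely reduce first.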
Re: Arbitrary Precision Gamma for bc - Paul Dale - 09-03-2009

Found that my complex gamma function wasn't working properly, so I switched to a Lanczos approximation instead. Smaller code and likely a bit faster.
- Pauli
decNumber library - Eric Smith - 09-03-2009

Have you found (or written) any basic trig and inverse trig functions for use with decNumber?
Re: decNumber library - Paul Dale - 09-03-2009

Yes. The ill-fated OpenRPN project had the first version of the code. The 20b repurposing site has later versions. I can email the current code if you PM your email address to me.
Re: decNumber library - Paul Dale - 09-03-2009

Found your brouhaha address and sent the code there.
- Pauli