MATLAB: Why does the code take so long to evaluate when I use hpf?


I am not an expert.
I need to evaluate sinh and cosh at large values of the argument. I am using the hpf toolbox, but the code takes a long time to run; after ten minutes it still has not finished.
My code uses three for loops, with many iterations in each.
Is there a way to speed up the calculation?

Best Answer

The speed of evaluation for hpf entirely depends on how many digits you tell it to carry. Short numbers evaluate fast. Huge numbers, with thousands or millions of digits, well, what can you possibly expect?
There are several tricks you can employ to gain some speed.
First, CARRY FEWER DIGITS IF POSSIBLE. This is important, IF you really need speed. You can set the number of decimal digits in your numbers either by use of the function DefaultNumberOfDigits, or by setting the number of digits explicitly when you create the HPF number. For example:
DefaultNumberOfDigits 50 5
X = hpf('pi')
X =
3.1415926535897932384626433832795028841971693993751
Now every HPF number will be stored with 50 decimal digits, plus a few extra guard digits to control errors in the digits reported. So in this case the number is stored with a mantissa of 55 digits, but only the first 50 of them will be reported. That allows the code to survive tiny errors in the least significant digits, and you can control that behavior.
DefaultNumberOfDigits 50 5
% X lives in 55 digits, with 50 reported
X = hpf('pi')
X =
3.1415926535897932384626433832795028841971693993751
% X2 lives in 105 decimal digits.
X2 = hpf('pi',[100,5])
X2 =
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
cosh(X)
ans =
11.591953275521520627751752052560137695770917176205
% The first 50 digits reported should be identical to this next value:
cosh(X2)
ans =
11.59195327552152062775175205256013769577091717620542253821288304846269655822373537560755597851472515
timeit(@() cosh(X))
ans =
0.0212826984265
timeit(@() cosh(X2))
ans =
0.0324995134265
So longer numbers take more time to compute and to work with, though compute time will not be linear in the number of digits. Note that if you combine numbers of different lengths, the shorter one wins: the result is carried only to the precision of the shorter operand. So be careful.
X + X2
ans =
6.2831853071795864769252867665590057683943387987502
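If you actually need all of the digits of the longer operand in a result, one option is to build both operands at the higher precision before combining them. This is just a sketch using the constructor syntax shown above; Xwide is an illustrative variable name:
% Recreate the value at 100 digits (plus 5 guard digits) so nothing is
% truncated when it is combined with X2.
Xwide = hpf('pi',[100,5]);
Xwide + X2   % the sum is now carried and reported to 100 digits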
Next, HPF numbers are stored in the form of Migits, not individual decimal digits: the mantissa is packed into blocks of digits. As I recall, the default block length is 4. So this should look like the digits of pi, in case you know what that value is past the first few digits. (I admit, I only remember the first 20 or so.)
X.Migits
ans =
Columns 1 through 10
3141 5926 5358 9793 2384 6264 3383 2795 288 4197
Columns 11 through 14
1693 9937 5105 8209
However, HPF numbers have the potential to work faster if you use a larger block size. In theory, you should be able to roughly double the computational speed for really long numbers IF you work with a Migit block size of 6 digits instead of 4. The problem is, even 100-digit numbers are not that big, so you won't gain much in this regime. If I were computing an approximation for pi to 1 million digits, I would work in blocks of 5 digits. But at 100 decimal digits or so? It won't matter.
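If you want to experiment with the block size anyway, I believe HPF exposes it through DefaultDecimalBase; check the help in your copy of the toolbox, since the exact name and allowed range may differ. A minimal sketch under that assumption:
% ASSUMPTION: DefaultDecimalBase sets the number of decimal digits per Migit.
DefaultDecimalBase 6     % pack 6 decimal digits into each Migit
X6 = hpf('pi');          % X6 is just an illustrative variable name
X6.Migits                % the mantissa should now appear in blocks of 6
DefaultDecimalBase 4     % restore the default block length of 4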
Next, consider that cosh and sinh are exponential functions of their argument. They get HUGE, and they do so very rapidly.
X = hpf(10000000);
cosh(X)
ans =
3.2961162673092197447804430655329544223333861330601e4342944
So that is 3.3 times 10 raised to a power on the order of 4.3 million. That number would have about 4.3 million decimal digits before you ever saw a decimal point, IF you bothered to write it out in full. And yes, computing that number out to so many digits? UGH.
It's easy to compute things to a high number of digits. But understanding just how huge such a number is? Not so easy.
Finally, the best solution is to avoid the need for a tool like HPF at all. That can come from a good understanding of numerical analysis, of approximation theory, of numerical methods in general. Sometimes all you need are logs. Then you can work in double precision, which will be vastly faster.
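For example, here is a minimal double-precision sketch of the log idea for the cosh value computed above. It relies only on the identities cosh(x) = exp(x)*(1 + exp(-2*x))/2 and sinh(x) = exp(x)*(1 - exp(-2*x))/2, so the logs never overflow:
x = 1e7;                                    % the same argument as the HPF example
logcosh = x + log1p(exp(-2*x)) - log(2);    % natural log of cosh(x)
logsinh = x + log1p(-exp(-2*x)) - log(2);   % natural log of sinh(x)
d10 = logcosh/log(10);                      % log10 of cosh(x)
e10 = floor(d10);                           % decimal exponent: 4342944
m10 = 10^(d10 - e10)                        % leading mantissa: about 3.2961
% m10 and e10 reproduce the leading digits and exponent of the HPF result
% above (3.296...e4342944), but the computation never leaves double
% precision, so it is essentially instantaneous.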