Find the Inverse of an Infinite Square Matrix

infinite-matrices, matrices, polynomials

For my mathematics assignment I am using polynomial interpolation to solve certain problems and I end up with the following scenario:

$\begin{bmatrix} \cdots & 0 & 0 & 0 & 1 \\ \cdots & 1^3 & 1^2 & 1 & 1 \\ \cdots & 2^3 & 2^2 & 2 & 1 \\ \cdots & 3^3 & 3^2 & 3 & 1 \\ \unicode{x22F0} & \vdots & \vdots & \vdots & \vdots \end{bmatrix}^{-1} \cdot \begin{bmatrix} a \\ b \\ c \\ d \\ \vdots \end{bmatrix}$

where the entries of the resulting column vector correspond to the coefficients of an infinite polynomial in decreasing powers.

I was wondering if there is a way to either invert this infinite matrix or find what happens to the polynomial as I increase the size of the matrices used (e.g. to see whether it approaches another function's Taylor or power series).

Best Answer

I'm not sure whether the following is transferable to your left-hand-side matrix, because I only use this matrix in a mirrored form, calling it the "Vandermonde matrix" $VZ$.

Surely the infinite-size version cannot be inverted directly, and the sequence of finite truncations of increasing size has strongly diverging determinants, so I think it is unlikely that you could approach a meaningful limit by taking matrices of finite size, recomputing the inverse, and letting the size grow.
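The claimed growth of the determinants is easy to check numerically for small truncations. A minimal sketch in Python, assuming (as for the matrix $VZ$ below) that the interpolation nodes are $0, 1, 2, \ldots$ and using the classic Vandermonde product formula:

```python
from math import prod

def vandermonde_det(nodes):
    # Classic formula: det V = product over all i < j of (x_j - x_i)
    return prod(nodes[j] - nodes[i]
                for j in range(len(nodes))
                for i in range(j))

# Nodes 0, 1, ..., n-1, as in the finite truncations of VZ
for n in range(2, 9):
    print(n, vandermonde_det(range(n)))
```

The determinants are the superfactorials $\prod_{k=1}^{n-1} k!$, which grow far faster than exponentially, supporting the point that the naive limit of inverses is hopeless.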

But I used the LDU decomposition of my version of the Vandermonde matrix, arriving at two triangular matrix factors and one diagonal factor whose entries do not change when the size is altered, so an inversion of the factors seems possible even at infinite size. Moreover, all entries of the three factors and of their inverses have a simple structure, so you can formally define the series occurring in the dot products.

So if I take

VZ = 
  1  0   0    0    0     0 ...
  1  1   1    1    1     1 ...
  1  2   4    8   16    32 ...
  1  3   9   27   81   243 ...
  1  4  16   64  256  1024 ...
  1  5  25  125  625  3125 ...
 ...                  
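A finite truncation of this matrix can be generated directly; a short Python sketch (the name `VZ` follows the answer's notation):

```python
def VZ(n):
    # Entry (r, c) is r**c; Python evaluates 0**0 as 1,
    # which matches the leading 1 in the first row
    return [[r ** c for c in range(n)] for r in range(n)]

for row in VZ(6):
    print(row)
```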

and do the LDU-decomposition into three matrix-factors

 L*D*U = VZ    
 L =   (Pascal matrix)    // the inverse is the same, just with signed entries
  1  .   .   .  .  .
  1  1   .   .  .  .
  1  2   1   .  .  .
  1  3   3   1  .  .
  1  4   6   4  1  .
  1  5  10  10  5  1

  D = (diagonal of factorials)  // the inverse has reciprocal factorials
  1  .  .  .   .    .
  .  1  .  .   .    .
  .  .  2  .   .    .
  .  .  .  6   .    .
  .  .  .  .  24    .
  .  .  .  .   .  120


 U =  (Stirling numbers, 2nd kind)  // the inverse has Stirling numbers, 1st kind
  1  0  0  0  0   0
  .  1  1  1  1   1
  .  .  1  3  7  15
  .  .  .  1  6  25
  .  .  .  .  1  10
  .  .  .  .  .   1
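This factorization can be verified for a finite truncation. The identity behind it is $r^c = \sum_k \binom{r}{k}\, k!\, S(c,k)$, with $S(c,k)$ the Stirling numbers of the second kind. A self-contained Python check (the helper names are mine, not from the answer):

```python
from math import comb, factorial

def stirling2(n, k):
    # Stirling numbers of the second kind via the recurrence
    # S(n, k) = k * S(n-1, k) + S(n-1, k-1)
    if n == k:
        return 1
    if k <= 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

n = 6
L  = [[comb(r, k) for k in range(n)] for r in range(n)]          # Pascal matrix
D  = [[factorial(k) if k == j else 0 for j in range(n)]          # factorials
      for k in range(n)]
U  = [[stirling2(c, k) for c in range(n)] for k in range(n)]     # Stirling 2nd kind
VZ = [[r ** c for c in range(n)] for r in range(n)]

assert matmul(matmul(L, D), U) == VZ
print("L * D * U == VZ verified for n =", n)
```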

then your result should be somehow equivalent to

   U^-1 * D^-1 * L^-1 * column([a,b,c,d,e,...] )            

where possibly a method of divergent summation has to be applied to the last product:

 P= ( D^-1 * L^-1 * column([a,b,c,d,e,...]) ) 
 U^-1 *_(div summation included) P        // this is not always doable

Sometimes (if $[a,b,c,\ldots]$ behaves nicely) the parenthesized vector $P$ in the latter formula has only finitely many nonzero entries; then the formula for the final result involves only finite, and thus exact, sums.
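As an illustration of this "nice" case, take $a_r = r^2 + 1$, i.e. samples of the polynomial $1 + x^2$. A Python sketch applying $L^{-1}$ (signed Pascal matrix, i.e. forward differences), $D^{-1}$ (reciprocal factorials) and $U^{-1}$ (signed Stirling numbers of the first kind) to a finite truncation — the example polynomial and the orientation (increasing powers, as in $VZ$, mirrored relative to the question) are my assumptions:

```python
from fractions import Fraction
from math import comb, factorial

def stirling1(n, k):
    # Signed Stirling numbers of the first kind via the recurrence
    # s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k)
    if n == k:
        return 1
    if k <= 0 or k > n:
        return 0
    return stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)

n = 6
v = [r ** 2 + 1 for r in range(n)]   # samples of 1 + x^2 at x = 0, 1, 2, ...

# L^-1 * v : signed Pascal matrix = forward differences at 0
Lv = [sum((-1) ** (r - k) * comb(r, k) * v[k] for k in range(r + 1))
      for r in range(n)]

# P = D^-1 * (L^-1 * v) : divide by factorials (exact, via Fraction)
P = [Fraction(Lv[k], factorial(k)) for k in range(n)]
print(P)        # only finitely many nonzero entries

# U^-1 * P : signed Stirling numbers of the first kind
coeffs = [sum(stirling1(c, k) * P[c] for c in range(n)) for k in range(n)]
print(coeffs)   # coefficients of 1 + x^2 in increasing powers
```

Here $P$ has only three nonzero entries, so the final dot products are finite sums and recover the coefficient vector $[1, 0, 1, 0, \ldots]$ exactly.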

But actually, given your definition of such a column-mirrored Vandermonde matrix, I don't know whether your problem is even well-defined at all ...
