I don't know whether you really need to compute the actual inverse: under the 2-norm, the condition number $\kappa$ can be obtained as the ratio of the extreme singular values, $\sigma_{\max}/\sigma_{\min}$, which does not require inverting the matrix.
From the R manual: "The condition number takes on values between 1 and infinity, inclusive, and can be viewed as a factor by which errors in solving linear systems with this matrix as coefficient matrix could be magnified."
The acceptable condition number is dictated by the resolution your algorithm requires.
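As an illustrative sketch (not part of the original answer), the 2-norm condition number can be computed in Python/NumPy as the ratio of the extreme singular values, with no inverse involved; `np.linalg.cond` uses the same definition by default:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# 2-norm condition number as the ratio of the extreme singular values
s = np.linalg.svd(A, compute_uv=False)
kappa = s.max() / s.min()

# np.linalg.cond defaults to the 2-norm (largest/smallest singular value)
assert np.isclose(kappa, np.linalg.cond(A))
```

MATLAB's `cond(A)` and R's `rcond` (which reports the reciprocal) follow the same idea, as per the links below.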
Double Precision Representation
If you are computing under the IEEE 754 64-bit double-precision representation (i.e. 64 bits per value, as in MATLAB), you will have:
- 64 bits for the representation,
- for the fraction: 1 bit for the sign, 52 bits for the value,
- for the exponent: 11 bits, biased by 1023,
- Hence, $v=(-1)^{b_{63}} \left(1+\sum_{i=0}^{51} b_i 2^{i-52} \right) 2^\left(\sum_{i=52}^{62} b_i 2^{i-52}-2^{10}+1\right)$
- The smallest normal magnitude is $2.2251\cdot 10^{-308}$,
- The largest finite magnitude is $1.7977\cdot 10^{308}$,
- The largest integer represented exactly is $\pm 2^{53}=\pm 9007199254740992$ (beyond this, not every integer has its own representation).
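These limits can be inspected directly from Python's standard library (a quick check, not part of the original answer):

```python
import sys

# Limits of IEEE 754 double precision, as exposed by Python
print(sys.float_info.max)       # about 1.7977e+308
print(sys.float_info.min)       # about 2.2251e-308 (smallest normal)
print(sys.float_info.mant_dig)  # 53 = 52 stored fraction bits + implicit leading 1
print(sys.float_info.dig)       # 15 decimal digits are always preserved

# Integers are exact only up to 2**53
print(2.0**53 == 2.0**53 + 1)   # True: 2**53 + 1 is not representable
```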
From here you can build some VERY rough estimates, which are only indicative of how far the figures can reach.
Significant Order
The smallest order is $10^{-308}$. If your condition number / eigenvalue ratio is $\kappa=10^{6}$, you could only make $308/6\approx 51$ products of the powers of $A$, approximately, before the smallest or the biggest eigenvalue gets truncated.
For example, if $v=10^{6}$, then $v^{51}=10^{306}$, while $v^{52}$ overflows to Inf.
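The overflow point can be reproduced with repeated double-precision products (a sketch, not part of the original answer):

```python
import math

# Repeated products with ratio v = 1e6: 51 products stay finite, the 52nd overflows
v = 1e6
x = 1.0
for _ in range(51):
    x *= v
print(math.isfinite(x))  # True: x is about 1e306
x *= v                   # about 1e312 > 1.7977e308
print(math.isinf(x))     # True: overflowed to inf
```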
From here, with $n$ the number of expected iterations using the compromised values, the allowable conditioning would be given by:
$$\kappa=10^{308/n}$$
Significant Digits
On the other hand, the representation allows 52 fraction bits, i.e. about $15$ significant decimal digits. Hence, if you want to keep a resolution of $6$ significant digits, you can afford only about $15-6=9$ decimal digits of growth in the powers of $A$ before the smallest or the biggest eigenvalue gets truncated.
For example:
- $v=123456$ (6 digits),
- $v^2=15241383936$ (11 digits),
- $v^3=1881640295202816$ (16 digits),
- $v^4=232299784284558852096$ (21 digits), truncated to $\sim 232299784284558847360$, with an error ratio of $10^{-17}$,
- $v^5=28678802168634497644363776$ (26 digits), truncated to $\sim 28678802168634496830758912$, with an error ratio of $10^{-17}$,
- $v^6=3540570200530940541182574329856$ (31 digits), truncated to $\sim 3540570200530940126422213591000$, with an error ratio of $10^{-16}$.
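The truncation in the list above can be reproduced by comparing Python's exact integers against a double-precision running product (a sketch, not part of the original answer):

```python
v = 123456
x = 1.0
for n in range(1, 7):
    x *= v                      # double-precision running product
    exact = v**n                # exact arbitrary-precision integer
    rel_err = abs(exact - int(x)) / exact
    print(f"v^{n}: {exact} ({len(str(exact))} digits), rel. error {rel_err:.0e}")
```

Up to $v^3$ (16 digits, still below $2^{53}$) the error is exactly zero; from $v^4$ onward the relative error settles around $10^{-16}$ to $10^{-17}$.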
From here, with $n$ the number of expected iterations using the compromised values and $d$ the number of significant digits to retain ($d=6$ above), the allowable conditioning would be given by:
$$\kappa=10^{(15-d)/n}$$
Hence, the proper conditioning figure your calculation should retain must be read off from the factors of the problem itself, based on how the data are used and on the purpose of the algorithms.
https://stat.ethz.ch/R-manual/R-devel/library/Matrix/html/rcond.html
https://www.mathworks.com/help/matlab/ref/cond.html
https://en.wikipedia.org/wiki/Condition_number
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Let $C$ denote the last column of $A$. Then we have
$$-\frac{1}{2}C=B.$$
This shows that $(0,0,0,0, -\frac{1}{2})^T$ is a solution of the system $Ax=B$.
Since the system has a unique solution, $(0,0,0,0, -\frac{1}{2})^T$ is this solution, hence $x_2=0.$
Best Answer
I'll consider the case where $b$ is a sequence of 1's followed by 0's, so that $A$ has its first columns removed and $B$ has its last columns removed; the approach can be generalized from there. The resulting matrices can be written as
$$ A_1 = \pmatrix{-I & A_{12}\\0 & A_{22}}, \quad B_1 = \pmatrix{B_{11} & 0\\B_{21} & -I}. $$
We can multiply $A_1$ by a matrix from the right to get a new system:
$$ \left[\pmatrix{-I & A_{12}\\0 & A_{22}} \pmatrix{I & A_{12}\\0 & I}\right] \left[\pmatrix{I & A_{12}\\0 & I}^{-1}X\right] = \pmatrix{B_{11} & 0\\B_{21} & -I} \implies\\ \underbrace{\pmatrix{-I & 0\\0 & A_{22}}}_{A_2} \underbrace{\left[\pmatrix{I & A_{12}\\0 & I}^{-1}X\right]}_{X_2} = \pmatrix{B_{11} & 0\\B_{21} & -I}. $$
From there, the matrix $A_2$ should be relatively well conditioned (cf. this post for instance), and the solution process is therefore more straightforward. However, we can actually use the structure of $A_2$ to make the solution process easier. Breaking $X_2$ up vertically, we have
$$ \pmatrix{-I & 0\\0 & A_{22}} \pmatrix{X_{21}\\X_{22}} = \pmatrix{B_{11} & 0\\ B_{21} & -I} \implies\\ \pmatrix{-X_{21}\\ A_{22}X_{22}} = \pmatrix{B_{11} & 0\\B_{21} & -I}\implies\\ \begin{cases} X_{21} = -\pmatrix{B_{11} & 0}\\ A_{22}X_{22} = \pmatrix{B_{21} & -I}, \end{cases} $$
which leaves you with only one, smaller system to solve in order to build $X_2$. From there, we have
$$ X_2 = \pmatrix{I & A_{12}\\0 & I}^{-1}X \implies X = \pmatrix{I & A_{12}\\0 & I} X_2 = \pmatrix{X_{21} + A_{12}X_{22}\\ X_{22}}. $$
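As a numerical sanity check (a sketch with arbitrary block sizes and random data, not part of the original answer), the reduction above can be verified with NumPy, with only the smaller $A_{22}$ system actually solved:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 3, 4                                        # hypothetical block sizes
A12 = rng.standard_normal((k, m))
A22 = rng.standard_normal((m, m)) + m * np.eye(m)  # shifted to keep it invertible
B11 = rng.standard_normal((k, k))
B21 = rng.standard_normal((m, k))

A1 = np.block([[-np.eye(k), A12],
               [np.zeros((m, k)), A22]])
B1 = np.block([[B11, np.zeros((k, m))],
               [B21, -np.eye(m)]])

# Only the smaller system A22 X22 = [B21  -I] needs an actual solve
X21 = -np.hstack([B11, np.zeros((k, m))])
X22 = np.linalg.solve(A22, np.hstack([B21, -np.eye(m)]))

# Map back: X = [X21 + A12 X22; X22]
X = np.vstack([X21 + A12 @ X22, X22])
assert np.allclose(A1 @ X, B1)
```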
Here's how we can generalize this approach. Let $i_1<i_2<\cdots<i_k$ denote the indices $i$ for which $b[i] = 1$, and let $j_1<j_2<\cdots<j_{n-k}$ denote the rest. Let $P$ denote the matrix whose columns are the columns of the identity matrix, but in the order $i_1,i_2,\dots,i_k,j_1,j_2,\dots,j_{n-k}$.
I claim that the matrix $P$ is a permutation matrix (so that $P^{-1} = P^T$) designed so that $AP$ and $BP$ are of the correct form for the above approach. With that, we can solve the system $$ (AP)\underbrace{(P^{-1}XP)}_{X_3} = BP. $$ for $X_3$, then recover $X = PX_3P^T$.
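To illustrate the claim (a sketch with a hypothetical pattern `b` and random data, not part of the original answer), one can build $P$ from the indices of the 1's and 0's and check that $X = PX_3P^T$ recovers the solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
b = np.array([1, 0, 1, 1, 0, 0])         # hypothetical 0/1 pattern
order = np.concatenate([np.flatnonzero(b == 1), np.flatnonzero(b == 0)])
P = np.eye(n)[:, order]                  # columns of I in the order i_1..i_k, j_1..j_{n-k}
assert np.allclose(P @ P.T, np.eye(n))   # permutation matrix: P^{-1} = P^T

A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to keep it invertible
X = rng.standard_normal((n, n))
B = A @ X

X3 = np.linalg.solve(A @ P, B @ P)       # solve (AP) X3 = BP
assert np.allclose(X, P @ X3 @ P.T)      # recover X = P X3 P^T
```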