Optimal combination of vectors to reach a resultant vector

linear algebra, optimization, vectors

Suppose I have 8 vectors $v_1, v_2, \dots, v_8$.
Assign a coefficient to each vector and form $S=a_1v_1+a_2v_2+\dots+a_8v_8$.

I want $S$ to equal a resultant vector $R$ of my choosing.

The problem is to find the coefficients with the minimal sum such that $S=R$ holds.

Let $C=\sum_{n=1}^{8}a_n$ and $a_1,a_2,\dots,a_8\in \mathbb{N}\cup\{0\}$.

The goal is to find $a_1,a_2,\dots,a_8$ such that $S=R \wedge C=C_{\min}$, where $C_{\min}$ is the minimum value of $C$ for which $S=R$ holds.

Note the condition that the coefficients must be nonnegative integers.

Example

Say I have these vectors: $[1,2], [-1,2], [1,-2], [-1,-2], [2,1], [-2,1], [2,-1], [-2,-1]$. (These are actually representative of all the possible knight moves in chess.)

Let $a_1,a_2,\dots,a_8$ represent the number of times each vector is used when they are added together (its coefficient).
Suppose I want the resultant vector to equal $[3,-3]$.

The optimal 'coefficient set' would be $(0,0,1,0,0,0,1,0)$, i.e. $a_3=1$, $a_7=1$, and all other $a_n=0$, with $C=0+0+1+0+0+0+1+0=2$.
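This corresponds to using $v_3$ and $v_7$ once each, since
$$1\cdot[1,-2]+1\cdot[2,-1]=[3,-3].$$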


Is there an analytical solution, or at the very least a mathematical formula that can be computed numerically (perhaps with multivariable calculus, linear algebra, etc.)?

If that doesn't seem possible, an efficient computational solution will suffice, or at the very least an efficient method where the number of combinations to try doesn't grow exponentially, so that it can be run for large inputs.

If it is a numerical/brute-force method, I want it to be reasonable for up to at least $\max(a_i)=100$, $i\in\{1,2,\dots,8\}$ (you don't need a maximum, but in my attempt it meant that I could reduce the sample space).

My computational attempt: for a set maximum $\max(a_i)$, loop over every combination of coefficients and check the result, to find which combinations reach the target; then search among those for the one with the lowest $C$ value.
As you might expect, the number of combinations grows exponentially, and $\max(a_i)\ge 7$ is already unviable.
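In code, that attempt amounts to something like the following Python sketch (purely illustrative, using the example vectors, target, and a small cap):

# Brute-force enumeration as described above: try every coefficient
# tuple up to a fixed cap and keep the one with the smallest sum.
from itertools import product

vectors = [(1, 2), (-1, 2), (1, -2), (-1, -2),
           (2, 1), (-2, 1), (2, -1), (-2, -1)]
target = (3, -3)
cap = 3  # already (cap + 1)**8 combinations -- the exponential blow-up

best = None
for coeffs in product(range(cap + 1), repeat=len(vectors)):
    s = (sum(a * v[0] for a, v in zip(coeffs, vectors)),
         sum(a * v[1] for a, v in zip(coeffs, vectors)))
    if s == target and (best is None or sum(coeffs) < sum(best)):
        best = coeffs

print(best, sum(best) if best else None)  # (0, 0, 1, 0, 0, 0, 1, 0), C = 2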

Best Answer

As others have suggested, you can solve the problem via integer linear programming (ILP). Let $v_n^i$ be the $i$th component of vector $v_n$, and let $R^i$ be the $i$th component of $R$. With the decision variables $a_n$ that you defined, the problem is to minimize $\sum_n a_n$ subject to: \begin{align} \sum_n v_n^i a_n &= R^i &&\text{for all $i$}\\ a_n &\in \mathbb{N} &&\text{for all $n$} \end{align} You can use any ILP solver to find the optimal values, without artificially putting upper bounds on $a_n$.

Here is SAS code to solve your sample instance:

proc optmodel;
   /* coordinate dimensions and target vector R */
   set DIMS = 1..2;
   num r {DIMS} = [3,-3];

   /* the eight knight-move vectors, one per row */
   set VECTORS = 1..8;
   num v {VECTORS, DIMS} = [1,2,-1,2,1,-2,-1,-2,2,1,-2,1,2,-1,-2,-1];

   /* a_n = number of times vector n is used (nonnegative integer) */
   var A {VECTORS} >= 0 integer;

   /* minimize C, the total number of vectors used */
   min Z = sum {n in VECTORS} A[n];

   /* each coordinate of the weighted sum must equal R */
   con C {i in DIMS}:
      sum {n in VECTORS} v[n,i] * A[n] = r[i];

   solve;
   print A;
quit;

I get an optimal objective value of 2, achieved by $a=(0, 0, 1, 0, 0, 0, 1, 0)$.
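For readers without SAS, here is a minimal sketch of the same ILP in Python using scipy.optimize.milp (this assumes SciPy 1.9 or later and is an illustrative alternative, not part of the original answer):

# Same model: minimize sum(a) subject to sum_n v_n a_n = R, a_n nonnegative integer.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Knight-move vectors as rows; columns are the two coordinates.
V = np.array([[1, 2], [-1, 2], [1, -2], [-1, -2],
              [2, 1], [-2, 1], [2, -1], [-2, -1]])
R = np.array([3, -3])  # target resultant vector

c = np.ones(len(V))                        # objective: sum of coefficients
constraints = LinearConstraint(V.T, R, R)  # sum_n v_n^i * a_n = R^i for all i
integrality = np.ones(len(V))              # every a_n must be integer
bounds = Bounds(lb=0)                      # a_n >= 0, no artificial upper bound

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x, res.fun)  # e.g. [0. 0. 1. 0. 0. 0. 1. 0.] with objective 2.0

Any ILP solver will do; the model is what matters, and it needs no upper bounds on the $a_n$.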
