Your goal should be to find a suitable $2\times3$ matrix which you multiply with your 3D vector to obtain the projected 2D vector.
I assume that $e_1, e_2, e_3$ are of unit length and orthogonal to one another, i.e. that you're dealing with an orthonormal coordinate system in 3D. All 3D vectors are assumed to be expressed in this coordinate system. Without orthogonality, you'd have trouble matching the relation of $e_1', e_2', n$ to that of $e_1,e_2,e_3$, since $n$ is orthogonal to $e_1',e_2'$.
You first need to find a vector $e_1'$ which is of unit length and lies in the plane, but may otherwise be rotated about the origin in an arbitrary way. One way to achieve this is by choosing an arbitrary vector $v$ and computing the cross product $v\times n$. The resulting vector is always orthogonal to $n$. If you are unlucky, $v$ might be parallel to $n$, in which case the cross product has length zero. So in the possible presence of numerical complications (rounding errors mean you won't get an exact zero), it is easiest to try each of $e_1,e_2,e_3$ as $v$ and choose the result of maximal length. Then normalize it to length 1, and you have a suitable $e_1'$.
Next, you compute $e_2'$ as the cross product of $e_1'$ and $n$. Depending on the handedness of $e_1,e_2,e_3$, you'll have to take this cross product in one order or the other to end up with the correct sign for $e_2'$. Simply try both. The result should already have unit length, at least if $n$ has unit length and $e_1'$ was chosen as described above.
Now that you have $e_1'$ and $e_2'$, you can simply use these as the rows of your desired projection matrix. The rationale is as follows: the matrix-times-vector multiplication computes two scalar products, each of which gives the portion of your input vector that lies in the direction of the corresponding row, i.e. the length of the orthogonal projection along that direction. Taken together, these are the coordinates in a coordinate system within the plane, obtained from orthogonal projection onto that plane.
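As a concrete illustration, here is a minimal numpy sketch of this construction (the function name is mine, not part of the answer); as noted above, you may need to swap the arguments of the second cross product to get the orientation you want:

```python
import numpy as np

def projection_matrix(n):
    """Build the 2x3 matrix projecting 3D vectors onto the plane
    through the origin with normal n (hypothetical helper name)."""
    n = n / np.linalg.norm(n)                    # make sure n has unit length
    # Try the standard basis vectors e_1, e_2, e_3 as v and keep the
    # cross product v x n of maximal length, as described above.
    candidates = [np.cross(v, n) for v in np.eye(3)]
    e1p = max(candidates, key=np.linalg.norm)
    e1p = e1p / np.linalg.norm(e1p)              # normalize to unit length
    e2p = np.cross(n, e1p)                       # swap arguments to flip the sign
    return np.vstack([e1p, e2p])                 # rows are e_1', e_2'

P = projection_matrix(np.array([0.0, 0.0, 1.0]))
print(P @ np.array([3.0, 4.0, 5.0]))             # 2D coordinates in the plane
```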
Orthonormalize the normal vectors (e.g. using Gram–Schmidt), then project them out. In vector notation, projecting out a normal vector $n_i$ from a vector $v$ yields
$$
v\to v-(v\cdot n_i)n_i\;;
$$
in matrix notation, this is
$$
v\to\left(I-n_in_i^\top\right)v\;;
$$
and the matrix operation for projecting out all $k$ orthonormal vectors in one go is
$$
v\to\left(I-\sum_in_in_i^\top\right)v\;.
$$
Note that this only works for orthonormal normal vectors $n_i$; you can't apply it directly to your second example, in which the normal vectors aren't orthonormal.
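For instance, here is a minimal numpy sketch of this projection (variable names and test data are mine), with the QR decomposition standing in for Gram–Schmidt to orthonormalize the normals first:

```python
import numpy as np

def project_out(v, normals):
    """Remove the components of v along the span of the given
    (linearly independent) normal vectors."""
    Q, _ = np.linalg.qr(np.asarray(normals).T)   # columns: orthonormal n_i
    P = np.eye(len(v)) - Q @ Q.T                 # I - sum_i n_i n_i^T
    return P @ v

v = np.array([1.0, 2.0, 3.0, 4.0])
normals = [[1.0, 0.0, 0.0, 0.0],                 # not orthonormal yet
           [1.0, 1.0, 0.0, 0.0]]
print(project_out(v, normals))                   # -> [0. 0. 3. 4.]
```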
P.S.: It's been pointed out that the question uses a row vector convention. To transform the answer to that convention, simply transpose everything and toggle the transposition markers, which yields
$$
v\to v\left(I-\sum_in_i^\top n_i\right)\;.
$$
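A quick numerical sanity check of the two conventions (my own sketch): since the projection matrix is symmetric, multiplying from the left or from the right gives the same numbers.

```python
import numpy as np

n = np.array([0.6, 0.8, 0.0])        # a unit normal
P = np.eye(3) - np.outer(n, n)       # I - n n^T, a symmetric matrix
v = np.array([1.0, 2.0, 3.0])
print(P @ v)                         # column convention: v -> (I - n n^T) v
print(v @ P)                         # row convention: v -> v (I - n^T n)
```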
Best Answer
Without just throwing out an answer and saying "this is it", I'll take a rational approach to what a projection is: an (orthogonal) projection of a vector $v$ onto a subspace $S$ is the point $p$ in $S$ such that the distance from $p$ to $v$ is minimized. To solve your problem, we need to know a point in the plane, and we will assume that the plane contains the origin of $\mathbb{R}^k$. The normals to the plane are orthogonal to every point in the plane, so if $x$ is in the plane, then $(n_i,x)=0$ for all $1\leq i\leq k-2$, where $(\cdot,\cdot)$ denotes the standard inner product. We then seek to minimize the function
$$f(x)=||x-v||_2^2=(x,x)-2(x,v)+(v,v)$$
subject to the constraints
$$g_i(x)=(n_i,x)=0$$
This is a classic Lagrange Multipliers problem. We then seek the critical points of the function
$$L(x,\lambda)=(x,x)-2(x,v)+(v,v)-2\sum_{i=1}^{k-2}\lambda_i(n_i,x)$$
(the factor of $2$ in the multiplier term is merely a convenient rescaling of the $\lambda_i$).
The derivative with respect to $x_j$ is
$$\frac{\partial L}{\partial x_j}=2x_j-2v_j-2\sum_{i=1}^{k-2}\lambda_in_{ij}=0\;,$$
or, after dividing by $2$,
$$x_j-v_j-\sum_{i=1}^{k-2}\lambda_in_{ij}=0$$
We can create a system of equations that allows us to solve for the $\lambda$ vector by multiplying by $n_{mj}$ and summing over all $j$ to get
$$(n_m,x)-(n_m,v)-\sum_{i=1}^{k-2}(n_i,n_m)\lambda_i=0$$
or as a linear system,
$$\begin{pmatrix} (n_1,n_1) & \cdots & (n_{k-2},n_1) \\ \vdots & \ddots & \vdots \\ (n_1,n_{k-2}) & \cdots & (n_{k-2},n_{k-2}) \end{pmatrix}\begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_{k-2}\end{pmatrix}=-\begin{pmatrix} (n_1,v) \\ \vdots \\ (n_{k-2},v)\end{pmatrix}$$
where we have used the orthogonality between $x$ and the normals, i.e. $(n_m,x)=0$. In general this linear system would have to be solved as it stands; to proceed in closed form, we now assume that the normals we are given are orthonormal. In that case the matrix is simply the identity matrix, and we have
$$\lambda_i=-(n_i,v)$$
Returning to the derivative of the Lagrangian and substituting in the $\lambda_i$, we obtain
$$x_j=v_j-\sum_{i=1}^{k-2}(n_i,v)n_{ij}$$
and as vectors,
$$x=v-\sum_{i=1}^{k-2}(n_i,v)n_i$$
We have generalized the three-dimensional formula by simply adding more terms to the subtraction, which I view as "removing" the normal components from the vector. Again, this required the normal vectors to be orthonormal.
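For completeness, here is a small numpy sketch of this result (function and variable names are mine). Instead of assuming orthonormal normals, it solves the Gram system derived above numerically, which also covers the non-orthonormal case, provided the normals are linearly independent:

```python
import numpy as np

def project_onto_plane(v, normals):
    """Orthogonal projection of v onto the plane {x : (n_i, x) = 0}."""
    N = np.asarray(normals)                  # rows are n_1, ..., n_{k-2}
    gram = N @ N.T                           # Gram matrix, entries (n_i, n_m)
    lam = np.linalg.solve(gram, -N @ v)      # solve for the multipliers
    return v + N.T @ lam                     # x_j = v_j + sum_i lambda_i n_ij

v = np.array([1.0, 2.0, 3.0, 4.0])
normals = [[1.0, 1.0, 0.0, 0.0],             # deliberately not orthonormal
           [0.0, 0.0, 1.0, 1.0]]
x = project_onto_plane(v, normals)
print(x, np.asarray(normals) @ x)            # second output: residuals ~ 0
```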
Hope this helps!