For example,
$$\text{minimise}\quad f(x) = \max(3x-4,\,2x-1)$$
is equivalent to
$$\begin{align*}\text{minimise}\quad&t\\
\text{subject to}\quad&3x-4 \le t\\
&2x-1 \le t\end{align*}$$
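This epigraph reformulation can be handed straight to an LP solver. Here is a minimal sketch using `scipy.optimize.linprog`; note that the added bound $x \ge 0$ is an assumption of the sketch, since over all of $\mathbb{R}$ this toy objective is unbounded below:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: z = [x, t]. Minimise t subject to 3x - 4 <= t and 2x - 1 <= t,
# rewritten in linprog's A_ub @ z <= b_ub form:
#   3x - t <= 4
#   2x - t <= 1
c = np.array([0.0, 1.0])                  # objective: minimise t
A_ub = np.array([[3.0, -1.0],
                 [2.0, -1.0]])
b_ub = np.array([4.0, 1.0])
# x >= 0 is an assumption added to keep the example bounded; t is free.
bounds = [(0, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)   # optimal (x, t)
```

With these data the optimum is $x=0$, $t=-1=\max(-4,-1)$, which matches evaluating $f(0)$ directly.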
Note that
$$f(x) = 3x-4 \iff x \ge 3.$$
So if we take $x = 5$, then $f(5) = 11$ and the constraints become
$$11 \le t,\qquad 9 \le t,$$
so the smallest feasible $t$ is exactly $t = 11 = f(5)$.
You will need to use the Karush-Kuhn-Tucker (KKT) conditions. The linearity constraint qualification (LCQ) holds, since all the constraints are linear.
Expressing the problem vectorially is perhaps more helpful; this gives
$$\begin{align*}\max_{\boldsymbol x}\quad&\ln(1+\boldsymbol a^\top \boldsymbol x)-\boldsymbol b^\top \boldsymbol x\\
\text{s.t.}\quad&\boldsymbol a^\top \boldsymbol x-N\leq 0&[\lambda]\\
&\boldsymbol x-\boldsymbol e\leq \boldsymbol 0&[\boldsymbol\mu]\\
&-\boldsymbol x\leq \boldsymbol 0&[\boldsymbol\nu]
\end{align*}$$
The KKT necessary conditions at an optimum $\boldsymbol x^*$ are
$$\begin{align*}
\frac{1}{1+\boldsymbol a^\top \boldsymbol x^*}\boldsymbol a-\boldsymbol b=\lambda\boldsymbol a+\boldsymbol\mu-\boldsymbol \nu&&\text{Stationarity}\\
\boldsymbol a^\top \boldsymbol x^*-N\leq 0&&\text{Primal Feasibility}\\
\boldsymbol x^*-\boldsymbol e\leq \boldsymbol 0&&\text{Primal Feasibility}\\
-\boldsymbol x^*\leq \boldsymbol 0&&\text{Primal Feasibility}\\
\lambda\geq 0 &&\text{Dual Feasibility}\\
\boldsymbol\mu\geq \boldsymbol 0 &&\text{Dual Feasibility}\\
\boldsymbol\nu\geq \boldsymbol 0 &&\text{Dual Feasibility}\\
\lambda[\boldsymbol a^\top \boldsymbol x^*-N]=0&&\text{Complementarity}\\
\boldsymbol\mu^\top[\boldsymbol x^*-\boldsymbol e]=\boldsymbol 0&&\text{Complementarity}\\
\boldsymbol\nu^\top[-\boldsymbol x^*]=\boldsymbol 0&&\text{Complementarity}
\end{align*}$$
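These conditions can be sanity-checked numerically on a small instance. In the sketch below, the values of $\boldsymbol a$, $\boldsymbol b$, $N$ are arbitrary illustrative choices (not from the original problem), with $\boldsymbol e$ taken as the all-ones vector; it solves the problem with `scipy.optimize.minimize` and verifies primal feasibility:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (assumptions for this sketch, not given in the problem)
a = np.array([2.0, 1.0])
b = np.array([0.5, 0.3])
N = 1.5

def neg_obj(x):                 # minimise the negative of the concave objective
    return -(np.log(1.0 + a @ x) - b @ x)

def neg_grad(x):
    return -(a / (1.0 + a @ x) - b)

res = minimize(neg_obj, x0=np.array([0.1, 0.1]), jac=neg_grad,
               method="SLSQP",
               bounds=[(0.0, 1.0)] * 2,                  # 0 <= x <= e
               constraints=[{"type": "ineq",
                             "fun": lambda x: N - a @ x,  # a^T x <= N
                             "jac": lambda x: -a}],
               options={"ftol": 1e-12})
x_star = res.x
# Primal feasibility checks
assert a @ x_star <= N + 1e-8
assert np.all(x_star >= -1e-8) and np.all(x_star <= 1 + 1e-8)
print(x_star)   # for these data, the budget constraint a^T x <= N is active
```

For these data the optimiser lands on $\boldsymbol x^*\approx(0.75,\,0)$ with the budget constraint active; plugging into stationarity gives $\lambda=0.15$, $\nu_2=0.05$, and all other multipliers zero, consistent with dual feasibility and complementarity.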
The stationarity condition gives a linear system of equations $A\boldsymbol x^*=\boldsymbol d$, where $\boldsymbol d=(1-\lambda)\boldsymbol a -\boldsymbol \mu+\boldsymbol \nu-\boldsymbol b$ and $A=(\boldsymbol a-\boldsymbol d)\boldsymbol a^\top$. Note, however, that $A$ has rank at most one, so $A^{-1}$ exists only in the one-dimensional case; in general the system constrains $\boldsymbol x^*$ only through the scalar $\boldsymbol a^\top\boldsymbol x^*$ and must be solved together with the active constraints.
Then we just need to determine the dual variables $\lambda$, $\boldsymbol \mu$ and $\boldsymbol \nu$. Unfortunately, this is where things start to depend too much on the precise values of the constants $\boldsymbol a$, $\boldsymbol b$, and $N$. You will basically need to do case-checking on the complementarity conditions, for each individual dual variable $\mu_i$ and $\nu_i$, together with $\lambda$, to decide which are zero and which are non-zero. That gives $2^{2I+1}$ cases to check against the second-order sufficient condition, which is that $\frac{\boldsymbol a\boldsymbol a^\top}{(1+\boldsymbol a ^\top\boldsymbol x^*)^2}$ (the negative Hessian of the objective) is positive semi-definite over the set of directions orthogonal to the gradients of the active constraints, i.e. over
$$\left\{\boldsymbol s\in\mathbb R^I: \begin{cases}\boldsymbol a^\top \boldsymbol s=0&\text{if }\lambda>0\\
s_i=0&\text{if }\mu_i>0,\quad\forall{i}\in[1,I]\\
s_i=0&\text{if }\nu_i>0,\quad\forall{i}\in[1,I]\end{cases}\right\}$$
(Since $\boldsymbol a\boldsymbol a^\top$ is always positive semi-definite, this condition holds automatically here: the objective is concave.)
Needless to say, if all these constants do not have fixed values, then obtaining a nice closed-form expression is elusive. The solvers automate this process, sometimes taking clever shortcuts. This kind of explicit work can be helpful for understanding the structure of the problem and the general "shape" of the solution, but not for the solution itself. The problem is in far too much of a general form for that (consider, by way of analogy, the knapsack problem, where the solution depends heavily on the costs and volumes involved).
Best Answer
This problem is the linear programming (LP) relaxation of a 0-1 knapsack problem, and an optimal solution is obtained by sorting the indices in descending order of the ratio $(a_i-b_i)/a_i$ and setting the $\pi_i$ greedily in that order. This greedy algorithm is optimal for the LP relaxation but only a heuristic if you require $\pi_i \in \{0,1\}$.
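A minimal sketch of that greedy rule, under the assumptions that all $a_i > 0$, the budget constraint is $\sum_i a_i \pi_i \le N$, and the LP relaxation allows $0 \le \pi_i \le 1$:

```python
def greedy_lp_knapsack(a, b, N):
    """Greedy solution of the LP relaxation: sort indices by (a_i - b_i)/a_i
    descending, fill each pi_i up to 1 while budget remains, so at most one
    item ends up fractional. Assumes all a_i > 0."""
    n = len(a)
    pi = [0.0] * n
    order = sorted(range(n), key=lambda i: (a[i] - b[i]) / a[i], reverse=True)
    remaining = N
    for i in order:
        # Skipping non-positive ratios is an assumption of this sketch:
        # such items cannot improve the objective.
        if (a[i] - b[i]) / a[i] <= 0 or remaining <= 0:
            break
        pi[i] = min(1.0, remaining / a[i])
        remaining -= pi[i] * a[i]
    return pi

print(greedy_lp_knapsack([2.0, 1.0], [0.5, 0.3], 1.5))   # -> [0.75, 0.0]
```

On this toy instance the ratios are $0.75$ and $0.7$, so the first item is filled fractionally to exhaust the budget, matching the KKT analysis above.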