Here is a simple example:
$$\min\{ x_1 + 1.1x_2 : x_1 + x_2 = 1, x_2 \geq 0.1, x \in \{0,1\}^2 \}.$$
The solution is $(0,1)$, with objective value $1.1$, whereas the solution of the relaxation is $(0.9,0.1)$, with objective value $1.01$. Take $J=\{1\}$ and notice $v^*_J = 0$ while $\tilde{v}_J = 0.9$.
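A quick sanity check of this example (just a sketch, using scipy.optimize.linprog for the relaxation and brute force for the binary problem):

```python
# Sanity check of the example: brute-force the binary problem and solve
# the LP relaxation with scipy.
from itertools import product
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.1])

# Binary problem: x1 + x2 = 1, x2 >= 0.1, x in {0,1}^2
best = min(
    (c @ np.array(x), x)
    for x in product([0, 1], repeat=2)
    if x[0] + x[1] == 1 and x[1] >= 0.1
)
print("binary optimum:", best)                # -> value 1.1 at x = (0, 1)

# LP relaxation: x1 + x2 = 1, x2 >= 0.1, 0 <= x <= 1
res = linprog(c, A_ub=[[0.0, -1.0]], b_ub=[-0.1],
              A_eq=[[1.0, 1.0]], b_eq=[1.0], bounds=[(0, 1), (0, 1)])
print("relaxation optimum:", res.fun, res.x)  # -> value 1.01 at x ~ [0.9, 0.1]
```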
You will need to use the Karush-Kuhn-Tucker conditions. The linearity constraint qualification (LCQ) holds since all the constraints are linear.
It is perhaps more helpful to express the problem in vector form, which gives
$$\begin{align*}\max_{\boldsymbol x}\quad&\ln(1+\boldsymbol a^\top \boldsymbol x)-\boldsymbol b^\top \boldsymbol x\\
\text{s.t.}\quad&\boldsymbol a^\top \boldsymbol x-N\leq 0&[\lambda]\\
&\boldsymbol x-\boldsymbol e\leq \boldsymbol 0&[\boldsymbol\mu]\\
&-\boldsymbol x\leq \boldsymbol 0&[\boldsymbol\nu]
\end{align*}$$
KKT necessary conditions on an optimum $\boldsymbol x^*$ are
$$\begin{align*}
\frac{1}{1+\boldsymbol a^\top \boldsymbol x^*}\boldsymbol a-\boldsymbol b=\lambda\boldsymbol a+\boldsymbol\mu-\boldsymbol \nu&&\text{Stationarity}\\
\boldsymbol a^\top \boldsymbol x^*-N\leq 0&&\text{Primal Feasibility}\\
\boldsymbol x^*-\boldsymbol e\leq \boldsymbol 0&&\text{Primal Feasibility}\\
-\boldsymbol x^*\leq \boldsymbol 0&&\text{Primal Feasibility}\\
\lambda\geq 0 &&\text{Dual Feasibility}\\
\boldsymbol\mu\geq \boldsymbol 0 &&\text{Dual Feasibility}\\
\boldsymbol\nu\geq \boldsymbol 0 &&\text{Dual Feasibility}\\
\lambda[\boldsymbol a^\top \boldsymbol x^*-N]=0&&\text{Complementarity}\\
\boldsymbol\mu^\top[\boldsymbol x^*-\boldsymbol e]=\boldsymbol 0&&\text{Complementarity}\\
\boldsymbol\nu^\top[-\boldsymbol x^*]=\boldsymbol 0&&\text{Complementarity}
\end{align*}$$
The stationarity condition can be rearranged into a linear system $A\boldsymbol x^*=\boldsymbol d$, where $\boldsymbol d=(1-\lambda)\boldsymbol a -\boldsymbol \mu+\boldsymbol \nu-\boldsymbol b$ and $A=(\boldsymbol a-\boldsymbol d)\boldsymbol a ^\top$. Note, however, that $A$ is a rank-one matrix, so for $I>1$ it is not invertible; when the system is consistent it only determines the scalar $\boldsymbol a^\top\boldsymbol x^*$, and the individual components of $\boldsymbol x^*$ must be pinned down using the complementarity and feasibility conditions.
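For the record, here is the rearrangement behind that system. Starting from stationarity, moving $\boldsymbol b$ to the right-hand side and multiplying through by $1+\boldsymbol a^\top\boldsymbol x^*$,
$$\boldsymbol a=(1+\boldsymbol a^\top\boldsymbol x^*)\,(\lambda\boldsymbol a+\boldsymbol b+\boldsymbol\mu-\boldsymbol\nu)
\quad\Longrightarrow\quad
\underbrace{\boldsymbol a-(\lambda\boldsymbol a+\boldsymbol b+\boldsymbol\mu-\boldsymbol\nu)}_{=\,\boldsymbol d}=(\lambda\boldsymbol a+\boldsymbol b+\boldsymbol\mu-\boldsymbol\nu)\,\boldsymbol a^\top\boldsymbol x^*=(\boldsymbol a-\boldsymbol d)\,\boldsymbol a^\top\boldsymbol x^*,$$
which is exactly $A\boldsymbol x^*=\boldsymbol d$.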
Then we just need to determine the dual variables $\lambda$, $\boldsymbol \mu$ and $\boldsymbol \nu$. Unfortunately, this is where things start to depend too heavily on the precise values of the constants $\boldsymbol a$, $\boldsymbol b$, and $N$. You essentially need to do case-checking on the complementarity conditions: for $\lambda$ and for each individual $\mu_i$ and $\nu_i$, decide which are zero and which are strictly positive. That gives $2^{2I+1}$ cases to check against the second-order sufficient condition, which is that $\frac{\boldsymbol a\boldsymbol a^\top}{(1+\boldsymbol a ^\top\boldsymbol x^*)^2}$ (the negative of the objective's Hessian) is positive semi-definite over the set of directions orthogonal to the gradients of the active constraints, i.e. over
$$\left\{\boldsymbol s\in\mathbb{R}^I: \begin{cases}\boldsymbol a^\top\boldsymbol s=0&\text{if }\lambda>0\\
s_i=0&\text{if }\mu_i>0,\quad\forall{i}\in[1,I]\\
s_i=0&\text{if }\nu_i>0,\quad\forall{i}\in[1,I]\end{cases}\right\}.$$
Needless to say, if all these constants do not have fixed values, then obtaining a nice closed-form expression is elusive. Solvers automate this process, sometimes taking clever shortcuts. This kind of explicit work can be helpful for understanding the structure of the problem and the general "shape" of the solution, but not for the solution itself. The problem is stated in far too general a form for that (consider, by way of analogy, the knapsack problem, where the solution depends heavily on the costs and volumes involved).
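For illustration only, here is a minimal numerical sketch (assuming arbitrary made-up values for $\boldsymbol a$, $\boldsymbol b$, and $N$) that lets a general-purpose solver carry out the case analysis; it is not a substitute for the closed-form reasoning above.

```python
# A minimal numerical sketch (not a closed form): pick some constants
# a, b, N -- these values are illustrative assumptions only -- and let
# a solver do the KKT case analysis.
import numpy as np
from scipy.optimize import minimize

a = np.array([2.0, 1.0, 3.0])   # assumed coefficients
b = np.array([0.5, 0.1, 0.9])   # assumed costs
N = 4.0                         # assumed budget

def neg_objective(x):
    # minimize the negative of ln(1 + a'x) - b'x
    return -(np.log(1.0 + a @ x) - b @ x)

constraints = [{"type": "ineq", "fun": lambda x: N - a @ x}]  # a'x <= N
bounds = [(0.0, 1.0)] * len(a)                                # 0 <= x <= e

res = minimize(neg_objective, x0=np.full(len(a), 0.5),
               bounds=bounds, constraints=constraints, method="SLSQP")
print("x* =", res.x, " objective =", -res.fun)
```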
Best Answer
I would not describe the solution to the Lagrangian relaxation as an "approximate solution" to the original problem, since it need not be feasible in the original problem.
In general, the objective value of the LR may or may not be a tight bound on the objective value of the original problem. There is a well-known result that if the extreme points of the convex hull of the feasible set defined by the constraints you did not relax are all integer valued, the Lagrangian bound (from the optimal solution to the LR problem) will be identical to the value of the LP relaxation of the original problem. In the case of the knapsack problem, where you relaxed the only constraint, that convex hull would just be $[0,1]^n$, which definitely satisfies the condition. So I would expect the LR bound for the knapsack to match the LP bound.
When the original problem contains only binary variables (as in your case) and all primal constraints are relaxed in the Lagrangian relaxation (which need not always be true, but is in your case), evaluating the Lagrangian objective for a single choice of $\lambda$ takes $O(m\cdot n)$ time, where $m$ is the number of constraints (one for the knapsack problem) and $n$ is the number of variables. Note: you do not just set a single $x_i=1$; you set $x_i=1$ for all $i$ such that the coefficient of $x_i$ in the LR is positive.
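As a sketch (with made-up profits $p$, weights $w$, and capacity $C$, taking the knapsack as $\max\, p^\top x$ s.t. $w^\top x \le C$, $x\in\{0,1\}^n$), evaluating the relaxation at one value of $\lambda$ looks like this:

```python
# Sketch: evaluate the Lagrangian relaxation of a 0-1 knapsack at a given
# lambda.  Relaxing w'x <= C gives  L(lam) = max_x (p - lam*w)'x + lam*C,
# which is solved by setting x_i = 1 exactly when p_i - lam*w_i > 0.
import numpy as np

p = np.array([10.0, 7.0, 4.0, 3.0])   # assumed profits
w = np.array([5.0, 4.0, 3.0, 2.0])    # assumed weights
C = 8.0                               # assumed capacity

def lagrangian_bound(lam):
    coeff = p - lam * w                # per-variable coefficients in the LR
    x = (coeff > 0).astype(float)      # keep every item with positive coefficient
    return coeff @ x + lam * C, x

for lam in (0.0, 1.0, 2.0):
    val, x = lagrangian_bound(lam)
    print(f"lambda={lam}: bound={val}, x={x}")
# The smallest such bound over lambda >= 0 is the Lagrangian dual bound.
```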
I'm not aware of any result that the number of such evaluations (the number of different values of $\lambda$ to be considered) is in general polynomial in problem size, although I suppose it could be.