Adjoints of closed densely-defined linear operators on a Hilbert space $X$ are nice, once you get used to working in the graph space. In fact, the proofs are easier using these techniques for general closed densely-defined operators than the special-case proofs offered for the bounded case. John von Neumann introduced this way of working with densely-defined linear operators on a Hilbert space $X$. I'll explain his approach.
A Graph: The first thing to observe is that a subspace $\mathcal{M}\subseteq X\times X$ is the graph $\mathcal{G}(L)=\{ \langle x, Lx\rangle : x\in\mathcal{D}(L)\}$ of a linear operator $L : \mathcal{D}(L)\subseteq X\rightarrow X$ iff $\langle 0,y\rangle \in \mathcal{M}$ implies $y=0$. And $L$ is a closed linear operator iff its graph is closed in the product space $X\times X$. If the subspace $\mathcal{M}$ is a graph, then the domain of $L$ is the set of all first coordinates of elements of $\mathcal{M}$, and it is easy to show that the map sending $x \in \mathcal{D}(L)$ to the corresponding second coordinate is linear because $\mathcal{M}$ is a linear subspace.
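In finite dimensions the graph criterion is easy to test numerically: a subspace of $X\times X$ fails to be a graph exactly when projecting onto the first coordinate drops its dimension. A small numpy sketch (the function name and the examples are my own):

```python
import numpy as np

def is_graph(M):
    """M: a (2n, k) matrix whose columns span a subspace of X x X.
    The span is a graph iff <0, y> in it forces y = 0, i.e. iff
    projecting onto the first coordinate loses no dimension."""
    n = M.shape[0] // 2
    return np.linalg.matrix_rank(M[:n]) == np.linalg.matrix_rank(M)

L = np.array([[1.0, 2.0], [3.0, 4.0]])
G = np.vstack([np.eye(2), L])       # graph of L: columns <e_i, L e_i>
print(is_graph(G))                  # True

M = np.zeros((4, 1)); M[2, 0] = 1   # span contains <0, e_1>, y != 0
print(is_graph(M))                  # False
```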
Closable: A linear operator $L : \mathcal{D}(L)\subseteq X\rightarrow X$ is closable iff the closure $\mathcal{G}(L)^{c}$ in $X\times X$ of its graph $\mathcal{G}(L)$ is a graph. Equivalently, if $\{ x_{n} \}\subseteq \mathcal{D}(L)$ converges to $0$ and $\{ Lx_{n} \}$ converges to some $y$, then $y=0$. That's the condition for a linear operator to have a closed extension, and it is equivalent to the requirement that the closure of the graph of $L$ be the graph of a linear operator.
Inverses: If $L$ is a linear operator, then $L^{-1}$ exists iff the transpose of the graph of $L$ is a graph. Using the transpose map $\tau\langle x,y\rangle = \langle y,x\rangle$,
$$
\tau \mathcal{G}(L)=\mathcal{G}(L^{-1}),
$$
provided both exist. Notice that if $L^{-1}$ exists, then $L^{-1}$ is closed iff $L$ is closed because $\tau$ is a unitary map on $X\times X$ with $\tau^{2}=I$.
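In finite dimensions the transpose identity is easy to see pointwise: swapping the coordinates of a point on $\mathcal{G}(L)$ gives a point on $\mathcal{G}(L^{-1})$. A minimal numpy sketch (the particular matrix is my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # comfortably invertible
x = rng.standard_normal(3)
y = L @ x

# tau<x, Lx> = <y, x> with x = L^{-1} y, i.e. a point on G(L^{-1})
assert np.allclose(np.linalg.inv(L) @ y, x)
```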
Adjoints: The real power of using graphs comes in defining the adjoint. If $L$ is a closed densely-defined linear operator, then $L$ has a closed densely-defined adjoint $L^{\star}$ whose graph is
$$
\mathcal{G}(L^{\star})=J[\mathcal{G}(L)^{\perp}],
$$
where the orthogonal complement is taken in $X\times X$ and $J$ is the unitary symplectic transpose
$$
J\langle x, y\rangle = \langle y, -x\rangle.
$$
Notice that $J^{2}=-I$. Because $\tau$ and $J$ are unitary, they commute with the action of taking orthogonal complement. So you may also write
$$
\mathcal{G}(L^{\star})=J[\mathcal{G}(L)^{\perp}]=[J\mathcal{G}(L)]^{\perp}.
$$
Of course the transpose operator $\tau$ also commutes with the action of taking the orthogonal complement.
Commutativity: Also note that $\tau J = - J\tau$, which means $\tau J\mathcal{M}=J\tau\mathcal{M}$ for subspaces $\mathcal{M}$ of $X\times X$ (a subspace is invariant under multiplication by $-1$). So, when you are considering the action of $J$, $\tau$, and $\perp$ on subspaces $\mathcal{M}\subseteq X\times X$, you can freely interchange these operations. That makes life very simple.
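For a matrix, $L^{\star}$ is the conjugate transpose, and the graph identity $\mathcal{G}(L^{\star})=J[\mathcal{G}(L)^{\perp}]$ can be checked numerically by comparing orthogonal projectors onto the two subspaces. A numpy sketch of this sanity check (all names are mine):

```python
import numpy as np

def proj(B):
    """Orthogonal projector onto the column span of B."""
    Q, _ = np.linalg.qr(B)
    return Q @ Q.conj().T

rng = np.random.default_rng(1)
n = 3
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Graph of L as the column span of [I; L] in C^{2n}
G = np.vstack([np.eye(n), L])

# G^perp = null space of G^H, spanned by the trailing right singular vectors
_, _, Vh = np.linalg.svd(G.conj().T)
Gperp = Vh[n:].conj().T

# J<x, y> = <y, -x> as a block matrix on C^{2n}
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])
JGperp = J @ Gperp

# In finite dimensions L* is the conjugate transpose, with graph [I; L^H]
Gstar = np.vstack([np.eye(n), L.conj().T])

assert np.allclose(proj(JGperp), proj(Gstar))
```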
Your Example: Suppose that $L$ is a closed densely-defined linear operator with a densely-defined inverse $L^{-1}$. Automatically $L^{-1}$ is closed because $L$ is closed (their graphs are transposes of each other). That means that $L^{-1}$ will have a closed densely-defined adjoint $(L^{-1})^{\star}$. As you might guess, $(L^{-1})^{\star}=(L^{\star})^{-1}$.
To show $(L^{-1})^{\star}=(L^{\star})^{-1}$: First, you must show that $L^{\star}$ has an inverse, which comes down to showing $\tau\mathcal{G}(L^{\star})$ is a graph:
$$
\tau\mathcal{G}(L^{\star})=\tau[J\mathcal{G}(L)^{\perp}]=
[J\tau\mathcal{G}(L)]^{\perp}=
[J\mathcal{G}(L^{-1})]^{\perp}=\mathcal{G}((L^{-1})^{\star}).
$$
Obviously the subspace on the far right is a graph. So $L^{\star}$ has an inverse and $(L^{\star})^{-1}=(L^{-1})^{\star}$. This proves the following:
Lemma: Let $H$ be a Hilbert space and $L$ a densely-defined closed linear operator on $H$. If $L$ has a densely-defined inverse $L^{-1}$, then $L^{\star}$ has a densely-defined inverse, and $(L^{\star})^{-1}=(L^{-1})^{\star}$.
Note: If $L^{-1}$ is defined everywhere then it is bounded by the closed graph theorem. In that case $(L^{\star})^{-1}$ is also defined everywhere and is bounded because of the graph equation stated in the lemma. This is the case in your problem for $L=U-\lambda I$ because resolvents are, by definition, defined everywhere and are bounded.
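In finite dimensions every operator is bounded and everywhere defined, and the adjoint is the conjugate transpose, so the lemma reduces to the matrix identity $(L^{-1})^{H}=(L^{H})^{-1}$. A quick numerical check (matrix chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # invertible matrix

lhs = np.linalg.inv(L).conj().T   # (L^{-1})*
rhs = np.linalg.inv(L.conj().T)   # (L*)^{-1}
assert np.allclose(lhs, rhs)
```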
Added in Response to your Addition: Another big fact: if $A$ is closed and densely-defined, then $A^{\star\star}=A$. This is because $(\mathcal{M}^{\perp})^{\perp}=\mathcal{M}$ for a closed subspace $\mathcal{M}$ of a Hilbert space such as $H\times H$. This allows us to extend the previous lemma.
Lemma: Let $L$ be a closed densely-defined linear operator on a Hilbert space $H$ with adjoint $L^{\star}$. Then $L^{-1}$ exists as a densely-defined linear operator iff $(L^{\star})^{-1}$ exists as a densely-defined linear operator and, in either case, $(L^{-1})^{\star}=(L^{\star})^{-1}$.
Proof: I showed you that if $L$ has a densely-defined inverse, then $L^{\star}$ has a densely defined inverse and $(L^{-1})^{\star}=(L^{\star})^{-1}$. Conversely, if $L^{\star}$ has a densely-defined inverse, then $(L^{\star})^{\star}=L$ has a densely-defined inverse and $((L^{\star})^{-1})^{\star}=L^{-1} \implies (L^{\star})^{-1}=(L^{-1})^{\star}$. $\;\;\Box$
As a corollary: If $L$ is a closed densely-defined linear operator on a Hilbert space, then $L-\lambda I$ has a densely-defined inverse iff $L^{\star}-\overline{\lambda}I$ has a densely-defined inverse and, in either case,
$$
((L-\lambda I)^{-1})^{\star}=(L^{\star}-\overline{\lambda}I)^{-1}.
$$
In particular, $\lambda \in\rho(L)$ iff $\overline{\lambda}\in\rho(L^{\star})$, and the resolvents satisfy $R_{L}(\lambda)^{\star}=R_{L^{\star}}(\overline{\lambda})$. This last equation holds very generally, in the sense that one side exists iff the other does and, in that case, the two are always equal. So, if $L=L^{\star}$, then $\lambda\in\rho(L)$ iff $\overline{\lambda}\in\rho(L)$ and, in that case, $R_{L}(\lambda)^{\star}=R_{L}(\overline{\lambda})$.
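The self-adjoint case of the resolvent identity is easy to verify numerically: for a Hermitian matrix $A$ and non-real $\lambda$ (which is automatically in the resolvent set), $R_{A}(\lambda)^{\star}=R_{A}(\overline{\lambda})$. A numpy sketch (matrix and $\lambda$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                 # Hermitian, so A* = A
lam = 1.0 + 2.0j                   # non-real, hence in the resolvent set

R = np.linalg.inv(A - lam * np.eye(4))              # R_A(lambda)
Rbar = np.linalg.inv(A - np.conj(lam) * np.eye(4))  # R_A(lambda-bar)
assert np.allclose(R.conj().T, Rbar)
```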
This is true. It might remain true under more general assumptions, but the fact that $\mathcal N(\lambda-A)^\perp$ is the orthogonal complement of an eigenspace makes the relevant calculation very simple here.
Let $z\in \mathcal D(A\lvert_{\mathcal N(\lambda-A)^\perp}^*)$; that is, $z\in\mathcal N(\lambda-A)^\perp$ and $|\langle z , Ax\rangle|\le\|x\|\,C_z$ for all $x\in\mathcal D(A)\cap \mathcal N(\lambda -A)^\perp$. We want to show that $z\in\mathcal D(A\lvert_{\mathcal N(\lambda-A)^\perp})$. This, together with symmetry, shows that $A\lvert_{\mathcal N(\lambda-A)^\perp}$ is self-adjoint.
Now for any $y\in\mathcal D(A)$ you have $y=x+v$ with $v\in \mathcal D(A)\cap \overline{\mathcal N(\lambda-A)}= \mathcal N(\lambda -A)$, that is, $v$ is an eigenvector of $A$ for the eigenvalue $\lambda$, and $x\in \mathcal D(A)\cap\mathcal N(\lambda-A)^\perp$. Note that $\|x+v\|^2=\|x\|^2+\|v\|^2$ since $x$ and $v$ are perpendicular. It follows that:
$$\langle z, Ay\rangle = \langle z,Ax\rangle +\lambda\langle z,v\rangle = \langle z,Ax\rangle$$
since $z$ is perpendicular to the $\lambda$-eigenvectors of $A$. Hence
$$|\langle z, Ay\rangle| \le \|x\|\,C_z \le \|y\|\,C_z$$
and $z\in\mathcal D(A^*)$. But $\mathcal D(A^*)=\mathcal D(A)$, hence $z\in \mathcal D(A\lvert_{\mathcal N(\lambda-A)^\perp})$ and $A\lvert_{\mathcal N(\lambda -A)^\perp}$ is self-adjoint.
For ease of notation I will denote $\mathcal N(\lambda-A)$ by $N$ and $\mathcal N(\lambda-A)^\perp$ by $N'$. You then have $H=N\oplus N'$ and the operator $A$ respects this decomposition as $A$ is self-adjoint, so you can write $A=(\lambda\Bbb 1\lvert_{N})\oplus B$ where $B:N'\to N'$. The operator $B$ is completely arbitrary, so long as it is self-adjoint and doesn't have an eigenvalue at $\lambda$.
You now want conditions on $A$ so that the operator $B-\lambda$ is invertible. The conditions you list, such as $\lambda -A$ being non-negative, are not sufficient to guarantee this. As an example, consider $\lambda=0$ and $H=L^2([0,1],dx)\oplus \Bbb R$, and let $A(f,z) = (x\cdot f,0)$. This is a non-negative operator, and $B$ is the multiplication-by-$x$ operator on $L^2([0,1],dx)$, which is not invertible.
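The failure of invertibility in this example can be made concrete: discretizing multiplication by $x$ on a grid gives a non-negative diagonal matrix whose smallest entry tends to $0$, so each discretized inverse exists but their norms blow up as the grid is refined. A rough numpy sketch of this (grid sizes are my own choice):

```python
import numpy as np

norms = []
for n in (10, 100, 1000):
    x = (np.arange(n) + 0.5) / n     # midpoints of a grid on [0, 1]
    B = np.diag(x)                   # discretized multiplication by x
    norms.append(np.linalg.norm(np.linalg.inv(B), 2))

# ||B^{-1}|| = 1/min(x) = 2n, so the norms grow without bound
print(norms)
```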
You can continue to play this game: for each candidate condition on $A$, find a suitable operator $B$ that defeats it ($B$ being completely arbitrary except for the conditions of being self-adjoint and not having $\lambda$ as an eigenvalue).
So when is $B-\lambda$ invertible? The relevant criterion is the following: $B-\lambda$ is invertible iff $\lambda$ is an isolated point of $\sigma(A)$.
This follows directly from $$\sigma(B)=\overline{\sigma(A)\setminus\{\lambda\}}.$$ So we should think about why that equation is true. First note that in our decomposition $A=(\lambda\Bbb1\lvert_N)\oplus B$ you have $\tilde\lambda-A= (\tilde \lambda- \lambda)\Bbb1\lvert_{N}\oplus (\tilde \lambda -B)$. This is invertible if and only if both operator summands are invertible, so for $\tilde\lambda\neq\lambda$ you have that $\tilde\lambda -A$ is invertible iff $\tilde\lambda -B$ is invertible; in particular, $\sigma(B)$ can differ from $\sigma(A)$ by at most the point $\lambda$. However, $\sigma(B)$ must be closed, so if $\lambda$ is not an isolated point of $\sigma(A)$, it must be in $\sigma(B)$. On the other hand, if $\lambda$ were an isolated point of $\sigma(A)$ with $\lambda\in \sigma(B)$, then $\lambda$ would be isolated in $\sigma(B)$. But isolated points in the spectrum of a self-adjoint operator are eigenvalues, so $B$ would have to have an eigenvalue at $\lambda$, which is forbidden; hence if $\lambda$ is isolated, it is not in $\sigma(B)$.
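In finite dimensions the closure is trivial and every spectral point is isolated, but the decomposition $\sigma(A)=\{\lambda\}\cup\sigma(B)$ itself is easy to see numerically. A numpy sketch (the particular $B$ is my own choice, with no eigenvalue at $\lambda$):

```python
import numpy as np

lam = 2.0
# A = (lam * 1 on N) (+) B, with B self-adjoint and lam not an eigenvalue of B
B = np.diag([0.0, 1.0, 5.0])
A = np.block([[lam * np.eye(2),   np.zeros((2, 3))],
              [np.zeros((3, 2)),  B]])

specA = set(np.linalg.eigvalsh(A).round(10))
specB = set(np.linalg.eigvalsh(B).round(10))

# sigma(B) = sigma(A) \ {lam}, and B - lam is invertible
assert specB == specA - {lam}
assert np.linalg.matrix_rank(B - lam * np.eye(3)) == 3
```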