Yes, I believe a student at the level of this book is able to answer the question, but it is not easy; I think it could serve as a Mathematical Olympiad problem.
I suggest doing this proof by induction on the size $n$ of the $n \times n$ Hilbert matrix $A_n$, starting the induction at $n = 2$. The statement is essentially the following:
$$
A_n = \left(\frac{1}{i + j - 1}\right)_{ij} \text{ is a Hilbert matrix} \;\Rightarrow\; A_n^{-1} \in M_{n \times n}(\mathbb{Z}).
$$
Step 1. Verify the statement by inspection for $n = 2$.
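For the base case the computation is short:
$$
A_2=\begin{pmatrix}1&\tfrac12\\ \tfrac12&\tfrac13\end{pmatrix},\qquad
\det A_2=\tfrac13-\tfrac14=\tfrac1{12},\qquad
A_2^{-1}=12\begin{pmatrix}\tfrac13&-\tfrac12\\ -\tfrac12&1\end{pmatrix}
=\begin{pmatrix}4&-6\\ -6&12\end{pmatrix}\in M_{2\times 2}(\mathbb{Z}).
$$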
Step 2. Verify that if $A_n$ is a Hilbert matrix, then
$$
A_{n+1}=
\begin{pmatrix}
A_n & v^{T} \\
v & \frac{1}{2(n+1)-1}
\end{pmatrix}
$$
is a Hilbert matrix with $v=\left(\frac{1}{n+1},\dots,\frac{1}{2(n+1)-2}\right)$. Now apply matrix inversion in block form, special case 1.
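As a quick sanity check of this block decomposition (a sketch of mine, not from the original answer, in exact rational arithmetic; the helper name `hilbert` is ad hoc):

```python
from fractions import Fraction

def hilbert(n):
    # exact n x n Hilbert matrix, A[i][j] = 1/(i + j + 1) with 0-based i, j
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

n = 3
A = hilbert(n + 1)
# top-left n x n block is A_n
assert [row[:n] for row in A[:n]] == hilbert(n)
# last row (without the corner) is v = (1/(n+1), ..., 1/(2(n+1)-2))
assert A[n][:n] == [Fraction(1, n + k) for k in range(1, n + 1)]
# corner entry is 1/(2(n+1)-1)
assert A[n][n] == Fraction(1, 2 * (n + 1) - 1)
# by symmetry, the last column (without the corner) is v^T
assert [A[i][n] for i in range(n)] == A[n][:n]
```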
In the special case where the blocks are matrices that commute, there is a related exercise in Hoffman's book (I do not remember on which page).
Edit 1.
In this particular case, the formula can also be obtained directly from the definition of the inverse: partition the inverse matrix into blocks of the same sizes, form the product, and solve the resulting system of matrix equations. I believe a Mathematical Olympiad student could have this idea, and an average student can follow it.
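Carrying out that computation gives the standard block-inverse identity, stated here for convenience (assuming the scalar $k$ below is nonzero):
$$
A_{n+1}^{-1}=
\begin{pmatrix}
A_n^{-1}+\frac{1}{k}\,A_n^{-1}v^{T}vA_n^{-1} & -\frac{1}{k}\,A_n^{-1}v^{T}\\
-\frac{1}{k}\,vA_n^{-1} & \frac{1}{k}
\end{pmatrix},
\qquad
k=\frac{1}{2(n+1)-1}-vA_n^{-1}v^{T}.
$$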
Edit 2. Carrying out what was said in Edit 1, in the notation of the link, it remains to verify that for $k =\frac{1}{2(n+1)-1} - vA_n^{-1}v^T$ we have $\frac{1}{k}\in\mathbb{Z}$ and $\frac{1}{k}A_n^{-1}v^T \in\mathbb{Z}^{n}$. This can again be checked by induction.
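These integrality claims can be machine-checked for small $n$ (my own sketch in exact rational arithmetic; `hilbert` and `inv` are ad-hoc helpers, not from the answer):

```python
from fractions import Fraction

def hilbert(n):
    # exact n x n Hilbert matrix (0-based indices)
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def inv(M):
    # exact Gauss-Jordan inverse over the rationals
    # (no pivoting needed: the Hilbert matrix is positive definite)
    n = len(M)
    A = [list(row) + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        p = A[c][c]
        A[c] = [x / p for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

for n in range(1, 6):
    An_inv = inv(hilbert(n))
    v = [Fraction(1, n + 1 + j) for j in range(n)]  # the row vector v
    w = [sum(An_inv[i][j] * v[j] for j in range(n)) for i in range(n)]  # A_n^{-1} v^T
    k = Fraction(1, 2 * (n + 1) - 1) - sum(v[i] * w[i] for i in range(n))
    assert (1 / k).denominator == 1                  # 1/k is an integer
    assert all((x / k).denominator == 1 for x in w)  # (1/k) A_n^{-1} v^T is integral
```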
The answer is "yes" if $U$ is nonsingular and totally unimodular, but in general, the answer is "no", as shown by the random counterexample below:
$$
\pmatrix{1&0&0\\ 1&1&0\\ -1&1&1}^{-1}=\pmatrix{1&0&0\\ -1&1&0\\ 2&-1&1}.
$$
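The example can be verified by direct multiplication (a quick check of mine in plain Python):

```python
U = [[1, 0, 0], [1, 1, 0], [-1, 1, 1]]
V = [[1, 0, 0], [-1, 1, 0], [2, -1, 1]]
# U V should be the 3 x 3 identity
prod = [[sum(U[i][k] * V[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# U is unimodular (det 1, lower triangular with unit diagonal) but not
# totally unimodular: the minor on rows {2,3}, columns {1,2} is
# det [[1, 1], [-1, 1]] = 2
assert U[1][0] * U[2][1] - U[1][1] * U[2][0] == 2
```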
Best Answer
Be wise, generalize (c)
I think the nicest way to answer this question is a direct computation of the inverse, albeit for a more general matrix that includes the Hilbert matrix as a special case. The corresponding formulas have a very transparent structure and admit nontrivial further generalizations.
The matrix $A$ is a particular case of the so-called Cauchy matrix with elements $$A_{ij}=\frac{1}{x_i-y_j},\qquad i,j=1,\ldots, N.$$ Namely, in the Hilbert case we can take $$x_i=i-\frac{1}{2},\qquad y_i=-i+\frac12.$$ The determinant of $A$ is given in the general case by $$\mathrm{det}\,A=\frac{\prod_{1\leq i<j\leq N}(x_i-x_j)(y_j-y_i)}{\prod_{1\leq i,j\leq N}(x_i-y_j)}.\tag{1}$$ Up to an easily computable constant prefactor, the structure of (1) follows from the observation that $\mathrm{det}\,A$ vanishes whenever there is a pair of coinciding $x$'s or $y$'s. (In that case $A$ contains a pair of coinciding rows/columns.) For our $x$'s and $y$'s the determinant is clearly non-zero, hence $A$ is invertible.
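Formula (1) can be checked against a brute-force determinant for small $N$ (my own sketch; the Leibniz-formula `det` is only meant for tiny sizes):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz formula: sum over permutations of sign * product of entries
    n = len(M)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = Fraction(sign)
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def product(factors):
    out = Fraction(1)
    for f in factors:
        out *= f
    return out

for N in range(2, 5):
    x = [Fraction(2 * i - 1, 2) for i in range(1, N + 1)]  # x_i = i - 1/2
    y = [Fraction(1 - 2 * i, 2) for i in range(1, N + 1)]  # y_i = -i + 1/2
    # Cauchy matrix; with these x, y it is exactly the Hilbert matrix
    A = [[1 / (x[i] - y[j]) for j in range(N)] for i in range(N)]
    rhs = (product((x[i] - x[j]) * (y[j] - y[i])
                   for i in range(N) for j in range(i + 1, N))
           / product(x[i] - y[j] for i in range(N) for j in range(N)))
    assert det(A) == rhs  # formula (1)
```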
One can also easily find the inverse $A^{-1}$, since the matrix obtained from a Cauchy matrix by deleting one row and one column is also of Cauchy type, with one $x$ and one $y$ less. Taking the ratio of the corresponding two determinants and using (1), most of the factors cancel out and one obtains \begin{align} A_{mn}^{-1}=\frac{1}{y_m-x_n}\frac{\prod_{1\leq i\leq N}(x_n-y_i)\cdot\prod_{1\leq i\leq N}(y_m-x_i)}{\prod_{i\neq n}(x_n-x_i)\cdot\prod_{i\neq m}(y_m-y_i)}.\tag{2} \end{align}
For our particular $x$'s and $y$'s, the formula (2) reduces to \begin{align} A_{mn}^{-1}&=\frac{(-1)^{m+n}}{m+n-1}\frac{\frac{(n+N-1)!}{(n-1)!}\cdot \frac{(m+N-1)!}{(m-1)!}}{(n-1)!(N-n)!\cdot(m-1)!(N-m)!}=\\ &=(-1)^{m+n}(m+n-1){n+N-1 \choose N-m}{m+N-1 \choose N-n}{m+n-2\choose m-1}^2. \end{align} The last expression is clearly integer. $\blacksquare$
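As a final check, the closed-form entries agree with an exact inversion for small $N$ (a sketch of mine using `fractions` and `math.comb`; `hilbert_inv` is an ad-hoc helper):

```python
from fractions import Fraction
from math import comb

def hilbert_inv(N):
    # exact Gauss-Jordan inverse of the N x N Hilbert matrix
    # (no pivoting needed: the matrix is positive definite)
    A = [[Fraction(1, i + j + 1) for j in range(N)] +
         [Fraction(int(i == j)) for j in range(N)] for i in range(N)]
    for c in range(N):
        p = A[c][c]
        A[c] = [x / p for x in A[c]]
        for r in range(N):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[N:] for row in A]

N = 5
inverse = hilbert_inv(N)
for m in range(1, N + 1):
    for n in range(1, N + 1):
        # the closed-form expression from the answer, with 1-based m, n
        formula = ((-1) ** (m + n) * (m + n - 1)
                   * comb(n + N - 1, N - m)
                   * comb(m + N - 1, N - n)
                   * comb(m + n - 2, m - 1) ** 2)
        assert inverse[m - 1][n - 1] == formula
```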