a) is correct
b) No, it's not true. If $A=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$, then the sum of the singular values is $2$, while the absolute value of the trace is zero.
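A quick NumPy check of this counterexample (a sketch; the matrix is exactly the one above):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# singular values of A are (1, 1): sum is 2
singular_values = np.linalg.svd(A, compute_uv=False)
sum_sv = singular_values.sum()

# trace is 1 + (-1) = 0
abs_trace = abs(np.trace(A))
print(sum_sv, abs_trace)
```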
To discuss whether your matrix is ill-conditioned, you need to say which norm you are talking about. Assuming we are talking about the operator norm (= largest singular value), if the determinant is small while $\|A\|$ stays of moderate size, then some singular value must be small; the inverse then has a large singular value, so the condition number $\sigma_1/\sigma_n$ will be large.
c) You have $\sigma_1\sigma_2\cdots\sigma_n<10^{-k}$. If all $\sigma_j\geq 10^{-k/n}$, then $$|\det A|=\sigma_1\cdots\sigma_n\geq(10^{-k/n})^n=10^{-k},$$
contradicting the hypothesis; so at least one singular value is less than $10^{-k/n}$.
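This bound is easy to check numerically; here is a sketch with NumPy (the random test matrix and its scaling are just illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = 1e-2 * rng.standard_normal((n, n))   # scaled so |det A| is small

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
det = abs(np.linalg.det(A))              # equals the product of the s_j

k = -np.log10(det)                       # chosen so that |det A| = 10^{-k}
smallest = s.min()
bound = 10 ** (-k / n)                   # geometric-mean bound from the argument
print(smallest, bound)
```

The product of the singular values reproduces $|\det A|$, and the smallest singular value indeed falls below $10^{-k/n}$.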
2) This is not well phrased, because they can be equal. The maximum singular value is $\|A\|=\|A^TA\|^{1/2}$. Now let $\lambda$ be
an eigenvalue of $A$ with unit eigenvector $v$ (if $\lambda$ is complex, replace transposes by conjugate transposes below). Then
$$
|\lambda|=\|\lambda v\|=\|Av\|=(v^TA^TAv)^{1/2}\leq\|A^TA\|^{1/2}(v^Tv)^{1/2}=\|A^TA\|^{1/2}.
$$
So every eigenvalue is no larger in absolute value than the biggest singular value.
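The inequality $|\lambda|\leq\sigma_{\max}$ can be verified numerically; a sketch with NumPy on an arbitrary random real matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

eigvals = np.linalg.eigvals(A)                      # may be complex
sigma_max = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value

# every eigenvalue lies in the disk of radius sigma_max
print(np.abs(eigvals).max(), sigma_max)
```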
The condition number of a square nonsingular matrix $A$ is defined by
Definition $1$: $cond(A) = \|A\| \|A^{-1}\|$, where $\|\cdot\|$ can be any of the norms defined for matrices. If we use the usual Euclidean norm on vectors and the associated matrix norm, then the condition number is the ratio of the largest singular value of $A$ to the smallest.
Definition $2$: The condition number of an arbitrary matrix is defined as
$cond(A) = \|A\| \|A^+\|$, where $A^+$ is the pseudoinverse of $A$. Note that for a square nonsingular matrix $A^+ = A^{-1}$, so Definition $2$ generalizes Definition $1$ to arbitrary matrices.
$\def\p#1#2{\frac{\partial #1}{\partial #2}}$ Given a symmetric matrix $M$, consider the scalar function $$\mu(x) = x^TMx \quad\implies\quad \p{\mu}{x} = 2\,Mx$$ The objective function is a ratio of such functions (with $M=A^TA$ in the numerator and $M=I$ in the denominator), therefore by the quotient rule $$\eqalign{ \lambda &= \frac{x^TA^TAx}{x^Tx} \;\doteq\; \frac{\alpha}{\beta} \\ \p{\lambda}{x} &= \frac{2A^TAx}{\beta} - \frac{\alpha}{\beta^2}\,2x \;=\; 2\beta^{-1}(A^TAx-\lambda x) \\ }$$ Setting the gradient to zero yields an eigenvalue equation, whose solutions are the extrema of the objective $$\eqalign{ A^TAx &= \lambda x \\ }$$ Since $A^TA$ is symmetric positive semidefinite, its eigenvalues and singular values coincide, and they equal the squares of the singular values of $A$. So the minimum of the objective function is $\sigma_{\min}^2$, the square of the smallest singular value of $A$.
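A numerical sketch of this conclusion with NumPy (the random matrix and the sampled directions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

sigma_min = np.linalg.svd(A, compute_uv=False)[-1]

# Rayleigh quotient x^T A^T A x / x^T x at many random directions x
X = rng.standard_normal((5, 10000))
AX = A @ X
rayleigh = (AX * AX).sum(axis=0) / (X * X).sum(axis=0)

# the minimum is attained at the eigenvector of A^T A
# belonging to its smallest eigenvalue
w, V = np.linalg.eigh(A.T @ A)           # eigenvalues in ascending order
v = V[:, 0]
attained = v @ (A.T @ A @ v) / (v @ v)
print(rayleigh.min(), sigma_min**2, attained)
```

Every sampled quotient stays above $\sigma_{\min}^2$, and the eigenvector attains that bound exactly.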