If $A$ is a nonsingular matrix with rows $r_1,r_2,\ldots,r_n$, then $\{r_2,\ldots,r_n\}$ spans an $(n-1)$-dimensional subspace $P$ of $\mathbb R^n$. At least one of the standard basis vectors $e_1,e_2,\ldots,e_n$ is not in $P$, say $e_i$. Then $\{e_i,r_2,r_3,\ldots,r_n\}$ is a basis of $\mathbb R^n$, and it follows that there is a real number $c$ such that $r_1-ce_i$ is in $P$. The matrix $A'$ with rows $(r_1-ce_i),r_2,r_3,\ldots,r_n$ is singular, and it is obtained from $A$ by subtracting $c$ from the entry in the first row and $i^\text{th}$ column.
Here's a way to rephrase this somewhat more geometrically. The subspace $P$ is a hyperplane that divides $\mathbb R^n$ into two half-spaces, and $r_1$ lies in one of these halves. The line through $r_1$ in the direction of a vector $v$ has the form $\{r_1+tv:t\in\mathbb R\}$. This line is parallel to $P$ only if $v$ is in $P$; otherwise, the line will cross $P$. Since $P$ can't be parallel to all of the coordinate directions (or else it would fill up all of $\mathbb R^n$), there must be a line of the form $\{r_1+te_i:t\in\mathbb R\}$ that crosses $P$, where $e_i$ is the standard basis vector with a $1$ in the $i^\text{th}$ position and $0$s elsewhere. This means that there exists $t_0\in \mathbb R$ such that $r_1+t_0e_i\in P$. And then, linear dependence of the vectors $r_1+t_0e_i,r_2,\ldots,r_n$ means that the matrix with those rows is singular.
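For what it's worth, the one-entry construction above can be checked numerically. Here's a minimal sketch in Python (the helper names `perm_det` and `make_singular_one_entry` are my own, not from the argument above); it uses the fact that the determinant is linear in any single entry, with slope equal to the corresponding cofactor:

```python
from itertools import permutations
from fractions import Fraction

def perm_det(A):
    """Determinant as the sum of n! signed one-per-row-and-column products."""
    n = len(A)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:          # each inversion flips the sign
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][p[i]]
        total += sign * prod
    return total

def make_singular_one_entry(A):
    """Change one entry in the first row of a nonsingular A to make it singular.

    det is linear in the entry (0, j), with slope equal to a cofactor; since
    det(A) != 0, some cofactor in the first row is nonzero, and shifting that
    entry by -det(A)/cofactor drives the determinant to zero.
    """
    n = len(A)
    d = perm_det(A)
    for j in range(n):
        B = [row[:] for row in A]
        B[0][j] += 1
        cofactor = perm_det(B) - d       # slope of det in entry (0, j)
        if cofactor != 0:
            B[0][j] = A[0][j] - d / cofactor
            return B
    raise ValueError("A was already singular")

A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(1)]]
S = make_singular_one_entry(A)           # S differs from A in exactly one entry
```

Using exact `Fraction` arithmetic avoids any floating-point ambiguity about whether the resulting determinant is really zero.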
Best Answer
NB: I'm assuming that this was a question about matrices with entries that are real (or maybe complex) numbers.
If you have an all-zero matrix, you need to alter at least $n$ entries to get a nonzero determinant, because the determinant is the sum of all products of exactly $n$ entries taken one-from-each-row-and-column (albeit a sum with signs). If at most $n-1$ entries are nonzero, then each such product contains at least one zero factor, so each summand in the determinant is zero, the determinant is zero, and the matrix is not invertible.
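For small $n$ this can be checked exhaustively. A quick sketch (the helper name `perm_det` is mine): place $n - 1$ nonzero entries in every possible set of positions of a $3 \times 3$ zero matrix and confirm the determinant is always zero.

```python
from itertools import permutations, combinations

def perm_det(A):
    """Determinant as the sum of n! signed one-per-row-and-column products."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:          # each inversion flips the sign
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += sign * prod
    return total

n = 3
for positions in combinations(range(n * n), n - 1):
    A = [[0] * n for _ in range(n)]
    for k in positions:
        A[k // n][k % n] = 7             # any nonzero value would do
    # every one-per-row-and-column product still hits a zero entry
    assert perm_det(A) == 0
```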
If you're allowed to change $n$ entries, then the answer is yes. What you do is pick a really large number $K$ (large relative to the other entries in your matrix --- details below) and alter all the diagonal elements to be $K$. For large enough $K$, this will have nonzero determinant.
I think this is clear for $n = 1$, so I'll consider only $n > 1$ from now on.
The "How large?" question
The determinant of an $n \times n$ matrix $A$ can be computed by taking a sum of terms, where each term is produced by picking $n$ entries of $A$, one from each row and column, computing their product, and then multiplying by $\pm 1$. How many such terms are there? It turns out that there are $n!$ of them. For a $2 \times 2$ matrix, for instance, the terms are $a_{11}a_{22}$ and $-a_{21}a_{12}$, and there are exactly $2! = 2$ of these; if you write out the $3 \times 3$ formula, you'll find you have $6$ terms, and the pattern will become obvious. I'm being casual about the pluses and minuses because they're not going to matter much in what follows.
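That description translates directly into code. A sketch (the function name is mine), with the sign of each term obtained by counting inversions of the permutation:

```python
from itertools import permutations

def perm_det(A):
    """Sum over all n! permutations of signed one-per-row-and-column products."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):     # n! terms in all
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:          # each inversion flips the sign
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]           # one entry from each row and column
        total += sign * prod
    return total

# matches the 2x2 formula a11*a22 - a21*a12
assert perm_det([[3, 4], [5, 6]]) == 3 * 6 - 5 * 4
```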
Let's say that $A'$ is the matrix $A$ with the diagonal entries all replaced by the number $K$, for some $K$. I'm going to divide the terms in the determinant of $A'$ into two piles:

1. the single term that is the product of all $n$ diagonal entries, namely $K^n$,

and

2. all the remaining terms, each of which uses at most $n-1$ of the diagonal entries.
I want to estimate how large the sum of terms of type $2$ can be.
First, there are $n! - 1$ of these terms.
Second, the sum of these terms is no larger than the sum of the absolute values of the terms. (I'm using the idea that $|a + b| \le |a| + |b|$, over and over again.) So letting $b_{ij} = |a_{ij}|$, I know that each term is a product of some number of $b_{ij}$s and some number of $K$s, but at most $n-1$ of the $K$s. Let's let $$ M = \max_{i,j = 1,\ldots,n} b_{ij} $$ be the largest of the numbers $b_{ij}$, i.e., the largest absolute value of any entry of $A$. And let's agree to choose $K$ larger than $M$, although exactly how large is still to be decided.
Then we know that each of our $n! - 1$ terms has at most $n-1$ copies of $K$ and at least one factor $b_{ij} \le M$, and hence (since every other factor is at most $K$) is no more than $$ T = K^{n-1} M. $$
That means that the sum of all type-2 terms is at most $$ (n! - 1)T = (n! - 1) K^{n-1} M. $$ The remaining term is $K^n$ (from the product of the altered diagonal entries).
Now I'm going to tell you how to pick $K$. If $M = 0$, then $A$ is the all-zero matrix and any nonzero $K$ works, so assume $M > 0$ from here on. I pick $K$ to be $$ K = 2 (n! - 1) M. $$ I chose that carefully so that the single type-1 term, $K^n$, is at least twice as big as the biggest possible sum of all the other terms.
Now earlier, I promised I was going to pick $K > M$. Does this formula do that? Well, remember that I said I was giving a proof only for $n > 1$, so $n! \ge 2$, so $n! - 1 \ge 1$, hence $2(n! - 1) \ge 2$. Hence my number $K$ is at least $2M$, hence greater than $M$, so I'm OK.
Now let's look at the determinant. It's the sum of all the terms. The (single) type-1 term is $$ K^n $$ which I'm going to say is $$ K^{n-1} \cdot K = K^{n-1} \cdot (2 (n! - 1) M). \tag{1} $$ Hold that thought.
The sum of the absolute values of all the type-2 terms is no larger than $$ K^{n-1} (n! - 1)M $$ as I showed earlier. Letting $U = K^{n-1}(n! - 1)M$, the type-1 term is $2U$, and the sum of all the other terms is no larger than $U$ in absolute value. So even if all the type-2 terms ended up negative, we'd still have at least $2U - U = U > 0$, i.e., a nonzero determinant.
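As a sanity check, here's the whole argument run on a concrete $3 \times 3$ example (a sketch; `perm_det` and the sample matrix are my own choices): replace the diagonal with $K = 2(n! - 1)M$ and confirm the determinant is at least $U = K^{n-1}(n! - 1)M$.

```python
from itertools import permutations
from math import factorial

def perm_det(A):
    """Determinant as the sum of n! signed one-per-row-and-column products."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:          # each inversion flips the sign
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += sign * prod
    return total

A = [[0, 5, -3],
     [2, 0, 4],
     [-1, 6, 0]]
n = len(A)
M = max(abs(A[i][j]) for i in range(n) for j in range(n))   # M = 6
K = 2 * (factorial(n) - 1) * M                              # K = 2 * 5 * 6 = 60
Aprime = [row[:] for row in A]
for i in range(n):
    Aprime[i][i] = K                     # replace the diagonal with K

U = K ** (n - 1) * (factorial(n) - 1) * M
assert abs(perm_det(Aprime)) >= U        # det is at least 2U - U = U > 0
```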
All of this is essentially a proof of @Damien's remark about diagonally dominant matrices, which might not be familiar to someone who'd ask this question.