No, being large component-wise is not enough. Consider the system
$$
\begin{cases} 2x+y+z = b_1 \\ x+2y+z=b_2 \end{cases}
$$
If $x,y,z\ge 0$, then obviously $b_2\le 2b_1$ (since $2b_1 - b_2 = 3x + z \ge 0$). So, for $b=(N,3N)$ with $N>0$ there are no nonnegative solutions. But there are integer solutions, e.g. $x=0$, $y=2N$, $z=-N$.
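The claim above is easy to verify numerically; here is a sketch using `scipy.optimize.linprog` as a feasibility check (the choice $N=1$ is arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

N = 1
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0]])
b = np.array([N, 3 * N])

# Feasibility LP with a zero objective: is there x >= 0 with Ax = b?
res = linprog(c=np.zeros(3), A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
print(res.status)   # 2, i.e. infeasible: no nonnegative solution

# The integer solution x = 0, y = 2N, z = -N still satisfies the equations:
x = np.array([0, 2 * N, -N])
print(A @ x)        # [N, 3N]
```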
This scenario is explicitly handled by Gordan's theorem, which states
$$
\text{either} \quad
\exists x \in \mathbb{R}_+^m\setminus\{0\} \centerdot Ax = 0,
\quad\text{or}\quad
\exists y\in\mathbb{R}^n\centerdot A^\top y > 0,
$$
where $\mathbb{R}_+$ denotes nonnegative reals.
(Like Farkas's Lemma, this is a "Theorem of Alternatives"; furthermore, it can be proved from Farkas's lemma.)
A nice way to prove this is, as in Theorem 2.2.6 of Borwein & Lewis's text "Convex Analysis and Nonlinear Optimization", to consider the related optimization problem
$$
\inf_y \quad\underbrace{\ln\left(\sum_{i=1}^m \exp(y^\top A \textbf e_i)\right)}_{f(y)};
$$
as stated in that theorem, $f(y)$ is unbounded below iff there exists $y$ so that $A^\top y > 0$. As such, this also gives an unconstrained optimization problem you can plug into your favorite solver to determine which of the two scenarios you are in. Alternatively, you can explicitly solve for either the primal variables $x$ or the dual variables $y$ by considering a similar max entropy problem (i.e.
$\inf_y\sum_i \exp(y^\top A\textbf{e}_i)$, which approaches 0 iff the desired $y$ exists) or its dual (you can find this in the above book, as well as papers by the same authors).
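If you only need to know which alternative holds (rather than the optimization route above), both sides of Gordan's theorem can also be tested directly with two LPs. This is a sketch, not the book's method: since $Ax=0$, $x\ge 0$ is homogeneous, a nonzero solution exists iff one exists with $\mathbf{1}^\top x = 1$, and $A^\top y > 0$ can be rescaled to $A^\top y \ge \mathbf{1}$. The function name `gordan_alternative` is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def gordan_alternative(A):
    n, m = A.shape
    # Alternative 1: x >= 0, x != 0, Ax = 0, normalized by sum(x) = 1.
    prim = linprog(c=np.zeros(m),
                   A_eq=np.vstack([A, np.ones((1, m))]),
                   b_eq=np.append(np.zeros(n), 1.0),
                   bounds=[(0, None)] * m)
    if prim.status == 0:
        return ("x", prim.x)
    # Alternative 2: y with A^T y >= 1 componentwise, written as -A^T y <= -1.
    dual = linprog(c=np.zeros(n),
                   A_ub=-A.T, b_ub=-np.ones(m),
                   bounds=[(None, None)] * n)
    return ("y", dual.x)

# Columns that cancel: a nonzero nonnegative x with Ax = 0 exists.
which, v = gordan_alternative(np.array([[1.0, -1.0], [1.0, -1.0]]))
print(which)   # "x"
```

By the theorem, exactly one of the two LPs is feasible, so the second is only attempted when the first fails.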
Anyway, considering Gordan's theorem, your condition on the columns (which can be written $\textbf{1}^\top A = 0$) has no relationship to the question at hand. In one of your comments you mentioned wanting to generate these matrices. To generate positive examples, fix a nonnegative $x\neq 0$ and build each row $b_i'$ by taking an arbitrary $b_i$ and projecting out its component along $x$: $b_i' := b_i - (x^\top b_i)x / (x^\top x)$. To generate negative examples, by Gordan's theorem, choose some nonzero $y$ and build $A$ column by column, accepting a candidate column $a_i$ only if $a_i^\top y > 0$.
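The generation recipe above can be sketched as follows (the helper names `make_positive` and `make_negative` are illustrative, not from any library):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_positive(x, n):
    """Rows orthogonal to a fixed nonnegative x != 0, so that Ax = 0."""
    m = len(x)
    B = rng.standard_normal((n, m))
    # Project each row of B onto the hyperplane orthogonal to x.
    return B - np.outer(B @ x, x) / (x @ x)

def make_negative(y, m):
    """Columns a_i with a_i^T y > 0, so A^T y > 0 and no nonzero x exists."""
    n = len(y)
    cols = []
    while len(cols) < m:
        a = rng.standard_normal(n)
        if a @ y > 0:       # rejection sampling on the half-space
            cols.append(a)
    return np.column_stack(cols)

x = np.array([1.0, 2.0, 3.0])
A_pos = make_positive(x, 2)
print(A_pos @ x)            # ~ [0, 0]

y = np.array([1.0, -1.0])
A_neg = make_negative(y, 4)
print(A_neg.T @ y)          # all entries > 0
```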
Perhaps this should be a comment but it is too long.
The classic result used for existence is Farkas's lemma, though this gives a non-existence condition rather than an existence condition. It says that, given any such problem, there is another such problem associated with it, and exactly one of the two has a solution. It is equivalent to the separating hyperplane theorem, linear programming duality, and the minimax theorem.
As for uniqueness, I do not know of a direct characterization. Given a specific such system, one can determine uniqueness by minimizing and maximizing each coordinate over the feasible set: the solution is unique if and only if the min and max agree for every coordinate (in fact, one can get away with solving $n+1$ LPs rather than $2n$). But this does not give a directly applicable analytic condition.
Finally, it is worth noting, in reference to Wadim's comment, that uniqueness can be a subtle issue for linear programs. In particular, it is possible that the affine subset defined by the equations has positive dimension, yet intersects the nonnegative cone in a single point. For example, consider the homogeneous equation $x_1 + \ldots + x_n = 0$. Its only nonnegative solution is the zero solution, and this remains true no matter how many further homogeneous equations, linearly dependent or independent, you add. So the rank of $A$ does not determine uniqueness.
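The min/max coordinate check described earlier can be sketched with $2n$ LPs (the simple version, not the $n+1$-LP refinement; `is_unique` is an illustrative name). Applied to the example in the last paragraph, it confirms that $x_1+\ldots+x_n=0$ has a unique nonnegative solution despite $A$ having rank 1:

```python
import numpy as np
from scipy.optimize import linprog

def is_unique(A, b, tol=1e-8):
    """Unique nonnegative solution of Ax = b iff each coordinate's
    min and max over the feasible set coincide."""
    n = A.shape[1]
    for i in range(n):
        c = np.zeros(n); c[i] = 1.0
        lo = linprog(c,  A_eq=A, b_eq=b, bounds=[(0, None)] * n)
        hi = linprog(-c, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
        if lo.status != 0 or hi.status != 0:
            return False    # infeasible, or coordinate unbounded above
        if hi.x[i] - lo.x[i] > tol:
            return False
    return True

# x1 + x2 + x3 = 0 with x >= 0 forces x = 0, even though rank(A) = 1.
print(is_unique(np.array([[1.0, 1.0, 1.0]]), np.array([0.0])))  # True
```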