The prime number theorem, as proved by de la Vallée-Poussin, states that
\[\pi(x) = \sum_{p \leq x} 1 = \mathrm{Li}(x) + O\left(xe^{-a\sqrt{\log x}}\right),\]
where
\[\mathrm{Li}(x) = \int_2^x \frac{dt}{\log t} \sim \frac{x}{\log x}.\]
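As a quick illustrative check (not part of any proof), one can compare $\pi(x)$ against a numerical approximation of $\mathrm{Li}(x)$; the sieve and the midpoint rule below are my own hypothetical choices for the computation.

```python
# Illustrative numerics: compare pi(x) with Li(x) = int_2^x dt/log t,
# the latter approximated by the midpoint rule.
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, ok in enumerate(sieve) if ok]

def li(x, steps=200_000):
    """Approximate Li(x) = int_2^x dt/log t by the midpoint rule."""
    h = (x - 2) / steps
    return h * sum(1 / math.log(2 + (k + 0.5) * h) for k in range(steps))

x = 10 ** 6
pi_x = len(primes_up_to(x))
li_x = li(x)
print(pi_x, li_x)  # pi(10^6) = 78498; Li(10^6) exceeds it by roughly 130
```

The smallness of $\mathrm{Li}(x) - \pi(x)$ relative to $x/\log x$ is exactly what the error term in the prime number theorem quantifies.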
Mertens' second theorem is sometimes stated as
\[\sum_{p \leq x}\frac{1}{p} = \log \log x + M + o(1),\]
but the result that Mertens proved is actually slightly stronger, namely that there exists a constant $M$, the Meissel-Mertens constant, such that
\[\sum_{p \leq x}\frac{1}{p} = \log \log x + M + O\left(\frac{1}{\log x}\right).\]
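One can watch this convergence numerically; the sketch below (an illustration, with my own choice of sieve) tabulates $\sum_{p \leq x} 1/p - \log \log x$, which should settle near $M = 0.2614\ldots$ with an error of size $O(1/\log x)$.

```python
# Illustrative check of Mertens' second theorem: the difference
# sum_{p <= x} 1/p - log log x approaches the Meissel-Mertens constant.
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, ok in enumerate(sieve) if ok]

for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    diff = sum(1 / p for p in primes_up_to(x)) - math.log(math.log(x))
    print(x, diff)
```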
To get from $\sum_{p \leq x} \frac{1}{p}$ to $\sum_{p \leq x} 1$, one uses partial summation, which is the rigorous way of replacing a sum by an integral. This states that if $(c_n)$ is a sequence of complex numbers and $f$ is a continuously differentiable function, then
\[\sum_{n \leq x} c_n f(n) = C(x) f(x) - \int_{1}^{x} C(t) f'(t) \, dt,\]
where
\[C(x) = \sum_{n \leq x} c_n.\]
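This identity can be verified exactly in a finite setting. The sketch below (with the illustrative choices $f(t) = t^2$ and random weights $c_n$, neither of which comes from the text) exploits the fact that $C(t)$ is constant on each interval $[n, n+1)$, so the integral can be evaluated piece by piece with no quadrature error.

```python
# Finite sanity check of partial summation:
#   sum_{n <= N} c_n f(n) = C(N) f(N) - int_1^N C(t) f'(t) dt.
import random

N = 50
c = [0.0] + [random.random() for _ in range(N)]   # weights c[1..N]
C = [0.0] * (N + 1)                               # C[n] = sum_{m <= n} c_m
for n in range(1, N + 1):
    C[n] = C[n - 1] + c[n]

f = lambda t: t * t
lhs = sum(c[n] * f(n) for n in range(1, N + 1))   # sum_{n <= N} c_n f(n)
# int_1^N C(t) f'(t) dt, computed exactly since C is a step function:
integral = sum(C[n] * (f(n + 1) - f(n)) for n in range(1, N))
rhs = C[N] * f(N) - integral
print(lhs, rhs)  # equal up to floating-point rounding
```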
So if we take $c_n = \frac{1}{n}$ when $n = p$ is prime and $c_n = 0$ otherwise, and if $f(n) = n$, then this gives
\[\sum_{p \leq x} 1 = x \sum_{p \leq x} \frac{1}{p} - \int_{e^2}^{x} \left(\sum_{p \leq t} \frac{1}{p}\right) \, dt + O(1).\]
Here I've changed the lower endpoint of the integral on the right-hand side, which changes the value only by a bounded amount, hence the $O(1)$. Using Mertens' second theorem shows that the integral on the right-hand side is
\[\int_{e^2}^{x} \left(\sum_{p \leq t} \frac{1}{p}\right) \, dt = \int_{e^2}^{x} \log \log t \, dt + \int_{e^2}^{x} M \, dt + O\left(\int_{e^2}^{x} \frac{dt}{\log t}\right).\]
Using integration by parts, one can show that
\[\int \log \log x \, dx = x \log \log x - \mathrm{Li}(x) + C,\]
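This antiderivative is easy to spot-check numerically (the midpoint-rule quadrature below is my own illustrative choice): differentiating $x \log \log x$ gives $\log \log x + 1/\log x$, and the $1/\log x$ is cancelled by $-\mathrm{Li}(x)$.

```python
# Numerical spot-check of int log log x dx = x log log x - Li(x) + C,
# comparing a quadrature value of int_{e^2}^{X} log log t dt with the
# closed form evaluated between the endpoints.
import math

def midpoint(f, a, b, steps=200_000):
    """Midpoint-rule approximation of int_a^b f(t) dt."""
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

Li = lambda x: midpoint(lambda t: 1 / math.log(t), 2.0, x)
F = lambda x: x * math.log(math.log(x)) - Li(x)   # the antiderivative

a, X = math.e ** 2, 1000.0
numeric = midpoint(lambda t: math.log(math.log(t)), a, X)
closed = F(X) - F(a)
print(numeric, closed)  # the two values agree closely
```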
and so combining everything, we get
\[\sum_{p \leq x} 1 = x \left(\sum_{p \leq x} \frac{1}{p} - \log \log x - M\right) + O(\mathrm{Li}(x)) = O\left(\frac{x}{\log x}\right),\]
where the last step follows by Mertens' second theorem again. So we just fall short of the prime number theorem. Indeed, if we had the slightly stronger statement that
\[\sum_{p \leq x}\frac{1}{p} = \log \log x + M + o\left(\frac{1}{\log x}\right),\]
then we could use partial summation to prove the prime number theorem.
This is a form of integration called Riemann-Stieltjes integration. In short, we define
$$ \int_a^b f(x) \, d g(x) = \lim \sum_j f(c_j) \big( g(x_{j + 1}) - g(x_j) \big), $$
where the limit is taken over partitions $a = x_0 < \cdots < x_n = b$ whose mesh tends to zero, with sample points $c_j \in [x_j, x_{j + 1}]$. This is a weighted generalization of ordinary Riemann integration, which is recovered by taking $g(x) = x$.
Concretely, we can think of this sort of integral as adding $f(x) \Delta(g(x))$ whenever $g(x)$ changes.
Here, $\vartheta(x) = \sum_{p \leq x} \log p$ is the first Chebyshev function. Observe that $\vartheta$ is a step function: it changes only on intervals $(p - \epsilon, p + \epsilon)$ around primes $p$ (for sufficiently small $\epsilon$). Thus a Riemann-Stieltjes integral of the form
$$ \int_a^b f(x) d \vartheta(x)$$
for a continuous function $f$ on an interval $[a, b]$ (and where $p_1, \ldots, p_k$ are the primes in the interval $[a, b]$) will be the limit of partition terms that look like
$$ \sum_{j = 1}^k f(\alpha_j) \big( \vartheta(p_j + \epsilon) - \vartheta(p_j - \epsilon) \big) \to \sum_{j = 1}^k f(p_j) \log p_j.$$
Here, I write $\alpha_j$ to mean some number in $(p_j - \epsilon, p_j + \epsilon)$. As I've assumed $f$ is continuous, $f(\alpha_j)$ tends to $f(p_j)$ as $\epsilon \to 0$. And the point is that for all sufficiently small $\epsilon > 0$, $\vartheta(p_j + \epsilon) - \vartheta(p_j - \epsilon) = \log p_j$ exactly. As $\vartheta(x)$ only changes around the primes $p_j$, there is no other contribution to the integral.
More explicitly, I have described a particular sequence of partitions of the interval $[a, b]$, consisting of width-$2\epsilon$ boxes around the primes, and I claim that taking refinements of this sequence (i.e. taking $\epsilon \to 0$) converges.
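This partition argument can be simulated directly. The sketch below (an illustration with my own choices: $f(x) = 1/x$, primes up to $50$, and random sample points $\alpha_j$) evaluates the box sums for shrinking $\epsilon$ and watches them approach $\sum_j f(p_j) \log p_j$.

```python
# Finite illustration of the epsilon-box partitions: each box
# (p - eps, p + eps) contributes f(alpha) * (theta(p+eps) - theta(p-eps)),
# and as eps -> 0 the sums approach sum_p f(p) log p.
import math, random

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, ok in enumerate(sieve) if ok]

def theta(x, primes):
    """First Chebyshev function theta(x) = sum_{p <= x} log p."""
    return sum(math.log(p) for p in primes if p <= x)

primes = primes_up_to(50)
f = lambda x: 1 / x                         # any continuous f works
target = sum(f(p) * math.log(p) for p in primes)

for eps in (0.4, 0.1, 0.01):                # eps < 1/2: one prime per box
    approx = sum(f(random.uniform(p - eps, p + eps))
                 * (theta(p + eps, primes) - theta(p - eps, primes))
                 for p in primes)
    print(eps, approx, target)
```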
Applying this to $f(x) = x^{-s}$ and to the integral $\int_1^B x^{-s} d \vartheta(x)$ shows that
$$ \int_1^B \frac{d \vartheta(x)}{x^s} = \sum_{p \leq B} \frac{\log p}{p^s}. $$
Taking the improper integral amounts to taking the limit as $B \to \infty$; as long as $s > 1$, which guarantees absolute convergence, this limit equals $\sum_p (\log p)/p^s$.
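One can see this convergence numerically; the sketch below (illustrative, with $s = 2$ and my own sieve) tabulates the partial sums $\sum_{p \leq B} (\log p)/p^s$ as $B$ grows.

```python
# Illustrative check of convergence for s > 1: partial sums of
# sum_{p <= B} (log p)/p^s stabilize as B grows (shown here for s = 2).
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, ok in enumerate(sieve) if ok]

partial = {}
for B in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    partial[B] = sum(math.log(p) / p ** 2 for p in primes_up_to(B))
    print(B, partial[B])
```

For $s = 1$ the same partial sums would instead grow like $\log B$, which is Mertens' first theorem in disguise.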
Best Answer
If we replace $A(x)$ by $A(x) - cx$, we are asking whether the convergence of $\int_1^\infty A(x)\,dx/x^2$ implies that $A(x)/x \to 0$ as $x \to \infty$.
One can create counterexamples even with $A(x) \ge 0$. Let $A(x)$ be zero save for "spikes" of height $n$ and width $1/n$ centred at $x = n$. Each spike has unit area, and weighting the spike by $1/x^2$ gives a contribution to the integral of order $O(1/n^2)$. This forces the convergence of $\int A(x)\,dx/x^2$. But $A(x)/x$ will oscillate between $0$ and $1$, since $A(n)/n = 1$ at each spike's centre.
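The spike construction is concrete enough to compute with. The sketch below (illustrative only) evaluates each spike's contribution $\int n\,dx/x^2$ over $[n - 1/(2n),\, n + 1/(2n)]$ in closed form and accumulates the weighted integral, which visibly converges even though $A(n)/n = 1$ at every centre.

```python
# Illustrative numerics for the spike counterexample: A(x) = n on
# [n - 1/(2n), n + 1/(2n)] and 0 elsewhere.

def spike_contribution(n):
    """Exact value of the integral of A(x)/x^2 over the n-th spike."""
    lo, hi = n - 1 / (2 * n), n + 1 / (2 * n)
    return n * (1 / lo - 1 / hi)        # n * int_lo^hi dx/x^2 ~ 1/n^2

checkpoints = {}
cum = 0.0
for n in range(1, 10 ** 6 + 1):
    cum += spike_contribution(n)
    if n in (10 ** 3, 10 ** 6):
        checkpoints[n] = cum
print(checkpoints)  # nearly identical values: the integral converges
```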
It is necessary in these types of Tauberian theorems to have some extra regularity on the functions involved.