Your understanding of the p-value is correct (well, technically it is the probability of seeing a correlation at least as strong as the observed one, if no correlation actually exists).
What counts as a strong or weak correlation depends on the context. It is often helpful to plot your data, or to generate random data with a given correlation and plot that, to get a feel for the strength of the correlation.
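To build that intuition, here is a minimal sketch (using NumPy; the target correlations and sample size are arbitrary choices) that draws samples from a bivariate normal with a chosen correlation and compares the sample $r$ to the target:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_sample(rho, n):
    """Draw n points from a bivariate normal with population correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return xy[:, 0], xy[:, 1]

for rho in (0.1, 0.5, 0.9):
    x, y = correlated_sample(rho, 500)
    r = np.corrcoef(x, y)[0, 1]
    print(f"target rho = {rho}, sample r = {r:.2f}")
```

Plotting each `(x, y)` cloud (e.g. with `matplotlib.pyplot.scatter`) alongside its `r` is a quick way to calibrate what a given correlation looks like.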
The p-value is determined by the observed correlation and the sample size, so with a large enough sample size even a very weak correlation can be significant, meaning that what you saw is likely real and not due to chance; it just may not be very interesting. On the other hand, with small sample sizes you can get a very strong correlation that is not statistically significant, meaning that chance (no real relationship) remains a plausible explanation. Think of just 2 points: the correlation will almost always be 1 or -1 even when there is no relationship, so a correlation of that size can easily be attributed to chance.
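Both effects can be demonstrated in a few lines (a sketch with SciPy; the sample sizes, seed, and the small 5-point dataset are arbitrary illustrations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Large sample: a very weak true correlation (about 0.02) is still significant.
n = 100_000
x = rng.normal(size=n)
y = 0.02 * x + rng.normal(size=n)
r, p = stats.pearsonr(x, y)
print(f"n={n}: r = {r:.3f}, p = {p:.3g}")  # tiny r, yet small p

# Small sample: r = 0.80 looks strong, yet is not significant at the 0.05 level.
r2, p2 = stats.pearsonr([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
print(f"n=5: r = {r2:.2f}, p = {p2:.3f}")
```

The first case is "real but uninteresting"; the second is "impressive-looking but plausibly chance".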
[Fixed/improved, based on the feedback from @Momo and @whuber]
I believe that, in the context of regression, the relationship between the $p$-value and Pearson's correlation coefficient is the following: the $p$-value can be interpreted as the probability that a correlation coefficient, determined in a random sampling-based experiment, is at least as large in magnitude as the one determined from the observed data, provided that the null hypothesis is true. In other words, I think that the $p$-value in this context belongs to hypothesis testing, where the hypotheses themselves are correlation-based, as follows:
\begin{multline}
\shoveleft{H_0: \text{correlation (of the underlying data-generation process) is zero;}}\\
\shoveleft{H_A: \text{the correlation is not zero.}}
\end{multline}
Then the situation IMHO boils down to the traditional hypothesis-testing interpretation: if the $p$-value is small (less than an arbitrarily selected significance level $\alpha$, usually 0.05), then you reject the null hypothesis ("the determined correlation is statistically significant"), and if the $p$-value is greater than $\alpha$, then you fail to reject the null ("the correlation is not statistically significant").
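That decision rule can be sketched as a small helper (assuming SciPy; the function name and example datasets are mine, not from the question):

```python
from scipy import stats

def correlation_test(x, y, alpha=0.05):
    """Test H0: the population correlation is zero, at significance level alpha."""
    r, p = stats.pearsonr(x, y)
    decision = "reject H0" if p < alpha else "fail to reject H0"
    return r, p, decision

# Weak evidence: r = 0.8 on only 5 points.
print(correlation_test([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))
# Strong evidence: a perfect linear relationship on 20 points.
print(correlation_test(list(range(20)), [2 * i + 1 for i in range(20)]))
```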
In regard to the relationship between the $p$-value and sample size $N$, the following formulae express it mathematically.
The Fisher-transformed test statistic of $r$ (a.k.a. $z$) is defined as $T(r) = \operatorname{artanh}(r)$.
For a bivariate normal distribution, $z$'s standard error depends on sample size $N$, as follows:
\begin{align}
SE(T(r)) \approx \frac{1}{\sqrt{N - 3}}
\end{align}
Moreover, since the test statistic is approximately normal,
\begin{align}
\frac{T(r)}{SE(T(r))} \sim N(0,1) \text{ (approximately, under } H_0\text{)} \quad \text{and} \quad \lim_{N\to\infty} SE(T(r)) = 0
\end{align}
so, for a fixed observed $r$, the standard error in the denominator gets smaller and smaller as $N$ grows, the test statistic grows in magnitude, and the $p$-value shrinks.
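The formulae above can be turned into a short computation (a sketch assuming SciPy; the fixed $r = 0.30$ and the sample sizes are arbitrary) showing the same $r$ becoming significant purely through growing $N$:

```python
import numpy as np
from scipy import stats

def fisher_z_pvalue(r, n):
    """Two-sided p-value for H0: rho = 0, via the Fisher transformation."""
    z = np.arctanh(r)             # T(r) = artanh(r)
    se = 1.0 / np.sqrt(n - 3)     # SE(T(r)), which shrinks as n grows
    return 2 * stats.norm.sf(abs(z) / se)

# Same observed r, growing N: the p-value falls as SE(T(r)) shrinks.
for n in (10, 50, 500):
    print(f"r = 0.30, n = {n}: p = {fisher_z_pvalue(0.30, n):.4f}")
```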
P.S. You may also find the following two answers relevant and useful: this and this.
Best Answer
You are misinterpreting the p-value. It actually represents the probability of observing a certain effect or a stronger one (in your case, a correlation) if the null hypothesis, i.e. no correlation, is the correct one. So your results are perfectly consistent: the correlation coefficient is close to zero, and the p-value confirms that the little correlation apparently visible is plausibly a statistical fluke. Indeed, adding or removing data will change your correlation coefficient a bit, but the p-value tells you that in both instances the correlation is not actually there.
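A quick simulation illustrates this (a sketch with NumPy/SciPy; the seed and sample size are arbitrary): two independent samples have no true correlation, and the p-value tells you the small sample correlation you see anyway is plausibly chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Two independent samples: the true correlation is exactly zero.
x = rng.normal(size=200)
y = rng.normal(size=200)
r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.3f}")  # r near zero, p large
```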