As others have noted, there are several different meanings of "constructive".
I. Constructive proof in the sense of constructive mathematics
This meaning views an object as existing if we have a description of how to construct the object (though we don't actually need to carry the construction out), and there are several distinct constructive schools. Saying a proof is or is not constructive can be ambiguous without specifying which school of constructive mathematics we are talking about.
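A standard textbook illustration of the distinction (not specific to any one school, included here only as an example):

```latex
\textbf{Claim.} There exist irrational numbers $a, b$ such that $a^b$ is rational.

\textbf{Classical proof.} Either $\sqrt{2}^{\sqrt{2}}$ is rational, in which case
take $a = b = \sqrt{2}$; or it is irrational, in which case take
$a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$, since
\[
  a^b = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\,2} = 2.
\]
```

The proof establishes existence without telling us *which* pair works (it case-splits on a disjunction it never decides), so as it stands it yields no construction and is rejected by constructive schools.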
By the way, it can be the case that we can convert a non-constructive proof to a constructive one (Georg Kreisel's unwinding program or Ulrich Kohlenbach's proof mining program). That does not make the original proof constructive!
Note that algorithmic computability is just one of several constructive perspectives. For example, in intuitionism there are objects which are not algorithmically computable. One way to understand this is to remember that the Church-Turing thesis is not an axiom accepted by all constructivists; there can be constructions which are not algorithmic in the intuitionistic view.
II. Constructive in the sense of complexity theory
This is a more recent meaning. We say a proof of the existence of an object is constructive if it directly gives a method for efficiently computing/constructing the object. This is the more common meaning of the word in combinatorics these days, e.g. in Robin A. Moser's 2008 paper "A constructive proof of the Lovász Local Lemma".
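To make "directly gives a method" concrete, here is a minimal sketch (my own toy code, not from Moser's paper) of the resampling idea behind his constructive LLL proof, specialized to SAT: start from a random assignment and, while some clause is violated, resample all of that clause's variables. Moser's analysis shows this terminates quickly when the LLL condition holds; for instances violating that condition, termination is not guaranteed.

```python
import random

def moser_resample(clauses, num_vars, seed=0):
    """Moser-style resampling for SAT.
    clauses: list of clauses, each a list of signed 1-indexed literals,
    e.g. [1, -2] means (x1 OR NOT x2)."""
    rng = random.Random(seed)
    # start from a uniformly random assignment (index 0 unused)
    assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]

    def violated(clause):
        # a clause is violated when every one of its literals is false
        return all(assign[abs(lit)] != (lit > 0) for lit in clause)

    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assign[1:]  # satisfying assignment found
        # resample every variable of the violated clause uniformly at random
        for lit in bad:
            assign[abs(lit)] = rng.random() < 0.5
```

For example, `moser_resample([[1, 2], [-1, 2], [1, -2]], 2)` returns the unique satisfying assignment `[True, True]` of that tiny formula.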
Constructive is used in the sense of efficient algorithms in complexity theory, for example, constructive in Alexander Razborov and Steven Rudich's "Natural Proofs" paper means that the property used in the lower-bound proof is efficiently computable.
Note that the proof itself can be non-constructive in the sense of meaning I while remaining constructive in this sense. You can give an efficient algorithm to compute some object while the correctness and efficiency proofs are non-constructive. We don't have many interesting examples, but a good one is the Robertson-Seymour theorem. See also "Are there non-constructive algorithm existence proofs?"
III. Proof complexity perspective
This is kind of the intersection of the previous two, though I don't recall anyone referring to it as "constructive proof" (probably because they are aware of both previous meanings and don't want to confuse people further :).
Here not only must the object be efficiently computable, but the correctness and efficiency proofs must use only efficient concepts. The Robertson-Seymour theorem is an example of an efficient algorithm for which we don't have a proof using only efficient concepts. I could give more artificial examples to distinguish this from the meaning in the previous section, but I don't recall any other natural ones.
Best Answer
There is an example of this which is important in machine learning: finding linear maps with the restricted isometry property. Given a set $S$ of $m$ points in $\mathbb{R}^N$ (with $N$ very large) and $\varepsilon > 0$, the problem is to find a linear map $L \colon \mathbb{R}^N \to \mathbb{R}^n$ such that $n \ll N$ and:
$$(1 - \varepsilon)\left\| x - y \right\| \leq \left\| Lx - Ly \right\| \leq (1 + \varepsilon)\left\| x - y \right\|$$
for all $x, y \in S$. Verifying that a given linear map has this property is NP-hard, but the orthogonal projection onto a random $n$-dimensional subspace (or even an $n \times N$ matrix whose entries are Gaussian random variables) has this property with high probability. This is the content of the Johnson-Lindenstrauss Lemma. In the full statement of the lemma the number $n$ can be chosen in a way which depends only on $m$ and $\varepsilon$ (not $N$).
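A minimal sketch of the random construction (my own illustration; the target dimension formula and the constant 8 are an illustrative choice, not the lemma's sharp bound):

```python
import numpy as np

def jl_project(points, eps, seed=0):
    """Random Gaussian projection in the spirit of Johnson-Lindenstrauss.
    points: (m, N) array of m points in R^N.
    Returns an (m, n) array with n = O(log(m) / eps^2), independent of N."""
    m, N = points.shape
    # illustrative target dimension; the true constant depends on the proof used
    n = max(1, int(np.ceil(8 * np.log(m) / eps**2)))
    rng = np.random.default_rng(seed)
    # Gaussian entries scaled by 1/sqrt(n): with high probability all pairwise
    # distances in `points` are preserved up to a (1 +/- eps) factor
    L = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
    return points @ L.T
```

Note that the guarantee is only probabilistic: a given draw of `L` works with high probability, which is exactly the "hay in a haystack" situation, since certifying that a particular draw works is the hard part.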
See this paper for a more thorough discussion of the difficulties in finding more deterministic approaches to this specific problem. This paper also provides an attractive name for the general sort of phenomenon that you're asking about: "finding hay in a haystack" (attributed to Avi Wigderson).