Some elaboration on Dylan Moreland's comment is in order. Consider the gadget $\text{GL}_n(-)$. What sort of gadget is this, exactly? To every commutative ring $R$, it assigns a group $\text{GL}_n(R)$ of $n \times n$ invertible matrices over $R$. But there's more: to every morphism $R \to S$ of commutative rings, it assigns a morphism $\text{GL}_n(R) \to \text{GL}_n(S)$ in the obvious way, and this assignment satisfies the obvious compatibility conditions. That is, $\text{GL}_n(-)$ defines a functor
$$\text{GL}_n(-) : \text{CRing} \to \text{Grp}.$$
Composing this functor with the forgetful functor $\text{Grp} \to \text{Set}$ gives a functor which turns out to be representable by the ring
$$\mathbb{Z}[x_{ij}, y : 1 \le i, j \le n]\,\Big/\,\big(y \cdot \det{}_{1 \le i, j \le n}(x_{ij}) - 1\big).$$
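To make representability concrete: an $R$-point of this ring is an assignment of values in $R$ to the $x_{ij}$ together with a $y$ satisfying $y \cdot \det(x_{ij}) = 1$, which is exactly the data of an invertible matrix. A minimal Python sketch (not from the original answer) enumerating the $\mathbb{F}_2$-points for $n = 2$:

```python
from itertools import product

n, p = 2, 2  # GL_2 over the field Z/2

def det2(m):
    # determinant of a 2x2 matrix, reduced mod p
    return (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % p

# A ring map from Z[x_ij, y]/(y*det - 1) to Z/p is a choice of entries x_ij
# together with some y satisfying y*det(x_ij) = 1 (mod p); such a y exists
# precisely when det is a unit, i.e. when the matrix is invertible.
points = [m for m in product(product(range(p), repeat=n), repeat=n)
          if any((y * det2(m)) % p == 1 for y in range(p))]
print(len(points))  # 6 = |GL_2(F_2)| = (2^2 - 1)(2^2 - 2)
```
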
Now, this ring itself only defines a functor $\text{CRing} \to \text{Set}$. What extra structure do we need to recover the fact that we actually have a functor into $\text{Grp}$? Well, for every ring $R$ we have maps
$$e : 1 \to \text{GL}_n(R)$$
$$m : \text{GL}_n(R) \times \text{GL}_n(R) \to \text{GL}_n(R)$$
$$i : \text{GL}_n(R) \to \text{GL}_n(R)$$
satisfying various axioms coming from the ordinary group operations on $\text{GL}_n(R)$. These maps are all natural transformations of the corresponding functors, all of which are representable, so by the Yoneda lemma they come from morphisms in $\text{CRing}$ itself. These morphisms endow the ring above with the extra structure of a commutative Hopf algebra, which is equivalent to endowing its spectrum with the extra structure of a group object in the category of schemes, or an affine group scheme.
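For the record (a standard computation, sketched here rather than quoted from the answer), write $X = (x_{ij})$ for the matrix of generators. The Hopf structure maps dual to $m$, $e$, $i$ are
$$\Delta(x_{ij}) = \sum_{k=1}^{n} x_{ik} \otimes x_{kj}, \qquad \varepsilon(x_{ij}) = \delta_{ij}, \qquad S(x_{ij}) = y \cdot \operatorname{adj}(X)_{ij},$$
together with $\Delta(y) = y \otimes y$, $\varepsilon(y) = 1$, and $S(y) = \det X$, where $\operatorname{adj}(X)$ denotes the adjugate matrix. These encode matrix multiplication, the identity matrix, and matrix inversion, respectively.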
In other words, in a category with finite products, saying that an object $G$ has the property that $\text{Hom}(-, G)$ is endowed with a natural group structure in the ordinary set-theoretic sense is equivalent to saying that $G$ itself is endowed with a group structure in a category-theoretic sense. I discuss these ideas in some more detail, using a simpler group scheme, in this blog post.
As the discussion at the linked nLab page notes, the word action has several related meanings and applications in category theory. The phrase action of a group is more precise, but still admits two distinct yet equivalent framings.
In the first framing, we are given a group $G$ and an object $x$ of a category $C$, and what is defined is an action of $G$ on $x$. It is said to be "a representation of $G$ on $x$, that is a group homomorphism $\rho:G \to \operatorname{Aut}(x)$". Note that this allows the trivial homomorphism, which sends every element of $G$ to the identity arrow on $x$.
The ambient category $C$ might as well be reduced to just the one object $x$ in this context, because $\operatorname{Aut}(x)$ depends only on the invertible arrows from $x$ to itself.
The "more sophisticated" approach restates this idea by "treat[ing] the group $G$ as a category denoted $\mathbf B G$ with one object, say $*$." Here the arrows of the category $\mathbf B G$ are the elements of $G$, composition of arrows is given by group multiplication, and the identity arrow on $*$ corresponds to the identity element of $G$. This construction allows us to concisely define an action of the group $G$ as a functor $\rho:\mathbf B G \to C$.
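The functor description can be made concrete in code. A minimal Python sketch (an illustrative example, not from the original): take $G = \mathbb{Z}/3$, let $\rho$ act on a three-element set by cyclic rotation, and check functoriality directly.

```python
# The group Z/3 viewed as the one-object category BG: arrows are 0, 1, 2,
# and composition of arrows is addition mod 3.  A functor rho: BG -> Set
# sends the unique object * to a set and each arrow to a map of that set.
G = range(3)

def rho(g):
    # the arrow x -> x that the group element g is sent to: rotation by g
    return lambda s: (s + g) % 3

X = [0, 1, 2]  # the image rho(*)

# Functoriality: rho(g . h) = rho(g) o rho(h), and rho(identity) = identity.
composition_ok = all(rho((g + h) % 3)(s) == rho(g)(rho(h)(s))
                     for g in G for h in G for s in X)
identity_ok = all(rho(0)(s) == s for s in X)
print(composition_ok and identity_ok)  # True
```
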
Verifying the equivalence of these two approaches is straightforward once the claim is established that a group essentially amounts to a one-object category in which every arrow has an inverse, as previously shown on Math.SE.
The object $x$ of $C$ is uniquely identified as the image of the object $*$ of $\mathbf B G$, and likewise the identity arrow on $x$ must be the image of the identity arrow on $*$. Because the arrows of $\mathbf B G$ are the elements of the group $G$, the functor $\rho$ sends each of them to an arrow from $x$ to $x$ in $C$; and because the group elements are invertible, each of these image arrows has an inverse arrow in $C$, also from $x$ to $x$. Thus the image arrows all belong to $\operatorname{Aut}(x)$.
It's hard to be sure I'm not overlooking some important point that would cast doubt on the equivalence of the two approaches. A rigorous step-by-step demonstration strikes me as overkill, but if more detail is needed I'd be glad to supply it.
Best Answer
tl;dr In the first part I briefly discuss what the Yoneda lemma has to say about necessary and sufficient conditions. In the second part I restate your somewhat vague question rigorously, and explain why the Yoneda lemma (or general results from category theory) cannot be used to answer it. In the third part, I show that the set of all "necessary but not sufficient" conditions can in fact be used to pin down a proposition uniquely, although for reasons that don't have much to do with the Yoneda lemma. Do note that the question you ask in the title and the question you ask in the edited question body are different: I only answer the question you ask in the question body itself.
I. In the world of posets, people sometimes refer to the statements "a downward-closed subset $S$ of a poset contains the downset $\downarrow\!y$ of an element $y$ precisely if $y \in S$" and "an upward-closed subset $S$ of a poset contains the upset $\uparrow\!y$ of an element $y$ precisely if $y \in S$" as the Yoneda lemma (even though each is, strictly speaking, merely a consequence of the Yoneda lemma for poset categories, after choosing an appropriate functor). It follows from this Yoneda lemma that two elements $x,y$ of a poset have the same downset/upset if and only if $x=y$.
What does this have to do with logic? In what follows, let $P$ and $Q$ denote propositions of classical logic.
Logical propositions form a preorder in which $P \leq Q$ holds precisely if the implication $P \rightarrow Q$ is provable in classical propositional logic. We can turn this into a poset by taking a suitable quotient (identifying $P$ and $Q$ if $P \rightarrow Q$ and $Q \rightarrow P$ are both provable). One can show that the poset $(\mathbb{B}, \leq)$ obtained this way coincides with the so-called free Boolean algebra on countably many generators.
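Since classical propositional logic is sound and complete, the order $P \leq Q$ can be tested semantically: $P \rightarrow Q$ is provable iff every truth-value assignment making $P$ true also makes $Q$ true. A minimal Python sketch (the particular propositions are illustrative choices, not from the original):

```python
from itertools import product

def leq(P, Q, nvars):
    # P <= Q iff P -> Q is provable; by soundness and completeness this
    # holds iff every assignment satisfying P also satisfies Q
    return all(Q(*v) for v in product([False, True], repeat=nvars) if P(*v))

a_and_b = lambda a, b: a and b
a_or_b  = lambda a, b: a or b
print(leq(a_and_b, a_or_b, 2), leq(a_or_b, a_and_b, 2))  # True False
```
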
Notice that a proposition $Q$ is a necessary condition for $P$ precisely if $P \leq Q$ holds in $\mathbb{B}$. So the set of necessary conditions for $P$ is the upset $\uparrow\! P$. Applying the Yoneda lemma, we get that two propositions $P,Q$ have the same necessary conditions precisely if $P = Q$ in $\mathbb{B}$, i.e. precisely if $P$ and $Q$ are provably logically equivalent in classical propositional logic. So knowing all the necessary conditions of $P$ allows us to pin down $P$ uniquely.
II. You ask the following question: would knowing all the necessary conditions of $P$, except for those conditions which are also sufficient for $P$, still allow us to pin down $P$ uniquely?
Notice that, up to logical equivalence, the only condition which is both necessary and sufficient for $P$ is $P$ itself. So you're asking whether we can have two different elements $P \neq Q$ of $\mathbb{B}$ such that $\uparrow\!P \setminus \{P\} = \uparrow\!Q \setminus \{Q\}$.
Your question makes sense over an arbitrary poset as well, so I will answer this arbitrary-poset version first. If $x,y$ are elements of a poset, and we know that $\uparrow\!x = \uparrow\! y$, then we know that $x=y$. What if all we know is that $\uparrow\!x \setminus\{x\} = \uparrow\!y \setminus\{y\}$? Can we still conclude $x=y$? The answer to this question is negative. Consider the free Boolean algebra on one generator $x$, with the four elements $0$, $x$, $\neg x$, $1$. The Hasse diagram of this poset looks as follows:
$$\begin{array}{ccc} & 1 & \\ \diagup & & \diagdown \\ x & & \neg x \\ \diagdown & & \diagup \\ & 0 & \end{array}$$
We can see from the diagram that the elements $x$ and $\neg x$ satisfy $$\uparrow\!x \setminus \{x\} = \{1\} = \uparrow\!\neg x \setminus\{\neg x\},$$ but it's also clear that $x \neq \neg x$.
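This counterexample is small enough to verify mechanically. A minimal Python sketch (an illustrative encoding, not from the original), representing each element of the free Boolean algebra on one generator by its set of models:

```python
# Each proposition in one variable x is represented by the set of truth
# values of x that satisfy it: a subset of {False, True}.
elems = [frozenset(), frozenset({False}), frozenset({True}),
         frozenset({False, True})]
bot, not_x, x, top = elems  # 0, not-x, x, 1

def upset(p):
    # P <= Q iff every model of P is a model of Q (subset inclusion)
    return {q for q in elems if p <= q}

print(upset(x) - {x} == upset(not_x) - {not_x})  # True: both equal {1}
print(x == not_x)  # False
```
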
So in an arbitrary poset, the answer to your question is negative. Since the result fails in some posets, while the Yoneda lemma holds in every poset, the corresponding result about $\mathbb{B}$ cannot be a mere consequence of the Yoneda lemma.
III. So, can we have two different elements $P \neq Q$ of $\mathbb{B}$ such that $\uparrow\!P \setminus \{P\} = \uparrow\!Q \setminus \{Q\}$?
No, there are no such elements. To prove this, we need the following lemma:
Lemma 1. Let $P$ and $Q$ be propositions of classical logic, and let $x$ be a propositional variable that does not occur in either $P$ or $Q$. Then $P \rightarrow (Q \vee x)$ is provable precisely if $P \rightarrow Q$ is provable.
One direction is obvious: if $P \rightarrow Q$ is provable, then so is $P \rightarrow (Q \vee x)$ since $Q \rightarrow (Q \vee x)$ is provable and implication is transitive. I now give two very different proofs of the non-obvious direction.
Proof of Lemma 1. (v1) We argue by considering truth-value assignments. If $P \rightarrow (Q \vee x)$ is provable, then it evaluates to true under any truth-value assignment. So it evaluates to true if we set $x$ to $\mathrm{false}$. But $Q \vee \mathrm{false}$ is equivalent to $Q$, so this means that $P \rightarrow Q$ itself evaluates to true under any variable assignment.
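The truth-table argument can be checked mechanically on a concrete instance. A minimal Python sketch (the particular $P$ and $Q$ are illustrative choices, not from the original):

```python
from itertools import product

def tautology(f, nvars):
    # f is provable in classical propositional logic iff it evaluates to
    # True under every truth-value assignment (soundness + completeness)
    return all(f(*v) for v in product([False, True], repeat=nvars))

# Illustrative choices: P = a AND b, Q = a, with x a fresh variable.
P = lambda a, b: a and b
Q = lambda a, b: a

impl_with_x = lambda a, b, x: (not P(a, b)) or Q(a, b) or x  # P -> (Q v x)
impl        = lambda a, b, x: (not P(a, b)) or Q(a, b)       # P -> Q
print(tautology(impl_with_x, 3), tautology(impl, 3))  # True True
```

Setting $x = \mathrm{false}$ in the first implication recovers the second, which is the content of the lemma.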
Proof of Lemma 1. (v2) We argue proof-theoretically. Consider a proof of $P \rightarrow (Q \vee x)$ in the classical sequent calculus LK. Since $x$ is atomic, no logical rule is ever applied to $x$, so we can replace $x$ with any proposition, and this LK-proof will remain valid. So just replace $x$ with $Q$ to obtain an LK-proof of $P \rightarrow (Q \vee Q)$, which we can cut against the usual LK-proof of $(Q \vee Q) \rightarrow Q$ to obtain an LK-proof of $P \rightarrow Q$.
Using Lemma 1, we can prove that if $\uparrow\!P \setminus \{P\} = \uparrow\!Q \setminus \{Q\}$ in the poset $\mathbb{B}$, then in fact $P = Q$.
Assume we have $\uparrow\!P \setminus \{P\} = \uparrow\!Q \setminus \{Q\}$. Pick a propositional variable $x$ that does not occur in either $P$ or $Q$ (this requires working around a very minor technicality: we took a quotient when constructing $\mathbb{B}$, so the "set of variables that occur in $P$" is not really well-defined).
Notice that $P \vee x$ is a necessary, but not sufficient condition for $P$. By our assumption, $P \vee x$ is also a necessary, but not sufficient condition for $Q$. Consequently, $Q \rightarrow (P \vee x)$ is provable in classical logic. By Lemma 1, $Q \rightarrow P$ is also provable, so $Q \leq P$ holds in $\mathbb{B}$.
Since $Q \vee x$ is a necessary, but not sufficient condition for $Q$, we can repeat the argument above to get that $P \rightarrow (Q \vee x)$ is provable, and consequently that $P \leq Q$ also holds in $\mathbb{B}$.
Since $P \leq Q$ and $Q \leq P$ both hold in $\mathbb{B}$, we have that $P = Q$ as claimed.