If you have a sample space $\Omega$, then you can consider an event to be either a subset of $\Omega$, or a logical formula $\phi(x)$ parameterized by an element $x$ of $\Omega$ (here we might formally consider $x$ an $\Omega$-valued random variable, namely the identity $\Omega\to\Omega$).
These two views are equivalent: for $A\subseteq\Omega$ we can write either $P(A)$ or $P(x\in A)$, and for $\phi(x)$ we can write either $P(\phi(x))$ or $P(\{x\in\Omega\mid\phi(x)\})$. So it does not much matter which convention one uses, since it is easy to convert from one to the other.
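To make the correspondence concrete, here is a small sketch with a hypothetical finite sample space (two fair dice, uniform measure); the names `omega`, `prob`, and `phi` are mine, not from the answer:

```python
from fractions import Fraction

# Hypothetical finite sample space: all outcomes of rolling two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """P(A) for A a subset of omega, under the uniform measure."""
    return Fraction(len(event), len(omega))

# The event "the dice sum to 7", as a logical formula phi(x) ...
phi = lambda x: x[0] + x[1] == 7

# ... and the same event as a subset of omega: {x in Omega | phi(x)}.
A = {x for x in omega if phi(x)}

# Both views assign the same probability: P(phi(x)) = P({x | phi(x)}) = 1/6.
assert prob(A) == Fraction(1, 6)
```

The conversion in either direction is mechanical: a set `A` becomes the formula `x in A`, and a formula `phi` becomes the comprehension `{x for x in omega if phi(x)}`.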
The core definitions in textbooks will usually use the subset-of-$\Omega$ formalism, because we're familiar with using sets as mathematical objects everywhere in mathematics, whereas using a logical formula as a mathematical object that can be manipulated is somewhat of a specialty. Also, there are technical reasons (measure theory) why one might not want all subsets of $\Omega$ to be events, and such restrictions are much easier to express in a set-theoretic language than as restrictions on which logical formulas are allowed. On the other hand, it takes quite a bit of devious ingenuity to build a purely logical $\phi(x)$ that does not represent an event, so this is less of a problem in practice than it is in theory.
As you have found out, these reasons often do not stop textbook authors from using logical notation when it suits them, expecting readers to translate into the set-based formalism on their own.
As an additional abuse of notation, it is not uncommon to switch between the two viewpoints in the middle of a formula, as in $P(\neg A\land \neg B)$, where $A$ and $B$ may have been defined as sets but are used in a context that requires a logical formula. Here, too, you are expected to mentally insert the appropriate conversion, giving $P(\neg(x\in A)\land \neg(x \in B))$.
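The mental conversion can be spelled out directly. A minimal sketch, with a hypothetical single-die sample space and events `A`, `B` of my own choosing:

```python
# Hypothetical sample space: one fair die.
omega = set(range(1, 7))
A = {2, 4, 6}   # "the roll is even"
B = {5, 6}      # "the roll is at least 5"

def prob(event):
    """P(A) for A a subset of omega, under the uniform measure."""
    return len(event) / len(omega)

# P(not A and not B): read the set-defined A, B back as formulas about x,
# then collect the outcomes satisfying the combined formula.
event = {x for x in omega if not (x in A) and not (x in B)}

# The same event in pure set language: complement(A) intersect complement(B).
assert event == (omega - A) & (omega - B)
assert prob(event) == 2 / 6   # outcomes 1 and 3
```

This is exactly De Morgan-style bookkeeping: logical connectives on formulas correspond to set operations on the events they define.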
It depends mostly on culture, translation, and habit: once you get used to a notation, it is hard to give it up. For instance, the notations you gave for derivatives are both acceptable; if I remember correctly, one is due to Leibniz and the other to Newton. Both can be used, and you use whichever comes more easily. The overline notation is a different case: it is used in my country as well, and I would guess in most countries, but it is not especially popular; it mostly appears in mathematical olympiads. If you want to learn more, I suggest reading this article: https://www.stephenwolfram.com/publications/mathematical-notation-past-future/.
Best Answer
In Jech & Hrbacek's Introduction to Set Theory, the authors adopt this notation to avoid confusion between images of sets and images of elements contained in those sets. For instance, it is quite common to denote $f^{-1}(\{x\})$ by $f^{-1}(x)$; in the square-bracket notation we would write $f^{-1}[x]$, which is cleaner than $f^{-1}(\{x\})$ and not as abusive as $f^{-1}(x)$. Another reason is sets of sets: if we consider a set $A = \{A_1,\dots, A_n\}$ and a function $f:A\to B$, it would not be didactic to write $f(A')$ for some $A'\subseteq A$, since the elements of $A$ are also denoted by capital letters.
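The sets-of-sets ambiguity can be made concrete. A minimal sketch, with toy sets $A_1$, $A_2$ and a helper `image` of my own naming (playing the role of the bracket notation $f[\cdot]$):

```python
# A set of sets, as in the answer: A = {A1, A2}, with a function f on A.
A1 = frozenset({1, 2})
A2 = frozenset({3})
A = {A1, A2}

f = {A1: "a", A2: "b"}          # f : A -> B, modeled as a dict

def image(f, S):
    """f[S]: the image of a SUBSET S of the domain (bracket notation)."""
    return {f[x] for x in S}

# f(A1) is the value of f AT the element A1 of its domain ...
assert f[A1] == "a"
# ... while f[{A1}] is the image of the one-element subset {A1} of A.
assert image(f, {A1}) == {"a"}
# Without the bracket convention, "f(A1)" is ambiguous here, because A1 is
# both an element of the domain and (as a set) a candidate subset-like thing.
assert image(f, A) == {"a", "b"}
```

The brackets carry the type information: round parentheses take an element and return an element, square brackets take a subset and return a subset.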