All of your criticisms are equally valid when applied to.. well, anything. How does a football coach know what a "formation" is, and whether it really applies to football? How does a software engineer know the difference between a "program" and the instructions executed by a computer? How does a dog know that a "frisbee" is something that you can catch in your mouth? How does a general use little flags to signify troop positions, when they are really just flags?
None of this is to say that these are not interesting questions—I personally find them quite fascinating. But saying that they are reasons not to take something seriously is rather antisocial. If a lover stares into your eyes on a moonlit night and professes his or her adoration, do you start measuring oxytocin concentrations?
I do think that many mathematicians are a bit too attached to the Cantorian or Platonist views, and have incorrectly made mathematics out to be about things which are more than what they are—and that starts many arguments unnecessarily (for example, when someone claims that a theorem is true "in all possible universes", as if that meant anything). In my opinion, topos theory provides a better foundation for mathematics in this sense, because it is easier to understand the relationship between semantics, syntax, and the ever-elusive ontology. One speaks of this topos or that topos (or "topic", if you prefer), and never needs to worry about whether something "is" this or "is" that.
One relatively recent paper which I think has helped advance this more enlightened way of thinking is the quantum mechanics paper *What is a Thing?* (heavily inspired by Heidegger's philosophical work). There it is argued that set theory has not quite succeeded in providing the proper background for interpreting the world as it appears to us. The "state space" of physics professes to arrange possible worlds into a set, and runs headfirst into various paradoxes as we realize that our experimental equipment itself changes what is being measured, blurring our picture of how things really work and necessitating the continual introduction of new concepts and interpretations.
In short: perhaps truth, in the pragmatic sense, is more sheaf-like than set-like. But I digress.
If anybody tells you that you should take math seriously because it has figured out, once and for all, the correct way to divide the abstract from the concrete, and has firmly established the foundations for rational thought, then they are too caught up in their subject and you really shouldn't pay attention to them. And, if you really want, you can simply walk away, shaking your head in disappointment that mathematicians have failed to live up to their promise.
But, however seriously you take it, mathematics remains a powerful force in the world. While we're not particularly better than anybody else at explaining what we're talking about, what we are good at is bringing disparate things together under the same semantical umbrella—to a large extent, precisely because we are given the freedom not to explain ourselves. Measure theory, for example, has allowed us to shuttle insights between discrete phenomena and continuous phenomena. Algebra has, for hundreds of years, improved our speed of numerical reasoning by a billion-fold, by knowing when to compute and when to encode. Algebraic geometry has provided a language that is equally at home with basic arithmetic, encryption, signal processing, causality, and phylogenetic trees. And so people keep finding it useful, however many students will stand up angrily in our classes and insist that they don't think it could possibly be useful because something something.
In short, mathematics saves time for certain kinds of projects. If you don't do any of those projects, then of course you don't need to take it seriously. But it's under no obligation to explain itself, particularly not to somebody who thinks he is entitled to answers and "justice". If you find the foundations lacking, then we would love for you to come make a career of improving those foundations. If you are mostly complaining however, then pardon us while we focus on our other students.
I think you got the rough idea. We have symbolic expressions and objects, which are two different things. We cannot take the objects themselves and put them on paper, but we can write symbolic expressions that refer to objects, and the objects they refer to are called their values. We might have multiple symbolic expressions that refer to the same object, in which case we say that their values are equal, or that the expressions are equivalent. This is the model-theoretic view, where we have a model that specifies what the value of each expression is.
Based on this view, the properties of equality follow naturally. We write "$x = y$" to mean that the expressions "$x$" and "$y$" are equivalent (have equal value). Obviously $x = y$ iff $y = x$. Also, $x = y$ and $y = z$ clearly imply $x = z$.
If your meta-system (the system in which you reason about the meaning of equality in a formal system) is strong enough to let you talk about binary relations, then indeed equivalence of expressions is just an equivalence relation on expressions (which are just a certain subtype of finite strings). There is a one-to-one correspondence between the equivalence classes of expressions and the actual objects to which they refer.
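The model-theoretic picture can be sketched in code as a map from expressions to values. (A minimal sketch; the particular expressions and values below are made-up illustrations, not anything from the discussion above.)

```python
# A toy "model": assigns each symbolic expression (here, a string) its value.
model = {
    "2+2": 4,
    "4": 4,
    "2*2": 4,
    "5": 5,
}

def equivalent(e1, e2):
    """Two expressions are equivalent iff the model assigns them equal values."""
    return model[e1] == model[e2]

# Equivalence classes of expressions correspond one-to-one to values:
classes = {}
for expr, value in model.items():
    classes.setdefault(value, set()).add(expr)

print(equivalent("2+2", "4"))  # "2+2" and "4" refer to the same object
print(classes)                 # one equivalence class per distinct value
```

Each key of `classes` is an object (a value), and the set attached to it is the equivalence class of expressions referring to that object.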
But note that this equivalence relation may not exist within the formal system you are analyzing. For example, if you are analyzing equality between sets in ZFC, the equivalence relation on expressions (denoting sets) is not a relation in ZFC itself, unless you like a contradiction. This issue may not arise in other formal systems, but that is a different topic.
The $=$ sign is overloaded, so we need to be careful in which sense we're using it. The first is as a definition, when we give a name to some other thing. A familiar example is $y = mx + b$, which defines $y$ to be $mx + b$. Sometimes the notation $:=$ is used for definitions like this, and I prefer it.
The second is as a statement. I might make the claim that $2+2=4$ and want to evaluate the truth of the statement as true or false. I could also make the statement that $2+2=5$ and check whether that is true or false as well. Statements like this are often used in programming languages to make decisions about branching or looping, and such languages typically have some notation to distinguish the case when $=$ is being used as a definition from when it's being used as a logical statement, such as writing $2+2==4$ instead.
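In Python, for instance, the two uses look like this (a minimal sketch; the variable names are just for illustration):

```python
# '=' binds a name to a value (a definition); '==' tests the truth of a statement.
x = 2 + 2        # definition: x now names the value 4
print(x == 4)    # statement: True
print(x == 5)    # statement: False

# Branching on the truth value of a statement:
if x == 4:
    result = "four"
else:
    result = "not four"
```

The syntactic split between `=` and `==` is exactly the definition/statement distinction made above.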
Finally, and most importantly, is as an equivalence relation. To understand exactly what that means, we first need to know what a relation is, but thankfully it's pretty simple. A relation $R$ on a set $A$ is a subset $R \subset A \times A$. We say $a \in A$ is related to $b \in A$ when $(a,b) \in R$. This is a very general construction and can be used to create important relations like orderings. In a practical sense, evaluating a statement about relationships like $2+2=4$ can sometimes be done by checking whether the element $(2+2, 4)$ is in $R$. Let's look at what makes equivalence relations special.
An equivalence relation has three properties: reflexivity, symmetry, and transitivity. Reflexivity means that $(a,a) \in R$ for all $a \in A$, which just means $a=a$. Symmetry means that if $(a,b) \in R$ then $(b,a) \in R$; in other words, if $a=b$ then $b=a$. Finally there is transitivity: if $(a,b) \in R$ and $(b,c) \in R$ then $(a,c) \in R$, or equivalently, if $a=b$ and $b=c$ then $a=c$.
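When $A$ is finite and $R$ is given explicitly as a set of pairs, all three properties can be checked mechanically. A minimal Python sketch (the sets and the helper `is_equivalence` are hypothetical, just for illustration):

```python
def is_equivalence(A, R):
    """Check that R, a set of ordered pairs over A, is an equivalence relation."""
    reflexive = all((a, a) in R for a in A)
    symmetric = all((b, a) in R for (a, b) in R)
    transitive = all((a, d) in R
                     for (a, b) in R for (c, d) in R if b == c)
    return reflexive and symmetric and transitive

A = {0, 1, 2}
# Equality on A, written out as a set of ordered pairs:
R_eq = {(a, a) for a in A}
print(is_equivalence(A, R_eq))              # True
# Adding (0, 1) without (1, 0) breaks symmetry:
print(is_equivalence(A, R_eq | {(0, 1)}))   # False
```

Note that equality itself is just the diagonal relation $\{(a,a) : a \in A\}$, the smallest equivalence relation on $A$.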
This relation carries another structure, given by the fundamental theorem of equivalence relations: an equivalence relation on a set is the same thing as a partition of that set. A partition of a set $A$ is a collection of nonempty subsets $A_i \subset A$, indexed by some set $I$, such that $\bigcup_{i \in I} A_i = A$ and $A_i \cap A_k = \emptyset$ for all $i, k \in I$ with $i \neq k$. Basically, you split the set up into bins labeled $A_i$. I don't think it's an overstatement to say this is among the most important theorems in modern mathematics, so it's worth knowing. It has applications everywhere.
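To see the theorem in action on a small example, here is a hypothetical Python sketch using congruence mod 3 on $\{0, \dots, 8\}$: the equivalence classes are exactly the bins of a partition.

```python
# Congruence mod 3 on A = {0, ..., 8}, written out as a set of pairs.
A = set(range(9))
R = {(a, b) for a in A for b in A if a % 3 == b % 3}

def equivalence_class(a, R):
    """The set of all elements related to a under R."""
    return frozenset(b for (x, b) in R if x == a)

# The distinct equivalence classes form the partition:
partition = {equivalence_class(a, R) for a in A}
print(partition)  # three bins: residues 0, 1, and 2 mod 3

# The bins cover A and are pairwise disjoint:
assert set().union(*partition) == A
assert all(P == Q or not (P & Q) for P in partition for Q in partition)
```

The two assertions at the end are precisely the two defining conditions of a partition from the paragraph above.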