One of the most common and powerful means of showing that a decision problem is undecidable is, as in the case you mention, to show that the halting problem reduces to it. For example, this is the basis of undecidability for almost all of the answers listed in the MO question asking for attractive examples of undecidable problems.
But sometimes people are led astray by this fact, and begin erroneously to believe that a problem $A$ is undecidable if and only if the halting problem reduces to it. That is, to use your terminology, they are led to believe that the halting problem is the easiest undecidable problem. One sometimes sees this error among mathematicians, even very good ones, who first begin to consider decidability issues in their domain. The error is to assume that, because undecidability is so often proved by reducing the halting problem to the problem at hand, this is the only way undecidability arises. But this isn't true.
This issue and its resolution are central in the history of computability theory, connected with its birth and maturation as a subject. Specifically, Emil Post asked what is now known as Post's Problem: whether there are any computably enumerable Turing degrees strictly between the decidable ones and the halting problem. To use the jump notation, Post's problem asks whether there is a computably enumerable set $d$ with $0\lt_T d \lt_T 0'$. One could view Post's problem in your terms: he is asking whether the halting problem is easiest among all c.e. problems. A related version would ask whether it is the easiest undecidable problem of all.
The answer was given by Friedberg and Muchnik, who invented the priority argument method specifically to solve it, a method that has become fundamental, used in thousands of constructions in computability theory in order to reveal the nature and structure of the Turing degrees. The answer is that there are many such intermediate degrees! Indeed, there is a rich structure of c.e. degrees strictly below the halting problem. In fact, every countable partial order embeds into the hierarchy of degrees below the halting problem.
Thus, it is not correct to say that HALTING is the easiest undecidable problem. Any of the intermediate sets constructed by the priority method is strictly easier than HALTING, yet still undecidable.
This is not to dispute the technical claim you made about Rice's theorem, but only your interpretation of it. The properties you consider are not actually languages, but sets of languages, or rather, sets of Turing machine programs that respect the equivalence of deciding the same set. Thus, your properties are just a special case of the collection of all possible languages. The reason the halting problem reduces to these particular sets has to do with the fact that the halting problem does reduce to the problem of deciding whether two programs compute the same set. Indeed, this equivalence relation, making two programs equivalent when they compute the same function, is strictly harder than the halting problem; I believe it is a complete $\Pi^0_2$ set, which would make it equivalent to the double jump $0''$.
In general, for randomized classes complete problems tend to be either promise problems or approximation problems (which means they don't technically satisfy the conditions for being complete problems).
If you allow approximation problems, you can get complete problems in BPP. For example, for BPP you can ask: given a Turing machine $M$, approximate the probability that it accepts within $t$ steps on input $x$, where $t$ is polynomial in the size of the input. It's clear that you can approximate this probability with a BPP machine (using simulation), and it's also obvious that any BPP language can be reduced to this problem. Technically, this problem isn't in BPP since it's not a language (i.e., its output is not in $\{0,1\}$), so it's not BPP-complete.
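To make the simulation idea concrete, here is a minimal sketch of estimating an acceptance probability by independent repeated runs. The `run_once` callable is a hypothetical stand-in for "simulate $M$ on $x$ for $t$ steps and report accept/reject"; the toy machine and trial counts are illustrative assumptions, not part of the answer above.

```python
import random

def estimate_acceptance_probability(run_once, trials=20000, seed=0):
    """Estimate Pr[machine accepts] as the empirical fraction of accepting runs.

    `run_once(rng)` is a hypothetical stand-in for one independent
    simulation of M on x for t steps, returning True on acceptance.
    By a Chernoff bound, the estimate concentrates around the true
    probability as `trials` grows.
    """
    rng = random.Random(seed)
    accepts = sum(1 for _ in range(trials) if run_once(rng))
    return accepts / trials

# Toy "machine": accepts with true probability 0.75.
toy_machine = lambda rng: rng.random() < 0.75

p_hat = estimate_acceptance_probability(toy_machine)
```

Under the BPP promise (acceptance probability above $\frac{2}{3}$ or below $\frac{1}{3}$), polynomially many such trials suffice to decide which side holds with high confidence.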
You can turn this into a promise problem by imposing the "promise" that the acceptance probability either be greater than $\frac{2}{3}$ or less than $\frac{1}{3}$.
These complete promise problems or approximation problems play the same role that complete problems play for non-randomized languages, and they really deserve more respect from computer scientists. For quantum computing, the natural complete problems also tend to be approximation (or promise) problems, and they have been quite useful in the theory of quantum computing.
There is a natural promise problem from quantum computing (stoquastic Hamiltonian) which is MA-complete (and not trivially so). However, I don't believe there are any languages (non-promise problems with $\{0,1\}$ answers) known to be MA-complete.
This is a very general question and difficult to answer. The kind of approach people will use depends on many factors, such as the following:
When I tackle a new problem, I'm a huge fan of the common techniques used in the AI and OR communities, at least for a prototype solver, which will be my first step. Most of the time one can implement such a solver in a very short amount of time (see above: factor one), and it will perform not too badly (interesting example: link). These (nearly) problem-independent solving approaches include the following:
Some comments about using these solvers:
In theory, these approaches can handle any kind of NP-complete problem, but of course there will be some differences depending on the problem. Some risky, oversimplified general statements:
decision-problem vs. optimization-problem:
discrete vs. continuous search-space:
solution characteristics:
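As one concrete illustration of a (nearly) problem-independent prototype solver, here is a minimal simulated annealing sketch on a toy Max-Cut instance. The graph, cooling schedule, and parameters are illustrative assumptions, not anything stated above; a real solver would need tuning for the problem at hand.

```python
import math
import random

# Toy Max-Cut instance: a 4-cycle plus one diagonal (assumed for illustration).
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N = 4

def cut_value(assignment):
    # Number of edges whose endpoints lie on opposite sides of the partition.
    return sum(1 for u, v in EDGES if assignment[u] != assignment[v])

def simulated_annealing(steps=5000, seed=42):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(N)]   # random initial partition
    cur = cut_value(x)
    best, best_val = x[:], cur
    for t in range(steps):
        temp = max(1e-3, 1.0 - t / steps)       # simple linear cooling schedule
        i = rng.randrange(N)
        x[i] ^= 1                               # move: flip one vertex's side
        new = cut_value(x)
        # Accept improvements; accept worsening moves with probability
        # exp(delta / temp), which shrinks as the temperature cools.
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new
            if cur > best_val:
                best, best_val = x[:], cur
        else:
            x[i] ^= 1                           # undo the rejected move
    return best, best_val

best_assignment, best_val = simulated_annealing()
```

The same skeleton (state, neighborhood move, objective, acceptance rule) carries over to many discrete optimization problems, which is what makes these metaheuristics attractive as a first prototype.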
Alternatively, one can try to implement a problem-specific algorithm using common approaches such as:
While tackling these problems, one will learn that these different approaches are often connected to each other (mathematical programming and approximation algorithms) or can be combined (MIP + CP; CP + heuristics).
All in all: a difficult question!