I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as $\mathbb R^n$, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.
For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, say, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.
One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).
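To make the image interpretation concrete, here is a minimal sketch in Python of how distance and inner product acquire natural meanings for images-as-vectors (the tiny 2×2 "images" and their pixel values are invented purely for illustration):

```python
import math

# Two tiny 2x2 grayscale "images", flattened into 4-dimensional vectors.
# A megapixel image would be a point in a million-dimensional space in
# exactly the same way.
img_a = [0.0, 0.5, 0.5, 1.0]
img_b = [0.1, 0.5, 0.4, 0.9]

# Euclidean distance: how different the two images are, pixel by pixel.
distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)))

# Inner product: a crude measure of how strongly the images "overlap".
overlap = sum(a * b for a, b in zip(img_a, img_b))
```

Nothing here requires visualising four (or a million) dimensions directly; the geometric quantities are just sums over degrees of freedom.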
It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).
For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.
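This concentration near the boundary can be checked numerically; a quick sketch in Python (the dimensions and radii are chosen arbitrarily for illustration). The fraction of the unit ball's volume lying within radius $r$ is exactly $r^n$, and the law-of-large-numbers connection shows up in the fact that the norm of a standard Gaussian vector in $\mathbb R^n$ concentrates around $\sqrt n$:

```python
import math
import random

# Fraction of the unit ball's volume within radius r is r**n, so even
# r = 0.99 captures almost none of the volume in high dimensions.
for n in (2, 100, 1000):
    inner_fraction = 0.99 ** n
    print(f"dim {n}: fraction inside radius 0.99 = {inner_fraction:.2e}")

# Law-of-large-numbers view: ||x||^2 for a standard Gaussian vector is a
# sum of n independent trials of a chi-squared(1) variable with mean 1,
# so ||x|| / sqrt(n) concentrates near 1.
random.seed(0)
n = 10_000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
norm = math.sqrt(sum(v * v for v in x))
print(f"||x|| / sqrt(n) = {norm / math.sqrt(n):.3f}")  # close to 1
```

In dimension 1000 the inner ball of radius 0.99 holds less than a hundredth of a percent of the volume, which is the "mass lurks near the boundary" phenomenon in numbers.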
More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
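One concrete instance of this perspective: the projection of a uniformly random point on a high-dimensional unit sphere onto a fixed axis is approximately Gaussian with standard deviation $1/\sqrt n$, so almost all of the sphere lies very close to any fixed hyperplane through the centre. A short numerical sketch (the dimension is arbitrary):

```python
import math
import random

# Sample a uniformly random unit vector in R^n by normalising a standard
# Gaussian vector (valid because the Gaussian is rotation-invariant).
random.seed(1)
n = 10_000
g = [random.gauss(0.0, 1.0) for _ in range(n)]
norm = math.sqrt(sum(v * v for v in g))
u = [v / norm for v in g]

# Its projection onto a fixed coordinate axis is tiny: roughly Gaussian
# with standard deviation 1/sqrt(n).
projection = u[0]
print(f"projection = {projection:+.5f}, 1/sqrt(n) = {1 / math.sqrt(n):.5f}")
```

From the signal-processing viewpoint, this is why a single low-dimensional slice of a high-dimensional signal typically carries only a vanishing fraction of its energy.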
I really don't think anything good or useful could come out of such a standard...
I once sat through a plenary talk at a conference with a Fields medalist sitting right beside me. The speaker was very much aware of his (rather imposing!) presence---we were sitting in the front row---and after the first few minutes one could say he was talking directly to the medalist. Then, a good 30 minutes in, said medalist asked me very quietly, «do you know what some-concept-or-other is? I think I'm supposed to know about it...» I remember pondering at that moment the fact that I had recently taught that very material to undergraduates, and, to be honest, the incident considerably increased my respect for the guy.
In my experience, mathematicians will frequently argue (in general, not just in mathematics) by passing to an extreme case at the beginning. Non-mathematicians (again in my experience) sometimes object to such a mode of argument as invalid or irrelevant because such extreme hypotheticals are clearly unrealistic.
I think the mathematical habit of first setting all the parameters to their maximal or minimal values, understanding that case, and only then tuning them toward more realistic values (seeing how the solution or context changes with the parameters) can be genuinely valuable, even though the first step involves considering a situation that may be very unrealistic.