Is there an advantage to using higher dimensions (2D, 3D, etc) or should you just build x-1 single dimension classifiers and aggregate their predictions in some way?
This depends on whether your features are informative or not. Do you suspect that some features will not be useful in your classification task? To gain a better idea of your data, you can also try to compute pairwise correlation or mutual information between the response variable and each of your features.
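For instance, here is a minimal sketch using scikit-learn's `mutual_info_classif` on synthetic data (all names and dataset parameters are illustrative):

```python
# Sketch: rank features by mutual information with the response.
# X is (n_samples, n_features), y holds the class labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=4, random_state=0)

mi = mutual_info_classif(X, y, random_state=0)
for idx in np.argsort(mi)[::-1]:
    print(f"feature {idx}: MI = {mi[idx]:.3f}")
```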
To combine all (or a subset) of your features, you can try computing the L1 (Manhattan) or L2 (Euclidean) distance between the query point and each 'training' point as a starting point.
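As a quick sketch (synthetic data; `X_train` and `query` are illustrative names), both distances can be computed in one call with SciPy:

```python
# Sketch: L1 and L2 distances from one query point to all training points.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))   # 100 training points, 5 features
query = rng.normal(size=(1, 5))       # a single query point

d_l1 = cdist(query, X_train, metric="cityblock")[0]  # Manhattan (L1)
d_l2 = cdist(query, X_train, metric="euclidean")[0]  # Euclidean (L2)

# Indices of the k nearest training points under each metric
k = 5
print(np.argsort(d_l1)[:k])
print(np.argsort(d_l2)[:k])
```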
Building all of these classifiers from all potential combinations of the variables would be computationally expensive. How could I optimize this search to find the best kNN classifiers from that set?
This is the problem of feature subset selection. There is a lot of academic work in this area; see Guyon, I., & Elisseeff, A. (2003), "An Introduction to Variable and Feature Selection", Journal of Machine Learning Research, 3, 1157-1182, for a good overview.
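As one concrete (and hedged) illustration, scikit-learn's `SequentialFeatureSelector` implements a greedy forward search, which avoids enumerating every subset:

```python
# Sketch: greedy forward selection of features for a kNN classifier,
# one common heuristic from the feature-subset-selection literature.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=4, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
sfs = SequentialFeatureSelector(knn, n_features_to_select=4,
                                direction="forward", cv=5)
sfs.fit(X, y)
print("selected features:", sfs.get_support(indices=True))
```

Forward selection fits $O(d^2)$ candidate models for $d$ features instead of $2^d$, at the cost of possibly missing subsets whose features are only useful in combination.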
And, once I find a series of classifiers what's the best way to combine their output to a single prediction?
This will depend on whether the selected features are independent. If the features are independent, you can weight each feature by its mutual information (or some other measure of informativeness) with the response variable (whatever you are classifying on). If some features are dependent, then a single classification model will probably work best.
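A rough sketch of that weighting idea, assuming roughly independent features (one 1-D kNN per feature, with votes weighted by mutual information; entirely illustrative):

```python
# Sketch: combine one 1-D kNN classifier per feature, weighting each
# classifier's probability vote by that feature's mutual information with y.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

weights = mutual_info_classif(X_tr, y_tr, random_state=0)
n_classes = len(np.unique(y))

# Accumulate MI-weighted class probabilities across the 1-D classifiers
votes = np.zeros((len(X_te), n_classes))
for j in range(X.shape[1]):
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X_tr[:, [j]], y_tr)
    votes += weights[j] * clf.predict_proba(X_te[:, [j]])

y_pred = votes.argmax(axis=1)
print("accuracy:", (y_pred == y_te).mean())
```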
How do most implementations apply kNN to a more generalized learning?
By allowing the user to specify their own distance metric between points. kNN works well when an appropriate distance metric is used.
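For example, scikit-learn's `KNeighborsClassifier` accepts a callable metric (note that a callable forces brute-force search, so it is slow on large data); the weighted Manhattan distance below is just a placeholder:

```python
# Sketch: kNN with a user-supplied distance function in scikit-learn.
# The metric here (a weighted Manhattan distance) is purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

feature_weights = np.array([1.0, 0.5, 2.0, 1.0])  # hypothetical weights

def weighted_manhattan(a, b):
    return np.sum(feature_weights * np.abs(a - b))

clf = KNeighborsClassifier(n_neighbors=5, metric=weighted_manhattan)
clf.fit(X, y)
print(clf.predict(X[:3]))
```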
I expect you are talking about nominal categorical variables there? Ordinal variables with 100 levels are very strange. I have never seen a Likert scale with 100 nuances, or anything else that would warrant a 100-level ordinal variable. If you have ordinal variables with that many levels, investigate whether you can reasonably transform them into interval variables. That can be done when it is reasonable to assume the distances between any two adjacent levels are the same across the scale.
If I had only nominal categorical data, I would first look at tree-based models; that's where they naturally shine. With so many options within so few categorical variables, I would expect random forests to do better than single pruned trees. You can test both, though.
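If you do test both, a quick cross-validated comparison might look like this (synthetic stand-in data; the target rule is arbitrary):

```python
# Sketch: comparing a single depth-limited tree to a random forest on
# one-hot-encoded nominal features (synthetic data for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_cat = rng.integers(0, 20, size=(1000, 3))  # 3 nominal vars, 20 levels each
y = (X_cat[:, 0] % 3 == 0).astype(int)       # arbitrary target for the demo

# OneHotEncoder returns a sparse matrix; sklearn's trees accept it directly
X = OneHotEncoder().fit_transform(X_cat)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print("tree:  ", cross_val_score(tree, X, y, cv=5).mean())
print("forest:", cross_val_score(forest, X, y, cv=5).mean())
```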
Best Answer
A naive nearest neighbor implementation will have to compute the distances between your test example and every instance in the training set. This $O(n)$ process can be problematic if you have a lot of data.
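In code, the naive scan is just one distance computation per training point:

```python
# Sketch: a naive nearest-neighbor query that scans every training
# point, i.e. O(n) distance computations per query.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 3))
query = rng.normal(size=3)

dists = np.linalg.norm(X_train - query, axis=1)  # one distance per point
nearest = np.argmin(dists)
print(nearest, dists[nearest])
```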
One solution is to find a more efficient representation of the training data. "Space-partitioning" data structures organize points in a way that makes it possible to search through them efficiently. Using a $k$-d tree, one can find a point's nearest neighbor in $O(\log n)$ time instead, which is a substantial speed-up.
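For example, with SciPy's `cKDTree` (same data as the naive scan above):

```python
# Sketch: the same query using a k-d tree, which avoids the full
# linear scan for low-dimensional data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 3))
query = rng.normal(size=3)

tree = cKDTree(X_train)          # built once, up front
dist, idx = tree.query(query)    # nearest neighbor
print(idx, dist)

dists, idxs = tree.query(query, k=5)  # the 5 nearest neighbors
```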
There are also approximate nearest neighbor algorithms such as locality-sensitive hashing or the best-bin-first algorithm. These results are approximations: they don't always find the exact nearest neighbor, but they often find something very close to it (which is probably just as good for classification).
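As a toy sketch of the hashing idea (random hyperplanes, which strictly speaking approximate angular rather than Euclidean similarity; real LSH implementations use multiple hash tables to boost recall):

```python
# Sketch: a toy locality-sensitive hash using random hyperplanes
# (signs of random projections). Points with the same hash code land
# in the same bucket; only that bucket is scanned at query time.
# This is an approximation and can miss the true nearest neighbor.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 3))
query = rng.normal(size=3)

planes = rng.normal(size=(8, 3))  # 8 random hyperplanes -> 8-bit codes

def code(x):
    return tuple(planes @ x > 0)

buckets = defaultdict(list)
for i, x in enumerate(X_train):
    buckets[code(x)].append(i)

candidates = buckets.get(code(query), [])
if candidates:
    cand = np.array(candidates)
    dists = np.linalg.norm(X_train[cand] - query, axis=1)
    print("approximate NN:", cand[np.argmin(dists)])
```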
Finally, if you've got a relatively fixed training set, you could compute something like a Voronoi diagram that indicates the neighborhoods around each point. This could then be used as a look-up table for future queries.