Here are my thoughts on what could be going wrong:
Accuracy (what is being measured)
Perhaps your network is in fact doing well.
Let's consider binary classification. If the labels are split 50-50, then 50% accuracy means the model is no better than chance (flipping a coin). If the Bernoulli distribution is 80%-20% and the accuracy is 50%, then the model is worse than chance: guessing according to the class prior already gives 0.8^2 + 0.2^2 = 68% accuracy, and always predicting the majority class gives 80%.
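To make that baseline concrete, here is a minimal sketch (with made-up 80/20 labels and dummy features, not your data) using scikit-learn's DummyClassifier:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Made-up 80/20 imbalanced labels and dummy features.
rng = np.random.default_rng(0)
y = rng.choice([0, 1], size=1000, p=[0.8, 0.2])
X = rng.normal(size=(1000, 5))

# The trivial baseline always predicts the most frequent class.
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print("majority-class accuracy:", accuracy_score(y, baseline.predict(X)))  # ~0.80

# Any classifier scoring ~50% on these labels is doing worse than this baseline.
```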
You mentioned: "No matter what I try, I'm not seeing better than 20% accuracy when I add a hidden layer."
If the accuracy is 20%, just negate the output and you have 80% accuracy. Well done! (At least in the binary case.)
Not so fast!
I believe that in your case the accuracy is misleading.
This is a good read on the matter.
For classification, the AUC (area under the curve) is often used.
It's common to also examine the Receiver operating characteristic (ROC) and the confusion matrix.
For the multi-class case this becomes trickier. Here is an answer that I found. Ultimately, it involves a one-vs-rest or one-vs-one strategy; more on that here.
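If it helps, this is roughly how those metrics could be computed with scikit-learn (the labels and predicted probabilities below are made up, for a hypothetical 3-class problem):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Made-up ground truth and predicted class probabilities (3 classes).
y_true = np.array([0, 2, 1, 0, 2, 1, 0, 1])
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6],
                   [0.2, 0.5, 0.3],
                   [0.6, 0.3, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.3, 0.4, 0.3],
                   [0.5, 0.4, 0.1],
                   [0.2, 0.6, 0.2]])
y_pred = y_prob.argmax(axis=1)

# The confusion matrix shows which classes get mixed up with which.
print(confusion_matrix(y_true, y_pred))

# Multi-class AUC via the one-vs-rest strategy mentioned above.
print(roc_auc_score(y_true, y_prob, multi_class="ovr"))
```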
Pre-processing
Are the features scaled? Do they have the same bounds, e.g. [0, 1]?
Have you tried standardizing the features? This rescales each feature to zero mean and unit variance.
Perhaps normalization might help? Dividing each input vector by its norm places it on the unit circle (unit sphere in higher dimensions, for the L2 norm) and also bounds the features (but scaling should be performed first, otherwise features with much larger values will dominate the norm).
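As a sketch of those three options with scikit-learn (the toy matrix is just for illustration; note the order of the steps):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer

# Toy features on very different scales.
X = np.array([[1.0, 200.0, 0.3],
              [2.0, 150.0, 0.1],
              [3.0, 400.0, 0.9]])

X_scaled = MinMaxScaler().fit_transform(X)           # each feature squeezed into [0, 1]
X_std = StandardScaler().fit_transform(X)            # each feature: zero mean, unit variance
X_norm = Normalizer(norm="l2").fit_transform(X_std)  # each sample placed on the unit sphere

# Scale/standardize per feature first, then (optionally) normalize per sample;
# otherwise the large-valued feature dominates the norm.
```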
Training
As to the learning rate and momentum, if you're not in a big hurry, I would just set a low learning rate and the algorithm will converge better (although slower). This is valid for stochastic gradient descent where examples are shown at random (are you shuffling the data?).
From your code I can't tell whether that is happening.
Are you making only one pass through the training data? With SGD, multiple passes (epochs) are usually needed. Perhaps try smaller batches? Have you tried weight decay as a regularization method?
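Here is a minimal Keras sketch of those training knobs (the data, layer sizes, and hyperparameter values are placeholders, not taken from your code): a low learning rate with momentum, weight decay via an L2 penalty, small shuffled batches, and many epochs.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 32 features, 5 classes.
X = np.random.randn(1000, 32).astype("float32")
y = np.random.randint(0, 5, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # weight decay
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Low learning rate plus momentum: slower but usually steadier convergence.
opt = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Many passes over the data, small batches, reshuffled every epoch.
history = model.fit(X, y, epochs=50, batch_size=32, shuffle=True)
```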
Architecture
Cross-entropy as loss function: check.
Softmax at outputs: check.
It might be a long shot at this point, but have you tried projecting to a higher dimension in the first hidden layer and then collapsing to a lower-dimensional space in the next one or two hidden layers?
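As a rough sketch of that wide-then-narrow idea (all layer widths and the number of classes here are made up):

```python
import tensorflow as tf

# Project up first, then collapse back down; 5 output classes assumed.
wide_then_narrow = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),  # higher-dimensional projection
    tf.keras.layers.Dense(64, activation="relu"),   # collapse to a lower-dimensional space
    tf.keras.layers.Dense(5, activation="softmax"), # softmax output, as you already have
])
wide_then_narrow.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
```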
There is also the cost in your output; I wonder if it could be scaled to make more sense. I would try plotting the evolution of the cost (log loss here) over training and see whether it fluctuates or how steep it is. Your network might be stuck in a local minimum or on a plateau. Or it might be doing very well, in which case double-check the metric.
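For example, something like this to eyeball the loss curve (the numbers here are made up; with Keras you would plot history.history["loss"] from the fit above):

```python
import matplotlib.pyplot as plt

# Made-up per-epoch log-loss values; replace with your own recorded losses.
losses = [1.61, 1.35, 1.20, 1.12, 1.11, 1.10, 1.10, 1.10]

plt.plot(range(1, len(losses) + 1), losses, marker="o")
plt.xlabel("epoch")
plt.ylabel("training log loss")
plt.title("A curve that flattens early may indicate a plateau")
plt.show()
```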
Hope this helped or generated some new ideas.
EDIT:
Example of how normalization (L2) can make things worse when features are not scaled relative to the other features. Plots for one sample:
In the left image, the blue line is a vector of 10 values generated randomly with zero mean and a standard deviation of 1. In the right image I added an 'outlier', i.e. an out-of-scale feature (no. 6), whose value I set to 10. Clearly out of scale. When we normalize the out-of-scale vector, all the other features become very close to 0, as can be seen in the orange line on the right.
Standardizing the data might be a good thing to do before anything else in this case. Try plotting some histograms of the features or box plots.
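To reproduce the effect numerically (a small numpy sketch, not your actual data):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=10)    # 10 features, zero mean, unit std
x_outlier = x.copy()
x_outlier[5] = 10.0                  # feature no. 6 is now way out of scale

# L2-normalize both vectors (divide by the Euclidean norm).
print(x / np.linalg.norm(x))                  # values keep a reasonable spread
print(x_outlier / np.linalg.norm(x_outlier))  # everything except feature 6 is squashed toward 0
```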
You mentioned you are normalizing the vectors to sum up to 1 and now it works better with 10.
That means you are dividing by the 1-norm, sum(abs(X)), instead of the 2-norm (Euclidean), sum(abs(X).^2)^(1/2). L1 normalization generates sparser vectors. Look at the figure below, where each axis is one feature, so this is a two-dimensional space; however, it generalizes to an arbitrary number of dimensions.
Normalizing effectively places each vector on the edge of one of these shapes: for L1 it will lie somewhere on the diamond, for L2 on the circle. When the vector touches an axis, the other coordinate is zero.
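A quick way to see the two normalizations side by side (just a sketch on a made-up vector):

```python
import numpy as np

x = np.array([3.0, -1.0, 0.5, 0.0])

x_l1 = x / np.sum(np.abs(x))       # 1-norm: absolute values sum to 1 (the diamond)
x_l2 = x / np.sqrt(np.sum(x**2))   # 2-norm: Euclidean length 1 (the circle/sphere)

print(x_l1, np.sum(np.abs(x_l1)))      # -> 1.0
print(x_l2, np.sqrt(np.sum(x_l2**2)))  # -> 1.0
```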
Best Answer
For your first question: I do not think it is normal that your accuracy can vary from 80% to 20%. How many epochs are you using? And what learning rate? It is possible that the loss you are trying to minimize gets stuck in a local minimum.
You can set the seed so that the randomized batches are the same between your different sessions: if the accuracy still changes, that would show there is a problem somewhere (and not something about a local minimum).
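For example, assuming a TensorFlow/Keras setup (adapt to whichever framework you actually use), fixing the seeds before training makes the shuffled batches and weight initialization reproducible across sessions:

```python
import random
import numpy as np
import tensorflow as tf

SEED = 0
random.seed(SEED)         # Python's own RNG
np.random.seed(SEED)      # numpy (data shuffling, etc.)
tf.random.set_seed(SEED)  # TensorFlow weight initialization and shuffling
```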
For the second question: you can use early stopping. You separate your data into a training set and a test (validation) set. At every epoch, the performance on the held-out data is computed. If the performance does not improve after a certain number of epochs, training stops. This prevents overfitting and saves training time.
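With Keras, for instance, this is a single callback (a sketch; the monitored metric and the patience are up to you):

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch the held-out loss
    patience=10,                # stop if it has not improved for 10 epochs
    restore_best_weights=True,  # roll back to the best epoch
)

# model.fit(X_train, y_train, validation_split=0.2, epochs=500, callbacks=[early_stop])
```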
For the third question: yes, the accuracy can change a lot with hyperparameter tuning. I do not think that reducing the number of features will change much, because NNs select the "good" features by themselves. Concerning the batch size, it changes how your loss decreases: if you choose a small batch size, the gradient estimate will have high variance, so the loss can "go in the wrong direction". If you choose a large batch size, the computation time per update will be higher.
I advise you to plot the evolution of the training loss against the number of epochs to see if it gets stuck in a local minimum. How did you fine-tune your learning rate? You should also check the evolution of the loss with different learning rates.