Solved – Training a neural network with uniform random inputs

neural networks

I'm playing with a little prediction project using a multi-layer perceptron (MLP) trained with robust backpropagation. I have a variety of variables that correlate with the single output I'm attempting to predict.

On a whim, I tried feeding the neural network a single input of uniform random values (scaled to [-1, 1]). The output surprised me.

A simple plot of the target values vs. the predicted values shows that the neural network learned the data to a surprising degree.

The images below show what I'm observing. The scale of the response makes the plots hard to read, but it should be clear that the training and test data perform similarly. I can understand both results having the same distribution and scale, but I cannot explain why the test data doesn't show more randomness in its curve.

The neural network had 125 neurons in a single hidden layer and was trained for 800 iterations. If anything, I would have expected overfitting to the training data, which should have made the randomness in the test data's response even more pronounced.
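To make the setup concrete, here is a minimal sketch of the experiment (not my actual code: I'm assuming scikit-learn's MLPRegressor and a made-up Gaussian target, since robust backpropagation isn't available there and my data isn't shown):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical target: any real-valued series works for this experiment.
y = rng.normal(loc=10.0, scale=3.0, size=1000)

# The single "feature" is pure uniform noise scaled to [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(1000, 1))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# 125 neurons in one hidden layer, 800 training iterations, as described above.
mlp = MLPRegressor(hidden_layer_sizes=(125,), max_iter=800, random_state=0)
mlp.fit(X_train, y_train)

# Plot predictions against sorted targets, mirroring the figures below.
order = np.argsort(y_test)
plt.plot(y_test[order], label="sorted targets")
plt.plot(mlp.predict(X_test)[order], label="predictions")
plt.legend()
plt.show()
```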

I had hoped to use this as a baseline metric to gauge the meaningfulness of other variables in the data set, though I'm not sure how to proceed from here.

I'm sure I've stumbled upon a well-documented phenomenon or technique here… Any thoughts?

[Figure 1: training predictions plotted against sorted targets, for the network trained on a single uniform random input]

[Figure 2: testing predictions plotted against sorted targets, for the same network]

Best Answer

It would appear there's something wrong with my current methodology. I haven't figured out what, exactly, but implementing k-fold cross-validation immediately demonstrated this: with a single random input, no meaningful learning occurs in the network.
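A minimal sketch of that check (again assuming scikit-learn and the hypothetical data from the sketch above; the five-fold split is my assumption, since the fold count isn't recorded here):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=3.0, size=1000)  # hypothetical targets
X = rng.uniform(-1.0, 1.0, size=(1000, 1))      # single uniform-noise input

# R^2 across five folds; scores near zero (or negative) mean the network
# predicts no better than a constant, i.e. no meaningful learning.
model = MLPRegressor(hidden_layer_sizes=(125,), max_iter=800, random_state=0)
scores = cross_val_score(
    model, X, y,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="r2",
)
print(scores.round(3))
```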

Back to the drawing board....
