Thanks, everyone, for the input. Some of it made me look at this from a different angle. I know this is not a great use for a NN, but now I'm just curious whether I can do it or not.
I revamped the code and I think I'm on to something. The next step is to build a quick prototype robot that can actually rotate (unless someone points out some critical mistake).
The code below takes the four "stand-in" input values from the LDRs and converts them to decimals. Then it passes those to the hidden layer, which has pseudo-randomly generated weights. The value at each neuron in the hidden layer has a sigmoid function applied to it, so it ends up between 0 and 1. After that, the hidden layer is passed to 2 output neurons and another sigmoid is applied. The MC then decides which output is bigger and checks the NN's answer by summing inputs 1 & 3 (left side) and inputs 2 & 4 (right side).
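In rough Arduino-style code, the forward pass looks something like this (the hidden-layer size, array names, and starting weight range here are just placeholders for illustration, not the actual code):

```cpp
#include <math.h>

const int NUM_INPUTS  = 4;  // one per LDR
const int NUM_HIDDEN  = 4;  // assumed hidden-layer size
const int NUM_OUTPUTS = 2;  // one per motor

float hiddenWeights[NUM_HIDDEN][NUM_INPUTS];
float outputWeights[NUM_OUTPUTS][NUM_HIDDEN];
float hidden[NUM_HIDDEN];
float outputs[NUM_OUTPUTS];

// Squashes any value into the range (0, 1).
float sigmoid(float x) {
  return 1.0 / (1.0 + exp(-x));
}

// Pseudo-random starting weights between -0.5 and 0.5 (assumed range).
void randomizeWeights() {
  for (int h = 0; h < NUM_HIDDEN; h++)
    for (int i = 0; i < NUM_INPUTS; i++)
      hiddenWeights[h][i] = random(-50, 51) / 100.0;
  for (int o = 0; o < NUM_OUTPUTS; o++)
    for (int h = 0; h < NUM_HIDDEN; h++)
      outputWeights[o][h] = random(-50, 51) / 100.0;
}

// One forward pass. inputs[] holds the four LDR readings already
// converted to decimals, e.g. analogRead(pin) / 1023.0.
void forward(float inputs[]) {
  for (int h = 0; h < NUM_HIDDEN; h++) {
    float sum = 0.0;
    for (int i = 0; i < NUM_INPUTS; i++)
      sum += hiddenWeights[h][i] * inputs[i];
    hidden[h] = sigmoid(sum);
  }
  for (int o = 0; o < NUM_OUTPUTS; o++) {
    float sum = 0.0;
    for (int h = 0; h < NUM_HIDDEN; h++)
      sum += outputWeights[o][h] * hidden[h];
    outputs[o] = sigmoid(sum);
  }
}
```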
If output 1 is larger, then inputs 2 & 4 (summed) should also be larger. If this checks out, the MC activates the motor associated with output 1, delays for a time proportional to the difference between output 1 and output 2, then deactivates the motor. (That difference hopefully gets smaller as the robot gets closer to facing the light source.)
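The decision step I have in mind looks roughly like this, continuing from the code above (the pin numbers, the milliseconds scale factor on the delay, and which physical motor counts as "motor 1" are all placeholders):

```cpp
const int MOTOR1_PIN  = 5;     // placeholder pins for the two motors
const int MOTOR2_PIN  = 6;
const int MS_PER_UNIT = 1000;  // assumed scale: output gap of 1.0 -> 1000 ms pulse

// sum13 = inputs 1 + 3 (left side), sum24 = inputs 2 + 4 (right side).
void act(float sum13, float sum24) {
  if (outputs[0] > outputs[1] && sum24 > sum13) {
    // NN answer checks out: pulse motor 1 for a time
    // proportional to the gap between the two outputs.
    digitalWrite(MOTOR1_PIN, HIGH);
    delay((unsigned long)((outputs[0] - outputs[1]) * MS_PER_UNIT));
    digitalWrite(MOTOR1_PIN, LOW);
  } else if (outputs[1] > outputs[0] && sum13 > sum24) {
    // Same check in the other direction for motor 2.
    digitalWrite(MOTOR2_PIN, HIGH);
    delay((unsigned long)((outputs[1] - outputs[0]) * MS_PER_UNIT));
    digitalWrite(MOTOR2_PIN, LOW);
  } else {
    // NN answer disagrees with the LDR sums: run a training cycle (below).
    train();
  }
}
```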
If output 1 is larger and inputs 2 & 4 (summed) are smaller, then the NN goes into a training cycle. In the training cycle, the difference between the outputs is set as the error. Once that is done, the MC checks whether what it did on the previous training cycle reduced the error or not. If the error was reduced, it picks a random weight from the network and adds 0.1. If the error got larger, it subtracts 0.2 from the same weight it changed on the previous cycle.
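And the training cycle, continuing from the same arrays (the flat pointer list is just bookkeeping so one weight can be picked at random; lastError and lastWeight track what the previous cycle did; the setup/loop at the end show how the pieces would fit together, with the LDRs assumed on A0-A3):

```cpp
const int TOTAL_WEIGHTS = NUM_HIDDEN * NUM_INPUTS + NUM_OUTPUTS * NUM_HIDDEN;
float* allWeights[TOTAL_WEIGHTS];

// Collect a pointer to every weight in the network.
void buildWeightList() {
  int n = 0;
  for (int h = 0; h < NUM_HIDDEN; h++)
    for (int i = 0; i < NUM_INPUTS; i++)
      allWeights[n++] = &hiddenWeights[h][i];
  for (int o = 0; o < NUM_OUTPUTS; o++)
    for (int h = 0; h < NUM_HIDDEN; h++)
      allWeights[n++] = &outputWeights[o][h];
}

float  lastError  = 1.0e9;  // error from the previous training cycle
float* lastWeight = 0;      // the weight changed on the previous cycle

// One training cycle: the gap between the two outputs is the error.
void train() {
  float error = fabs(outputs[0] - outputs[1]);
  if (lastWeight == 0 || error < lastError) {
    // Previous tweak helped (or this is the first cycle):
    // pick a new random weight and add 0.1.
    lastWeight = allWeights[random(TOTAL_WEIGHTS)];
    *lastWeight += 0.1;
  } else {
    // Previous tweak made the error larger: subtract 0.2 from that
    // same weight, undoing the +0.1 and stepping 0.1 the other way.
    *lastWeight -= 0.2;
  }
  lastError = error;
}

void setup() {
  pinMode(MOTOR1_PIN, OUTPUT);
  pinMode(MOTOR2_PIN, OUTPUT);
  randomizeWeights();
  buildWeightList();
}

void loop() {
  float in[NUM_INPUTS];
  for (int i = 0; i < NUM_INPUTS; i++)
    in[i] = analogRead(A0 + i) / 1023.0;  // assumed LDR pins A0-A3
  forward(in);
  act(in[0] + in[2], in[1] + in[3]);      // left sum, right sum
}
```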
The output check also works the other way (if output 2 is larger, then inputs 1 & 3 should be larger, and so on).