Basically, what was coded is the simplest form of a perceptron artificial neuron (although in the example the bias value is -3; to be closer to the description on Wikipedia, "n" should be something like n = ((-1·p1) + (-2·p2)) + 3. There's nothing technically wrong with the way it's presented, though).
Still, this is the simplest version; a real perceptron capable of doing anything useful would need many more inputs, and the weighted sum would be computed over an array of inputs and weights rather than written out term by term.
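To illustrate what I mean, here's a rough sketch of an array-based perceptron in Python. The weight and bias values are just illustrative (I've reused the -1, -2 weights and -3 bias from the discussion above), not taken from the original code:

```python
def perceptron(inputs, weights, bias):
    """Single perceptron neuron over arbitrarily many inputs."""
    # Weighted sum: n = w1*p1 + w2*p2 + ... + bias
    n = sum(w * p for w, p in zip(weights, inputs)) + bias
    # Hard-limit activation: fire (1) if n >= 0, else 0
    return 1 if n >= 0 else 0

# Two-input case as discussed above (weights -1 and -2, bias -3)
print(perceptron([1, 1], [-1, -2], -3))  # n = -1 - 2 - 3 = -6, prints 0

# The same function scales to any number of inputs:
print(perceptron([1, 0, 1, 1], [0.5, -1, 2, 1], -2))  # n = 1.5, prints 1
```

The point is that once the sum is a loop over arrays, adding inputs is just a matter of extending the two lists.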
I'm not really sure what the point of this posting was, though, as anyone with a passing familiarity with neural networks could devise something like this; a more interesting demonstration would be a 3-layer ANN with back-propagation or some other self-learning mechanism.
Perhaps in the form of a line follower or similar small-scale robot...
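For anyone curious what that would look like, here's a minimal sketch of a 2-3-1 network (input, hidden, output layers) trained by back-propagation on XOR, the classic problem a single perceptron can't solve. This is plain-Python illustration code, not anything from the original posting; the layer sizes, learning rate, and epoch count are arbitrary choices:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: 3 neurons, each with 2 input weights + bias.
# Output layer: 1 neuron with 3 hidden weights + bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_o = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(3)) + w_o[3])
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

for epoch in range(10000):
    for x, t in data:
        h, o = forward(x)
        # Output delta (sigmoid derivative is o * (1 - o))
        d_o = (t - o) * o * (1 - o)
        # Back-propagate to hidden deltas
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(3)]
        # Update output weights and bias
        for i in range(3):
            w_o[i] += lr * d_o * h[i]
        w_o[3] += lr * d_o
        # Update hidden weights and biases
        for i in range(3):
            w_h[i][0] += lr * d_h[i] * x[0]
            w_h[i][1] += lr * d_h[i] * x[1]
            w_h[i][2] += lr * d_h[i]

for x, t in data:
    print(x, "->", forward(x)[1])
```

The same update rule would sit at the heart of a line-follower controller; the only change is that the inputs become sensor readings and the output drives the motors.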