Robot (self-learning walking)

My robot:

The physical robot keeps in memory a virtual (mathematical) model of itself and a virtual (mathematical) model of the surrounding environment. Each time the robot is switched on, its virtual model learns to move in the virtual environment. The virtual model uses a regulator, which keeps the search for a solution fast even with a relatively small number of samples. As in biological systems, walking is not the goal of the virtual model but a tool for achieving a goal. Depending on the preset parameters and constraints, different solutions to the task assigned to the virtual model are possible. Initially, the outcome of the search is unknown.
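The idea of a virtual model searching for a gait in a virtual environment can be sketched roughly as follows. This is only an illustration, not the author's actual implementation: the gait representation, the scoring function, and the hill-climbing search are all assumptions standing in for the real physics model and regulator.

```python
import random

# Hypothetical sketch: a gait is a vector of joint offsets; the "virtual
# environment" scores how well a gait moves the virtual robot; a simple
# hill-climbing search refines the gait from a small number of samples.

def simulate_score(gait):
    # Stand-in for the virtual environment: it rewards gaits close to a
    # made-up ideal joint pattern. A real model would run the physics.
    ideal = [0.3, -0.1, 0.5, 0.2]
    return -sum((g - i) ** 2 for g, i in zip(gait, ideal))

def learn_gait(steps=200, seed=0):
    rng = random.Random(seed)
    best = [0.0] * 4                       # initial gait: no movement
    best_score = simulate_score(best)
    for _ in range(steps):
        # Perturb the current best gait and keep only improvements.
        candidate = [g + rng.gauss(0, 0.1) for g in best]
        score = simulate_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

gait, score = learn_gait()
```

Because only improvements are accepted, the learned gait is never worse than the starting one, and a few hundred samples are typically enough for a low-dimensional search like this.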

The physical robot periodically receives the current best movement solution from the virtual model and executes it. Alternatively, execution can be deferred until the virtual model has obtained an acceptable result. The algorithm mimics the way biological creatures predict the results of their actions before performing them.
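The second variant, where the physical robot acts only once the virtual model's result is acceptable, could look like the following sketch. The class and threshold here are illustrative assumptions, not the real interface between the two models.

```python
# Hypothetical transfer step: the physical robot polls the virtual model
# and executes a gait only once the simulated score crosses an
# acceptance threshold (all names and values here are made up).

ACCEPTABLE_SCORE = -0.05

class VirtualModel:
    def __init__(self):
        self.best_gait = None
        self.best_score = float("-inf")

    def update(self, gait, score):
        # Record a newly simulated gait if it beats the current best.
        if score > self.best_score:
            self.best_gait, self.best_score = gait, score

    def solution_ready(self):
        return self.best_score >= ACCEPTABLE_SCORE

def physical_step(model, execute):
    # Execute on the physical robot only after the virtual model
    # reports an acceptable result; otherwise keep waiting.
    if model.solution_ready():
        execute(model.best_gait)
        return True
    return False
```

Until the threshold is reached, `physical_step` returns `False` and the robot stays put while the virtual model keeps searching in the background.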

A 16 MHz processor is used at present. A faster processor would make the search for a solution feasible in real time, allowing the robot to move under changing environmental conditions (cross-country terrain, etc.).

Storing learned solutions in non-volatile memory will speed up the search when similar conditions recur. Exchanging information (experience) between individual robots in real time would make learning faster than in biological systems.
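Reusing stored solutions and merging experience between robots could be sketched like this. The cache keys, the scored-solution tuples, and the merge rule are assumptions for illustration only.

```python
# Hypothetical experience store: solutions are cached by a key
# describing the terrain/conditions, so a robot facing familiar
# conditions can skip the search, and two robots can merge their
# caches by keeping the better-scoring solution per condition.

def merge_experience(own, other):
    # own/other: dict mapping condition key -> (gait, score).
    merged = dict(own)
    for key, (gait, score) in other.items():
        if key not in merged or score > merged[key][1]:
            merged[key] = (gait, score)
    return merged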

A prototype of the robot's virtual model (PC version):

Very interesting. How did you implement these virtual models? What does that look like in code?