what is an artificial intelligence robot?
Well - let's look at a couple of definitions, shall we?
From http://en.wikipedia.org/wiki/Artificial_intelligence:
"Artificial intelligence (AI) is the intelligence exhibited by machines or software."
"John McCarthy, who coined the term in 1955, defines it as 'the science and engineering of making intelligent machines'..."
"The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects."
Ok - so basically, for a system to exhibit "artificial intelligence", it needs to be a system which exhibits "intelligence" through the application of "reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects" - those are the basics. If your system does not do that, it cannot be said to embody "artificial intelligence".
From http://en.wikipedia.org/wiki/Robot:
"A robot is a mechanical...agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry."
So a robot is a machine guided by a computer (generally, but not always) running a stored program that controls it.
So - what would be an "artificial intelligence robot"? Well, I would say (mashing up these definitions):
"...is an electro-mechanical agent, guided by a computer program and/or electronic circuitry which exhibits intelligence via the application of reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects..."
my friends say that it is a robot which will act according to human interaction, with the knowledge of self programming
Well - not "according to human interaction", but rather "separate or without human interaction" (maybe something got lost in translation, though); in other words, such a robot does not depend upon human interaction (with the exception of turning it on) to be able to navigate, explore, map, understand, interact with, etc - it's environment. In other words, it is an intelligent agent that exists and is aware of it's environment; it doesn't need direction or control from a human to make decisions about how to navigate or otherwise understand that environment. The program which controls the robot keeps track of what it has seen in the past, how it has handled past problems, and/or what the best methods were to solve such problems (such as an obstacle in its way, for instance).
To that end, such a program controlling the robot is "self programming" - it is constantly updating its understanding of the environment by continuously comparing what its sensors report against what its commands to its "effectors" were. For instance, if it commands its arm to move near an object to grab it, it needs to know whether it was successful at grabbing the object, and if not, why - so that next time it tries, it will have more knowledge about the situation and can try something slightly different (and hopefully be more successful).
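As a rough illustration of that "command, sense, adjust" idea (not anyone's actual design - the grip switch on pin 7, the servo on pin 9, and the "smaller angle = tighter grip" assumption are all made up for the example), a sketch might do something like this:

```cpp
// Hypothetical sketch: command a gripper, check the result with a sensor,
// and adjust the next attempt based on whether the grab succeeded.
// Pin numbers and wiring are assumptions for illustration only.

#include <Arduino.h>
#include <Servo.h>

const int GRIP_SWITCH_PIN = 7;   // assumed: a micro-switch that closes when an object is held
Servo gripper;                   // assumed: gripper driven by a hobby servo on pin 9

int gripAngle = 60;              // current "best guess" for a successful grip angle
                                 // (assumption: smaller angle = tighter grip)

void setup() {
  pinMode(GRIP_SWITCH_PIN, INPUT_PULLUP);
  gripper.attach(9);
  Serial.begin(9600);
}

void loop() {
  gripper.write(gripAngle);            // act: command the effector
  delay(500);                          // give the servo time to move

  bool grabbed = (digitalRead(GRIP_SWITCH_PIN) == LOW);   // sense: did we actually grab it?

  if (grabbed) {
    Serial.print("Success at angle "); Serial.println(gripAngle);
  } else {
    // "learn": the last command failed, so try closing a little further next time
    gripAngle = constrain(gripAngle - 5, 0, 90);
    Serial.print("Missed - next attempt will use angle "); Serial.println(gripAngle);
  }

  gripper.write(90);                   // open again before the next attempt
  delay(1000);
}
```

It's trivial, but it has the key ingredient: the robot checks the outcome of its own action and carries that result forward into the next attempt.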
now i am making my first simple obstacle avoiding robot - will it be an artificial intelligence robot?
If it just has a simple loop that says "if I sense an object over here, turn in the opposite direction", then no - that isn't "artificial intelligence".
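That purely reactive version might look something like this (a minimal sketch - the pin numbers, the HC-SR04 distance read, and the motor helpers driveForward()/turnRight() are all assumptions to be swapped for whatever hardware you actually have):

```cpp
// A purely reactive obstacle avoider - no memory, no learning.

#include <Arduino.h>

const int TRIG_PIN = 2;   // assumed HC-SR04 trigger pin
const int ECHO_PIN = 3;   // assumed HC-SR04 echo pin

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);   // time out after ~30 ms
  return duration / 58;                               // rough conversion to centimetres
}

// Hypothetical motor helpers - fill these in for your own driver board.
void driveForward() { /* ... */ }
void turnRight()    { /* ... */ }

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  if (readDistanceCm() < 20) {   // object close in front?
    turnRight();                 // always the same canned response
  } else {
    driveForward();
  }
}
```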
If instead, you had sensors to detect collisions, and you wrote the code along the lines of: "I sense an object in front of me - make a random decision to turn some number of degrees and continue to drive forward. Did I make it? If yes, add that value (plus others - speed, distance to the object, etc.) to an array, along with a 'success factor' - a number between 0 and 10; the higher the number, the better the success. If not, add it to the array anyway, with a low factor. Next time, look at the array and try something with a high score that matches something close to what is being sensed this time - if it works, increase the value of the factor, and if it doesn't, decrease it. If no match can be found, try another random direction."
Ok - now that is a lot more complex (and I have probably left some stuff out - but I hope you understand what I am getting at) - basically, the system knows nothing about its environment, but over time, by trying random combinations (only after consulting its "memory" of past successful moves that matched within a certain percentage of the "current" sensing), it builds up a knowledge base of what worked and how it most successfully avoided an obstacle.
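A bare-bones sketch of that approach might look something like the following - again, the sensor read and motor helpers are the same hypothetical stand-ins as before, and the memory layout (sensed distance, turn tried, score) is just one simple way to store the experience:

```cpp
// A rough sketch of the "remember what worked" approach described above.

#include <Arduino.h>

struct Experience {
  int distanceCm;   // what the robot sensed when it tried this move
  int turnDegrees;  // what it tried
  int score;        // 0..10 - how well it worked out
};

const int MAX_EXPERIENCES = 20;
Experience memoryBank[MAX_EXPERIENCES];
int experienceCount = 0;

// Hypothetical stand-ins - replace with the HC-SR04 read and your motor driver calls.
long readDistanceCm()            { /* real sensor read goes here */ return 100; }
void driveForward()              { /* motor driver call goes here */ }
void turnByDegrees(int degrees)  { /* motor driver call goes here */ (void)degrees; }
bool pathIsClear()               { return readDistanceCm() >= 20; }

// Look for a remembered situation close to the current one with a decent score.
int findRememberedTurn(int distanceCm) {
  for (int i = 0; i < experienceCount; i++) {
    if (abs(memoryBank[i].distanceCm - distanceCm) < 5 && memoryBank[i].score >= 6) {
      return i;
    }
  }
  return -1;   // nothing close enough - we'll have to experiment
}

void setup() {
  randomSeed(analogRead(A0));    // an unconnected analog pin gives a rough random seed
}

void loop() {
  int distance = (int)readDistanceCm();

  if (distance >= 20) {          // nothing in the way - just keep going
    driveForward();
    return;
  }

  int index = findRememberedTurn(distance);
  int turn;
  if (index >= 0) {
    turn = memoryBank[index].turnDegrees;   // reuse something that worked before
  } else {
    turn = (int)random(-90, 91);            // otherwise, try a random turn
  }

  turnByDegrees(turn);
  bool success = pathIsClear();             // did the turn actually get us out of trouble?

  if (index >= 0) {
    // adjust our confidence in the remembered move
    memoryBank[index].score = constrain(memoryBank[index].score + (success ? 1 : -2), 0, 10);
  } else if (experienceCount < MAX_EXPERIENCES) {
    // record the new experiment, good or bad
    memoryBank[experienceCount++] = { distance, turn, success ? 6 : 2 };
  }
}
```

On a real robot you would probably also want to persist that array somewhere (EEPROM, an SD card, etc.) so the "learning" survives a power cycle - otherwise it starts from scratch every time you switch it on.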
Something really similar could be done to build up a "map" of the environment the robot is in (this is really complex, by the way - it goes by the acronym "SLAM" - Simultaneous Localization And Mapping); don't think about trying to implement something like SLAM on an Arduino Uno - while you could implement something extremely simplified, it would probably be an exercise in frustration. A better platform for such experimentation (if you wanted to stick with an Arduino, that is) would probably be a Mega2560 with an SRAM expansion board.
if not could you please explain me a bit more about this artificial intelligence robot
I hope the above helps you understand the difference between a robot that is simply programmed by its builder, and one that - while it has a program - is capable of learning about its environment by trial and error, building up a knowledge representation map of these interactions. From there, things can get complicated very quickly.