Robots, their AI and mapping the environment

Aloha,

So I have a simple little bot now that can trundle about and react to some basic stimuli.

It currently consists of a primitive tri-wheel base, steered by powering the front two wheels in various combinations of forwards, backwards, etc.

Sensing is from a PING))) ultrasonic rangefinder.

Can anyone recommend any particularly good resources for getting started with programming the AI for this sort of thing? I only want something simple to start with, for example Bot will roll around until it nears an object, then back off, or something. No environment mapping required here, just "keep moving until you (almost) hit something".

Eventually I'd probably want some sort of mapping ability so Bot can navigate a space. I assume for this I'll need something with a finer, narrower beam of range-finding, rather than the quite-broad ping? Also, am I correct in assuming that I would need a way of measuring my motion quite precisely, so encoders on the wheels and/or some sort of directional 'sense', perhaps one of those nice gyro shields?

Just looking for some general tips really, so I know what lies ahead of me in the scale of the thing.

just "keep moving until you (almost) hit something".

I might be missing something here, but, aren't you already doing this? If not, what's the purpose of the ping sensor? Where does AI come into play?

Eventually I'd probably want some sort of mapping ability so Bot can navigate a space. I assume for this I'll need something with a finer narrow beam of range-finding, rather than the quite-broad ping?

I don't see why. The robot might need to dance around a bit to create a good sense of the size of the object in the way, but a broad beam sensor seems fine to me.

Also, am I correct in assuming that I would need a way of measuring my motion quite precisely, so encoders on the wheels and/or some sort of directional 'sense', perhaps one of those nice gyro shields?

The encoders will provide information about how much each wheel has turned. If the number of turns of each wheel is continually monitored, and both wheels are EXACTLY the same size (a lousy assumption even for precision machined wheels, which yours likely are not), then the position and direction of the robot can be known precisely. The math is not trivial, though, and may exceed the capabilities of the Arduino at anything greater than a crawl.
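For the curious, the dead-reckoning math being described looks roughly like this. A sketch only: the wheel radius, track width, and encoder resolution below are made-up placeholder values, not anyone's actual robot geometry.

```cpp
#include <cmath>
#include <cstdio>

// Differential-drive dead reckoning from wheel encoder ticks.
// All three constants are assumed placeholder values.
struct Pose { double x, y, theta; };

const double WHEEL_RADIUS  = 0.03;  // metres (assumed)
const double TRACK_WIDTH   = 0.12;  // distance between wheels, metres (assumed)
const double TICKS_PER_REV = 360.0; // encoder counts per wheel revolution (assumed)

// Update the pose from the tick counts accumulated since the last call.
Pose updatePose(Pose p, long leftTicks, long rightTicks) {
    double dL = 2.0 * M_PI * WHEEL_RADIUS * leftTicks  / TICKS_PER_REV;
    double dR = 2.0 * M_PI * WHEEL_RADIUS * rightTicks / TICKS_PER_REV;
    double d      = (dR + dL) / 2.0;          // distance moved by robot centre
    double dTheta = (dR - dL) / TRACK_WIDTH;  // change in heading (radians)
    p.x     += d * cos(p.theta + dTheta / 2.0);
    p.y     += d * sin(p.theta + dTheta / 2.0);
    p.theta += dTheta;
    return p;
}
```

You can see the "EXACTLY the same size" problem directly in the formulas: any error in dL versus dR leaks straight into dTheta and compounds every update.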

Gyros tend to drift, so you need some means of recalibrating (often) in order to maintain any level of accuracy.

Storing your map is going to present the biggest challenges. Even the Mega has only 8K of SRAM. That space is not persistent, so turning the Arduino off means you start over.
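For a rough feel of what fits in 8K: a bit-packed occupancy grid stores one cell per bit, so even a 64x64 grid is only 512 bytes. A sketch (the grid size is an arbitrary choice for illustration):

```cpp
#include <cstdint>

// Bit-packed occupancy grid: 1 bit per cell. A 64x64 grid takes
// 64*64/8 = 512 bytes -- comfortably inside 8K of SRAM.
const int GRID_W = 64;
const int GRID_H = 64;
uint8_t grid[GRID_W * GRID_H / 8]; // zero-initialised at file scope

void markOccupied(int x, int y) {
    int bit = y * GRID_W + x;
    grid[bit / 8] |= (1 << (bit % 8));
}

bool isOccupied(int x, int y) {
    int bit = y * GRID_W + x;
    return grid[bit / 8] & (1 << (bit % 8));
}
```

If persistence matters, the Mega's EEPROM survives power-off and could hold a snapshot of the grid, though it is slow to write and wear-limited.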

PaulS:
I might be missing something here, but, aren't you already doing this?

Yes, that's what I have so far. What I was really after was some pointers to algorithm info about the best way of handling this, though. A simple forward-then-stop process is quite harsh and primitive. I assume there are resources out there where people already describe this sort of thing.

PaulS:
Where does AI come into play?

You may be defining AI a lot more hard-core than I am. AI, as I mean it here, is just any decision-making process that affects the behavior of the bot based on environmental conditions.

PaulS:
The encoders will provide information about how much each wheel has turned. If the number of turns of each wheel is continually monitored, and both wheels are EXACTLY the same size (a lousy assumption even for precision machined wheels, which yours likely are not), then the position and direction of the robot can be known precisely. The math is not trivial, though, and may exceed the capabilities of the Arduino at anything greater than a crawl.

I'll not be tracking over any sort of distance where that sort of thing will be a problem. I think. It's mostly so I can know that I've turned a known angle. I think I'll abandon this though, in favour of just sweeping the sensor - much easier to control :slight_smile:

PaulS:
Storing your map is going to present the biggest challenges. Even the Mega has only 8K of storage space. That space is not persistent, so turning the Arduino off means you start over.

For the quite coarse mapping I was thinking of, that would be plenty. I think it's all a bit theoretical though until I get there, so I'll forget about it for now - it's too implementation-dependent.

A simple forward-then-stop process is quite harsh and primitive.

If, by this, you mean move, stop, scan, repeat, I agree. (I wonder if I could have gotten more commas in that sentence...)

Scanning while moving is possible. Then, continue, turn, or stop based on the current scan. That's how people do it.
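The scan-while-moving idea reduces to choosing an action from the latest range reading on every pass through the loop, instead of stopping to scan. A sketch of the decision logic only; the thresholds are arbitrary placeholder values, and on the robot you'd feed in each fresh PING))) reading and map the result to motor commands:

```cpp
// Reactive decision logic: pick an action from the most recent range
// reading. Thresholds (in cm) are assumed placeholder values, not
// tuned for any particular sensor or speed.
enum Action { GO_FORWARD, TURN_AWAY, BACK_OFF };

const int BACK_OFF_CM = 10; // closer than this: reverse (assumed)
const int TURN_CM     = 30; // closer than this: steer away (assumed)

Action chooseAction(int distanceCm) {
    if (distanceCm < BACK_OFF_CM) return BACK_OFF;
    if (distanceCm < TURN_CM)     return TURN_AWAY;
    return GO_FORWARD;
}
```

A proportional variant, where the turn rate scales with how close the obstacle is, gives a much smoother, more "natural" feel than hard thresholds.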

Commas are under-rated and under-used :slight_smile:

I'll have a play with my algorithms at the weekend (there goes the rock-star image...) and see what happens. It's mostly a software thing rather than a hardware thing at this point, so I'll have a poke about the internet for some "natural movement" algorithms and see what I can find.