robot with knowledge of location

I want to construct a small mobile robot that discovers its environment, e.g. a room, and then 'knows' where it is within that environment. Does anyone have any suggestions for technology to use to map the space and then find position within it?

Tricky, very tricky.

Describe your time and budget constraints.

There is no time constraint but there is a limited, yet unfixed budget. Some compromise between making it a lifetime's work and delivering something this year would be good.

This is surely a common question for robots, and I expect that if you asked on a robotics forum you'd find a lot of people have been there and done that. I haven't, but my initial reaction is that it's going to take a significant amount of data to hold even a simple map of the environment, and this will probably be the biggest hurdle. Perhaps you could afford to hold it in EEPROM, but even so I see the storage constraint as the biggest problem.

The other issue is that, to be practical, I would expect this robot to cope with a dynamic environment, i.e. obstacles being moved around it. You could deal with that using crude, volatile wall-following type algorithms, but your mention of the robot knowing where it is and learning about its environment suggests you are looking for something more sophisticated. In that case RAM and EEPROM size restrictions would be my biggest concerns. This might mean you need to look at extra onboard storage and/or RAM; both are available, although none of the options I've seen are exactly generous.

You could try a simple occupancy grid, but even a coarse 32x32 is going to take at least 1K of RAM.

AWOL:
You could try a simple occupancy grid, but even a coarse 32x32 is going to take at least 1K of RAM.

Interesting - I would've thought it would only take 128 bytes of memory; simple "occupancy" is just a yes/no value, so unless you have a particular reason to allocate a byte per grid location, a 32 x 32 grid would fit into a much smaller space.
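
For what it's worth, here's a rough sketch of what I mean by packing a 32 x 32 yes/no grid into one bit per cell (128 bytes); the names are just made up for illustration:

// 32 x 32 occupancy grid, 1 bit per cell = 128 bytes total
byte grid[32 * 32 / 8];                 // all cells start out "free"

void setOccupied(byte x, byte y, boolean occupied) {
  int idx = (int)y * 32 + x;            // cell index 0..1023
  if (occupied) grid[idx / 8] |=  (1 << (idx % 8));
  else          grid[idx / 8] &= ~(1 << (idx % 8));
}

boolean isOccupied(byte x, byte y) {
  int idx = (int)y * 32 + x;
  return (grid[idx / 8] >> (idx % 8)) & 1;
}

Of course that only gives you a hard yes/no per cell, nothing in between.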

What might be better, though (although much more complex to code), would be something like a segmented representation; 8 bytes could hold the x,y values of the four corners of a "square" surrounding the robot (or 16 bytes if you needed more area or accuracy). As the robot "pings" (i.e. measures the distance around it, or in front of it), you would break the measured outline up into separate vertices (rough, to a point). There would have to be some intelligence to decide when a convex or concave surface/object is present, or probably present, and to "auto-connect" vertices as they are encountered.

Measurement of distances on the map would be via line intersections, of course; in the end you would have something like the map format Doom keeps in memory, and you might even have to use BSP trees or similar to deal with such a map. (Thinking about it, this kind of system might be beyond a regular Arduino Uno or the like, if only because of the processing requirements - unless you limited the map to a simple object representation, such as vertices on a defined grid instead of arbitrary vertices on an "infinite" Cartesian plane.)
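
Just to make the idea concrete, a very rough sketch of that kind of segment storage plus the line-intersection distance check; the struct and function names are made up, and the fixed array size is arbitrary:

// Map stored as line segments between vertices, instead of a filled grid.
struct Vertex  { int x; int y; };          // coordinates in map units
struct Segment { Vertex a; Vertex b; };

Segment wallMap[40];                       // arbitrary cap; RAM is the limit
byte segCount = 0;

// Distance from point (px,py) along unit direction (dx,dy) to segment s,
// or -1 if the ray misses it.  Standard parametric line intersection.
float rayToSegment(float px, float py, float dx, float dy, Segment s) {
  float ex = s.b.x - s.a.x;
  float ey = s.b.y - s.a.y;
  float denom = dx * ey - dy * ex;
  if (fabs(denom) < 1e-6) return -1.0;                         // parallel
  float t = ((s.a.x - px) * ey - (s.a.y - py) * ex) / denom;   // along ray
  float u = ((s.a.x - px) * dy - (s.a.y - py) * dx) / denom;   // along segment
  if (t < 0.0 || u < 0.0 || u > 1.0) return -1.0;
  return t;
}

// predicted range along a heading = smallest non-negative rayToSegment()
// result over wallMap[0..segCount-1], which is what you'd compare a ping to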

Something to think about, at least...

:slight_smile:

The most popular method I know of is triangulation. For a flat 2D space, two beacons would be necessary - RF, maybe ultrasound; more complex would be optical/visual, which is probably overkill for a small robot. Memory shouldn't be a problem with an SD shield, in case of a big area or a high level of detail.
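
As a rough illustration only - this assumes the robot can measure an absolute bearing (in the room's frame, e.g. with a compass) to each beacon, and the beacon positions and units here are invented:

// Two beacons at known positions; the robot measures the bearing, in the
// room's frame of reference, from itself to each beacon (radians).  The
// robot lies on a line through each beacon at that angle, and the two
// lines cross at the robot's position.
float bx1 = 0.0,   by1 = 0.0;        // beacon 1 position, cm (example)
float bx2 = 300.0, by2 = 0.0;        // beacon 2 position, cm (example)

boolean triangulate(float a1, float a2, float &rx, float &ry) {
  float d1x = cos(a1), d1y = sin(a1);      // line directions from bearings
  float d2x = cos(a2), d2y = sin(a2);
  float det = d1y * d2x - d1x * d2y;
  if (fabs(det) < 1e-6) return false;      // bearings (nearly) parallel
  float ex = bx2 - bx1, ey = by2 - by1;
  float s  = (d2x * ey - d2y * ex) / det;  // parameter along beacon-1 line
  rx = bx1 + s * d1x;
  ry = by1 + s * d1y;
  return true;
}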

When I said simple, I meant Bayesian.
I suppose you could do it with fewer than eight bits, but not as few as one.
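
Roughly what I mean, as a sketch only (one signed byte of log-odds per cell; the weights and the threshold are arbitrary numbers for illustration):

// One signed byte per cell: 0 = unknown, positive = probably occupied,
// negative = probably free.  Each reading nudges the cell up or down,
// which is the usual log-odds form of a Bayesian occupancy grid.
// (char is signed on the AVR.)
char grid[32][32];                  // 1K of RAM, all cells start unknown

const char HIT   = 3;               // evidence weights, tune to the sensor
const char MISS  = 1;
const char LIMIT = 120;             // stay well clear of char overflow

void updateCell(byte x, byte y, boolean hit) {
  int v = grid[y][x] + (hit ? HIT : -MISS);
  if (v >  LIMIT) v =  LIMIT;
  if (v < -LIMIT) v = -LIMIT;
  grid[y][x] = (char)v;
}

boolean probablyOccupied(byte x, byte y) {
  return grid[y][x] > 20;            // arbitrary decision threshold
}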

Thank you for that very interesting discussion about representation of the map within memory.

What would be the best way for the vehicle to measure distance travelled? If coordinates are to be stored, they are going to have to be measured first: distance travelled in x and y.

Have you looked at using Dead Reckoning yet?

Done using encoders.
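
A bare-bones sketch of the usual differential-drive dead-reckoning update from encoder counts (the two constants are placeholders you'd measure for your own wheels):

// Integrate encoder ticks (counted since the last update) into an
// (x, y, heading) estimate.
const float TICKS_PER_CM = 10.0;    // placeholder: encoder ticks per cm of travel
const float WHEEL_BASE   = 12.0;    // placeholder: distance between wheels, cm

float x = 0, y = 0, theta = 0;      // pose estimate: cm, cm, radians

void odometryUpdate(long leftTicks, long rightTicks) {
  float dL = leftTicks  / TICKS_PER_CM;        // distance each wheel moved
  float dR = rightTicks / TICKS_PER_CM;
  float dCenter = (dL + dR) / 2.0;             // robot centre movement
  float dTheta  = (dR - dL) / WHEEL_BASE;      // change in heading
  x += dCenter * cos(theta + dTheta / 2.0);    // midpoint approximation
  y += dCenter * sin(theta + dTheta / 2.0);
  theta += dTheta;
}

The errors accumulate, of course, which is why dead reckoning on its own drifts and people combine it with range sensors or beacons.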

Hi

What is the robot going to use the knowledge of its location to achieve? I know it was briefly touched on above, but if it's to find its way to a specific place reliably, there are other methods too.

Robotic vacuums find their way back to their charging stations by looking for and navigating towards an infrared beacon, for example (details at http://electronics.howstuffworks.com/gadgets/home/robotic-vacuum2.htm). The hardware available to the Roomba is far more capable (and expensive!) than an Arduino, and yet even it doesn't map the room to provide this functionality.
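
Homing on a beacon like that can be done very crudely - something along these lines, where the pins, the dead band and the motor helpers are all invented for illustration:

// Crude beacon homing: two IR receivers angled left and right of centre,
// steer toward whichever side sees the stronger beacon signal.
const int IR_LEFT  = A0;             // invented pin assignments
const int IR_RIGHT = A1;

void driveForward() { /* both motors forward          */ }
void turnLeft()     { /* left motor slower than right */ }
void turnRight()    { /* right motor slower than left */ }

void homeToBeacon() {
  int left  = analogRead(IR_LEFT);
  int right = analogRead(IR_RIGHT);
  if      (left - right > 50) turnLeft();      // arbitrary dead band
  else if (right - left > 50) turnRight();
  else                        driveForward();
}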

Perhaps there's a simpler way to do what you're after. What's your plan for this room map?
Geoff

This is a nontrivial project. Luckily, there are 100s of papers on occupancy grids. For background
info, also try a google search on SLAM [simultaneous localization and mapping]. Check out anything
done by Sebastian Thrun, who is/was at Stanford, and his book Probabilistic Robotics [expensive].

For more background, download Where Am I?

http://www-personal.umich.edu/~johannb/

I finally got all of the sensors on my small robot working this morning. Among other things, I have
wide-beam and narrow-beam Maxsonars, plus a Sharp IR ranger [GP2D12], all mounted on a pan'n'tilt
servo pod, and am going to attempt to learn something about occupancy grids myself.

I think most people doing this will stick encoders on their drive motors for good position-change
accuracy, but I plan to try doing it by using sonar distance change measurements. Maybe.

I did get 8K-byte serial RAMs for 35 cents each from Futurlec (a California company, but the shipping is from China). IMO the SD card is better, as the 'map' would survive power-off.
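
For example, something along these lines with the standard SD library (the chip-select pin, the filename and the grid layout are just examples):

#include <SD.h>

char grid[32][32];                   // example map, one byte per cell

// Call SD.begin(10) once in setup(); chip-select pin 10 is just an example.

// Write the whole grid to the card so the map survives power-off.
boolean saveMap() {
  SD.remove("map.dat");              // FILE_WRITE appends, so start fresh
  File f = SD.open("map.dat", FILE_WRITE);
  if (!f) return false;
  f.write((const uint8_t*)grid, sizeof(grid));
  f.close();
  return true;
}

boolean loadMap() {
  File f = SD.open("map.dat");
  if (!f) return false;
  f.read(grid, sizeof(grid));
  f.close();
  return true;
}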

How detailed a map will it need anyway? Will it need to deal with things moving around the room?

With the right kind of time-pulsed emitters on the ceiling perhaps there would be a way to make indoor GPS?

I should think an occupancy grid with 12" squares would be adequate.

I got my first sensor scan this evening, you may find it of interest. I have a Maxsonar EZ0, EZ4, and
Sharp GP2D12 mounted on a pan'n'tilt servo pod. Made a 180-deg panning sweep broken into 24 positions.
I wrote a simple graphing program that displays during the sweep. Two different scans are shown, and
they are very consistent.

Except for the #2 GP2D12 reading, which is flukey for some reason [probably a calc overflow], the
GP2 readings are very close to the actual measurements. The wide beams of the sonars really
wash things out. Even the EZ4, which has about 20deg beam, isn't all that great for definition.

Readings #6 - #9 represent a 14" space, which is about 38" deep, between 2 objects which are approx
15" away from the robot. The GP2D12 easily picks it up, the Maxsonars wash it out. I can see that room
mapping with sonars is not super accurate.

Servo Tank Program ... test2 ... IDE 1.0

L=wide-beam  R=narrow-beam  G=GP2D12
-----------  -------------  --------
(distances in inches)
dist=18 22 25                  L   R  G
dist=16 17 392               LR                                  G
dist=16 16 23                LR      G
dist=16 17 20                LR  G
dist=15 16 20               LR   G
dist=19 16 28                R  L        G
dist=18 16 36                R L           G
dist=12 12 37            LR                        G
dist=12 12 38         LR                         G
dist=10 12 16          L R   G
dist=11 12 14           LR G
dist=11 12 14           LR G
dist=12 12 10          G LR                                
dist=15 15  7       G       LR                                
dist=15 10 20          R    L    G
dist=15 11 24           R L        G
dist=15 10 25          R    L         G
dist=15 10 24          R    L       G     
dist=15 10 23          R    L       G
dist=21 21 23                     LR G
dist=21 21 24                     LR  G
dist=19 22 24                   L  R G
dist=21 22 25                     LR  G
dist=21 25 26                     L   RG


dist=21 21 24                     LR  G
dist=16 17 321                LR                                  G
dist=16 19 23                 L  R   G
dist=16 17 21                LR   G
dist=15 16 20               LR   G
dist=18 15 30               R  L           G
dist=15 19 38               L   R               G
dist=12 12 36            LR                       G
dist=12 12 38            LR                         G
dist=11 12 15           LR  G
dist=11 12 13           LRG
dist=11 12 14           LR G
dist=12 12 10          G LR
dist=15 15  7       G       LR
dist=15 11 20           R   L    G
dist=15 10 24           R  L        G
dist=15 11 25           R   L         G
dist=15 11 24           R   L        G
dist=15 10 24          R    L        G
dist=19 22 24                   L  R G
dist=21 21 24                     LR  G
dist=21 22 24                     LR G
dist=21 25 26                     L   R
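
On the flukey #2 GP2D12 readings (the 392 and 321 above): if it is a calc overflow, doing the conversion in long arithmetic sidesteps it. A sketch of the kind of conversion I mean - the 6787/(n-3) - 4 curve is the commonly quoted approximate fit for the GP2D12, not a calibrated value:

// Read the GP2D12 and convert ADC counts to inches, keeping the
// arithmetic in long so nothing wraps around in a 16-bit int.
int readGP2D12inches(int pin) {
  int n = analogRead(pin);                  // 0..1023
  if (n <= 10) return 999;                  // no usable return signal
  long cm = 6787L / (n - 3L) - 4L;          // approximate GP2D12 curve, cm
  if (cm < 0) cm = 0;
  return (int)((cm * 100L) / 254L);         // cm to inches, still in long
}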

What I would like to do is create small independent vehicles that interact, modelling forms of social behaviour.

I envisage a vehicle being able to explore and 'map' its surroundings, primarily a room, together with any permanent obstacles, furniture for example. The vehicle will 'know', or be able to find out, where the other vehicles are through other means, as yet undecided, and the knowledge of the environment will be key to working out the best path to get to another vehicle.

The position resolution wouldn't need to be finer than the size of the vehicle, some 30 cm perhaps, and it must be possible for the robot to explore any space, i.e. not be dependent on external mechanisms to measure distance.

What I would like to do is create small independent vehicles that interact, modelling forms of social behaviour.

I envisage a vehicle being able to explore and 'map' its surroundings, primarily a room, together with any permanent obstacles, furniture for example.

Cool, not only have you chosen a first project that will require a lot of work, but now you've
doubled down and added another fair share to it. 8) In academia, people have spent their
entire careers working on one project or the other, with the help of many grad students.

As noted already, there are tons and tons of research papers on these topics. You might check
Maja Mataric's website at USC for swarm robots.

Joe Jones has a couple of good books on "Robot Programming". His 2004 book by that name makes a
big point about the unreliability of most available sensors. For my part, I've been taking more sensor
scans, and finding the sonars are extremely susceptible to ground clutter, due to their wide beams,
when used on a small robot. Finding out what works well in practical situations is a nontrivial task.

oric_dan(333):
I should think an occupancy grid with 12" squares would be adequate.

Dan, please could I see your code for the occupancy grid? Thank you