Redesigning my path planning bot - any suggestions?

Hey everyone,

In undergrad I had a very simple bot that basically just avoided obstacles but didn't really do any planning. My setup was pretty much just a Lynxmotion tank base and two 80cm range finders - one for the left and another for the right. That was three years ago, and it worked decently for what I was trying to accomplish, but now I'd like to tackle something a little more advanced and would love some input from the community here. I still want to stick with autonomous path planning.

Here are the materials I have currently:

  • Lynxmotion tank base
  • Sabertooth motor controller
  • Upgraded motors - 12v 253rpm
  • Arduino Diecimila
  • Sharp GP2D12 80cm range finder (x2)
  • Hitec servo
  • LCD
  • Digital compass

So... the plan I have for now is something along the lines of mounting one of the range finders to the servo and making my own ghetto laser scanner. The problem I've thought of so far is that the range finder only updates every 40 ms, while the fastest my servo can scan is about 1 degree per 3 ms - so about 1080 ms for a full 180-degree sweep and back (0 to 179 -> 179 to 0). Because of this, I think I can only take a reading every 13 degrees or so (one 40 ms sensor period at ~3 ms/degree).
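Roughly what I'm picturing for the sweep, in (completely untested) code - the pin numbers are placeholders, and the raw readings would still need converting to real distances with the Sharp sensor's curve:

```cpp
// Rough sketch of the "ghetto laser scanner": step the servo in ~13 degree
// increments (one 40 ms sensor period at ~3 ms/degree) and grab one IR
// reading per step. Pin numbers are assumptions.
#include <Servo.h>

const int SERVO_PIN = 9;                    // assumed servo signal pin
const int IR_PIN    = A0;                   // assumed analog pin for the Sharp sensor
const int STEP_DEG  = 13;                   // degrees travelled in one 40 ms sensor period
const int NUM_SLOTS = 180 / STEP_DEG + 1;   // readings per half sweep (14)

Servo scanner;
int rangeMap[NUM_SLOTS];                    // latest raw ADC reading per bearing slot

void setup() {
  scanner.attach(SERVO_PIN);
}

void loop() {
  // Sweep 0 -> 180, one reading per step.
  for (int slot = 0; slot < NUM_SLOTS; slot++) {
    scanner.write(slot * STEP_DEG);
    delay(40);                              // wait out one sensor update period
    rangeMap[slot] = analogRead(IR_PIN);    // note: Sharp output is HIGHER for CLOSER objects
  }
  // Sweep back 180 -> 0 so the servo isn't slammed back to the start.
  for (int slot = NUM_SLOTS - 1; slot >= 0; slot--) {
    scanner.write(slot * STEP_DEG);
    delay(40);
    rangeMap[slot] = analogRead(IR_PIN);
  }
  // rangeMap[] now holds one reasonably fresh reading per ~13 degrees;
  // this is where the map update / vectoring step would go.
}
```

That's 14 steps at 40 ms each way, so about 1.1 s per full out-and-back sweep - right around the estimate above.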

From here, I would read the signal and update an array that maintains a map of what's in front of me. An issue I've thought of for this is that the array won't be able to keep a history (i.e., objects beside and behind me) because I don't have an accurate way of measuring speed... I'm unsure if an encoder exists for my motor (and I'm really not sure how to use one anyways).
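From what I've read, using an encoder mostly comes down to counting pulses in an interrupt and multiplying by the wheel circumference. If I ever bolt one on, I'm imagining something like this - the numbers are made up, and a single-channel encoder like this only gives distance, not direction:

```cpp
// Hypothetical wheel odometry with a single-channel encoder on interrupt pin 2.
// TICKS_PER_REV and WHEEL_CIRCUM_CM are made-up values for whatever encoder/wheel
// combination ends up on the bot.
const byte  ENCODER_PIN     = 2;      // external interrupt 0 on the Diecimila
const int   TICKS_PER_REV   = 20;     // slots on the encoder disc (assumption)
const float WHEEL_CIRCUM_CM = 25.0;   // wheel circumference (assumption)

volatile unsigned long tickCount = 0;

void onTick() {
  tickCount++;                        // one pulse = one encoder slot passed
}

void setup() {
  pinMode(ENCODER_PIN, INPUT);
  attachInterrupt(0, onTick, RISING); // interrupt 0 = pin 2 on the Diecimila
  Serial.begin(9600);
}

void loop() {
  noInterrupts();
  unsigned long ticks = tickCount;    // copy the volatile count atomically
  interrupts();
  float distanceCm = (ticks / (float)TICKS_PER_REV) * WHEEL_CIRCUM_CM;
  Serial.println(distanceCm);         // distance travelled since power-up
  delay(250);
}
```

With something like that, readings from older sweeps could at least be shifted back in the map by however far the bot has moved since they were taken.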

Once the array has been updated (e.g., after a full sweep of the range finder) I would run some sort of algorithm on it to determine the vector to use for my motors (not really sure what to do here yet - been looking up some path planning algorithms to help me out but still a little lost).
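The simplest version of that step I can think of is just "drive toward the most open bearing from the last sweep", something like the fragment below. setMotors() is a placeholder for however I end up talking to the Sabertooth, and it assumes the readings have already been converted to centimetres (so bigger really does mean more open):

```cpp
// Hypothetical stand-in for whatever actually drives the Sabertooth
// (serial packet, R/C pulses, etc.).
void setMotors(int left, int right) {
  // ... send left/right speed commands to the motor controller ...
}

// Crude vectoring step: steer toward the most open bearing in the last sweep.
// Assumes rangeMap[] holds distances in cm, indexed by bearing slot.
void steerTowardOpenSpace(const int rangeMap[], int numSlots, int stepDeg) {
  // Find the bearing slot with the largest measured distance.
  int bestSlot = 0;
  for (int slot = 1; slot < numSlots; slot++) {
    if (rangeMap[slot] > rangeMap[bestSlot]) bestSlot = slot;
  }
  int bearing = bestSlot * stepDeg;   // 0..180; which end is "left" depends on mounting
  int error   = bearing - 90;         // signed offset from straight ahead

  // Turn the heading error into a tank-steer command.
  int baseSpeed = 80;                 // out of 127 for the Sabertooth (assumption)
  int left  = constrain(baseSpeed + error, -127, 127);
  int right = constrain(baseSpeed - error, -127, 127);
  setMotors(left, right);
}
```

A real planner would do a lot better than "biggest gap wins", but it would at least get the loop running end to end.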

So basically, the plan is:

Do a single sweep of the range finder --> Update array/map with readings --> Adjust motors based on vectoring algorithm --> Repeat

One of the major issues I've considered with this is the time for a single loop. Obviously, I'd like the bot to move as quickly as possible. However, with a single sweep taking 1000 ms+, I can't imagine it'd be safe to go any faster than a few feet per second (if that!) - the range finder only sees out to 80 cm (about 2.6 ft), so at my current top speed of around 5 ft/second I'd cover that entire sensing range in roughly half a second, well before the next sweep even finishes.

Any suggestions for how to improve my plan? Or what I could use the additional range finder/compass for?

Eventually I'd like to do some neat displays on my LCD - maybe showing obstacles as they appear in front of me... but that's farther down the road. :slight_smile:

I am working on an autonomous bot using an old wheelchair as my base and drive. I am using a Sabertooth 2X60 to drive the motors, and I also got the Kangaroo X2, which hooks right up to the quadrature encoders I put together where the brake used to be. Visual odometry and path planning could be done with this neat new toy coming out... http://www.kickstarter.com/projects/254449872/pixy-cmucam5-a-fast-easy-to-use-vision-sensor
I am using an Asus Xtion for SLAM.
Here is a link to my bot...
http://mybot.compugeekz.com/ottis/
I got some of it shown here..

duki:
Any suggestions for how to improve my plan?

Well, my "goto" suggestion is always SLAM, but not many here seem to like the idea because:

a) it's hard
b) it's difficult to implement in a small memory space (though it can be done, according to different papers I have seen)
c) did I mention it's hard?

Basically, the concept of SLAM is to build up a probability map of obstacles and such, so that as the sensor reads the environment, the robot is able to (with a certain amount of error - you'll never hit 100% accuracy) determine its location within the map it has built up over time. Once you know where you're at, and where you want to go, you can generally reduce the map down to something like a grid, add various waypoints, then run a planner like wavefront or A* on it.
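For what it's worth, the wavefront part is the easy bit - something like the toy example below (the grid contents here are made up; the hard part is filling that grid in from your sensors in the first place):

```cpp
// Toy wavefront planner on a small occupancy grid: flood outward from the
// goal, numbering cells by distance, then drive by always stepping to the
// neighbour with the smaller number. Grid size and contents are placeholders;
// a real map would come from the localization/mapping step.
const int W = 8, H = 8;
int grid[H][W];   // 0 = free, 1 = obstacle, >= 2 = wave value

void wavefront(int goalX, int goalY) {
  // Simple FIFO queue of cells to expand (fine for a grid this small).
  int qx[W * H], qy[W * H], head = 0, tail = 0;
  grid[goalY][goalX] = 2;                       // goal gets the lowest wave value
  qx[tail] = goalX; qy[tail] = goalY; tail++;

  const int dx[4] = {1, -1, 0, 0};
  const int dy[4] = {0, 0, 1, -1};
  while (head < tail) {
    int x = qx[head], y = qy[head]; head++;
    for (int i = 0; i < 4; i++) {
      int nx = x + dx[i], ny = y + dy[i];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
      if (grid[ny][nx] != 0) continue;          // obstacle, or already numbered
      grid[ny][nx] = grid[y][x] + 1;            // one step further from the goal
      qx[tail] = nx; qy[tail] = ny; tail++;
    }
  }
}
// To follow the plan: from the robot's cell, repeatedly move to the 4-neighbour
// with the smallest wave value until you land on the goal cell (value 2).
```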

If you are interested in this concept, check out the Udacity CS373 online course (it's free - but it ain't easy) - if SLAM is good enough for Google to use for their self-driving car, it should be a good enough concept to at least learn the basics of for your needs.

cr0sh:
b) it's difficult to implement in a small memory space (though it can be done, according to different papers I have seen)

cr0sh, do you have some pointers to these papers?

RbSCR:
cr0sh, do you have some pointers to these papers?

Here's one:

http://cs.krisbeevers.com/thesis/

Here's another that at least points to the idea of making a small SLAM implementation:

Thanks, I've browsed Kristopher R. Beevers' thesis defense slides (Mapping with limited sensing) and it looks interesting.

I'll be in my reading chair for a while.

Can someone explain SLAM in layman's terms?

Am I right in thinking you provide your bot with a map, and tell it to go somewhere, and it finds the best or most efficient route to get there?

Am I right in thinking you provide your bot with a map

No. It examines its surroundings, moving as needed, and defines its own map. Just like a new puppy does.

tell it to go somewhere, and it finds the best or most efficient route to get there?

Adapting as required, and updating its map if something moved or something was added or deleted.

This http://www-personal.acfr.usyd.edu.au/tbailey/papers/slamtute1.pdf is an introduction to SLAM and its history, but not exactly in layman's terms.

This Simultaneous localization and mapping - Wikipedia is very brief and in layman's terms.

(Hijacking unintended)

Is there a rule as to what constitutes a boundary? Can I use 4 corners as markers for a boundary, and apply a rule that limits the working area to within those 4 points of reference, or must it be an open area where the bot only plots a specific region?

Is there a rule as to what constitutes a boundary?

A boundary is the edge of the space you are mapping.

This would be beyond me to write as code.
But today I thought of some ideas for sensors and collecting data.
I have done a little work in GIS management, so please hear me out :slight_smile:

A car-type bot has a fixed body shape and perimeter. From its centre it can be oriented in any direction, so we can work out the bot's total swept circumference.
As we all know, not every surface is flat. A mercury-type level (bubble) sensor, faceted with multiple switch combinations, could be used relatively cheaply to provide input on the vehicle's current inclination.

Within a preset boundary, the bot's total POMs (points of measurement) are predefined by its circumference and the total area it covers, so the resolution is determined by the degree to which each POM's area overlaps another.

Each designated POM can be translated to a contour/vector point.

A car-type bot has a fixed body shape and perimeter. From its centre it can be oriented in any direction, so we can work out the bot's total swept circumference.
As we all know, not every surface is flat. A mercury-type level (bubble) sensor, faceted with multiple switch combinations, could be used relatively cheaply to provide input on the vehicle's current inclination.

But how accurately can you set or know its current orientation (which way it is going)? How accurately can you tell how far the bot has travelled?

How many points on the map can you store in the limited memory the Arduino has?

All these are factors that influence how useful SLAM is.
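To put a rough number on the memory question: packed at one bit per cell, a 64 x 64 occupancy grid is already 512 bytes, which is about half of the roughly 1 KB of SRAM a Diecimila-class board has to work with. Something like this (the sizes are just for illustration):

```cpp
// Back-of-the-envelope occupancy grid, packed one bit per cell.
// 64 * 64 / 8 = 512 bytes, roughly half the SRAM on a Diecimila-class board,
// before any of the other variables in the sketch.
const int GRID_W = 64, GRID_H = 64;
byte occupancy[GRID_W * GRID_H / 8];

void setCell(int x, int y, bool occupied) {
  int bit = y * GRID_W + x;
  if (occupied) occupancy[bit / 8] |=  (1 << (bit % 8));
  else          occupancy[bit / 8] &= ~(1 << (bit % 8));
}

bool getCell(int x, int y) {
  int bit = y * GRID_W + x;
  return occupancy[bit / 8] & (1 << (bit % 8));
}
```

Anything fancier than occupied/free per cell (probabilities, wave values, etc.) multiplies that number quickly.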

Didn't realise the acquired data couldn't be saved to SD?

So I ask: what are the rules? What is allowed, and what is not allowed?
What are the dimensions of the vehicle?
What is the size of the area to be mapped?

And does the area have to be a fixed shape?

Didn't realise the acquired data couldn't be saved to SD?

It could. But that adds a whole level of complexity. Imagine walking through a shopping mall, blindfolded, looking up where to go, what is in your path, etc. in a book printed in braille.