Indoor Robot Localization

Hi,
I am building a robot that will use the Wavefront algorithm for navigation on a predefined map, and ultrasonic sensors for sensing the environment. I am struggling with how to make the robot know where it is in the environment (localization).

I have several ideas, but I don't know whether they will work, such as placing landmarks on the predefined map so that I can tell the robot where it is from the ultrasonic readings, or using an accelerometer and compass.

Please help me if you have any ideas.

This is a very, very difficult problem, and lots of people are working on it. Spend plenty of time with Google and read everything you can find, to get an idea about what other people have tried.

The combination of accelerometer and compass does not work well indoors for many reasons (such as stray magnetic fields and noisy sensors). Read more about the difficulties with accelerometers here: http://www.chrobotics.com/library/accel-position-velocity

Thank you for your cooperation.

That's right. I have read that combining the accelerometer and the compass is very hard, and that you get a lot of errors when you use them indoors.

From your experience, how would you do the localization indoor when you have a predefined map?

The map doesn't solve the problem of localization, or even help much. You still have to find the position of the robot within the mapped region.

The easiest way (used by many people) is to use an overhead video camera connected to a separate computer, which can relay position and orientation information to the robot via radio.

thank you jremington,

I agree with you about the overhead camera, but I am trying to avoid using the camera.

You'd have to go with an on-board camera and fiducial markers on the floor/walls/ceiling (hint: reacTIVision). That would require something like a Raspberry Pi to do that processing though. There's also the CMUCam which will do color blob identification that you could use as fiducials.

I second the latter suggestion. The new CMUCam (also called PIXY) supports up to seven colored markers called signatures, and further extends this to thousands of possible "color codes" which could be used as guideposts or fiducials. http://cmucam.org/projects/cmucam5

I am trying to avoid using the camera.

You might put some type of beacons in the environment which the robot can detect and use to calculate its position in relation to the beacons.
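If the robot can measure its range to at least three beacons at known positions (for example, by timing radio or ultrasonic pings), its position follows from trilateration. Here is a minimal sketch of the 2D math; the beacon coordinates and ranges are made-up example values, and a real system would need to handle noisy ranges (e.g. via least squares over more than three beacons):

```python
def trilaterate(b1, b2, b3, r1, r2, r3):
    """Estimate (x, y) from three beacon positions and measured ranges.

    Subtracting the three circle equations (x-xi)^2 + (y-yi)^2 = ri^2
    pairwise eliminates the quadratic terms, leaving two linear
    equations A*x + B*y = C and D*x + E*y = F.
    """
    x1, y1 = b1
    x2, y2 = b2
    x3, y3 = b3
    A = 2 * (x2 - x1)
    B = 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D = 2 * (x3 - x2)
    E = 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    denom = A * E - B * D  # zero if the beacons are collinear
    x = (C * E - B * F) / denom
    y = (A * F - C * D) / denom
    return x, y

# Example: beacons at three corners, robot actually at (1, 2).
pos = trilaterate((0, 0), (4, 0), (0, 4), 5 ** 0.5, 13 ** 0.5, 5 ** 0.5)
```

Note that the three beacons must not be collinear, or the two linear equations become degenerate and the position is ambiguous.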

Couldn't the robot tell where it is by calculating the distance moved from its wheel rotations and applying that to the predefined map?

Sure, if you have a method of measuring the distance that the wheels move, and the wheels don't slip (which is almost never the case).

Chagrin:
You'd have to go with an on-board camera and fiducial markers on the floor/walls/ceiling (hint: reacTIVision). That would require something like a Raspberry Pi to do that processing though. There's also the CMUCam which will do color blob identification that you could use as fiducials.

Thank you all for your cooperation.

Many people are suggesting that I use an on-board camera, but I am still searching for a way to do the localization without one.

zoomkat:
You might put some type of beacons in the environment which the robot can detect and use to calculate its position of the robot in relation to the beacons.

Well, I have not thought about beacons; I will start researching them.

Read about SLAM (Simultaneous Localization and Mapping) - make sure your skills are good in linear algebra and statistics/probabilities, or you'll never really "get it". If you need a good beginner's intro to SLAM, two good sources are the Wikipedia article (mainly the link to the "SLAM for Dummies" paper, near the bottom):

...and the best, IMHO - the CS373 MOOC from Udacity:

Which is taught by Sebastian Thrun, probably one of the best instructors out there for AI and how it is applied to robotics and self-driving vehicles; he was one of the main people behind Google's original self-driving car, and also led a team to win DARPA's Grand Challenge.

This course will teach you everything you need to know in order to understand what kind of a problem you are facing, what the potential solution is, and how to implement it (albeit in a simple form - I would not trust the code to drive a full-sized car!).
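To give a flavour of what that course covers: the core idea of probabilistic localization is to keep a probability distribution over positions and alternate Bayesian "sense" updates with "move" updates. Here is a toy 1D histogram-filter sketch in that spirit (the world layout, sensor probabilities, and exact noise-free motion are simplifying assumptions for illustration):

```python
def sense(p, world, measurement, p_hit=0.6, p_miss=0.2):
    """Bayes update: reweight cells by how well they match the reading."""
    q = [prob * (p_hit if cell == measurement else p_miss)
         for prob, cell in zip(p, world)]
    s = sum(q)
    return [v / s for v in q]  # normalize back to a distribution

def move(p, step):
    """Exact cyclic shift; a real filter would blur this for motion noise."""
    n = len(p)
    return [p[(i - step) % n] for i in range(n)]

world = ['door', 'wall', 'wall', 'door', 'wall']
p = [0.2] * 5                  # uniform prior: position unknown
p = sense(p, world, 'door')    # robot sees a door
p = move(p, 1)                 # robot drives one cell to the right
p = sense(p, world, 'wall')    # robot now sees a wall
# Belief is now concentrated on the wall cells just right of a door
# (cells 1 and 4), matching the door-then-wall measurement sequence.
```

A particle filter, as used in the course and in practical SLAM systems, is essentially this same sense/move cycle with the histogram replaced by a cloud of weighted pose samples, which scales much better to continuous 2D maps.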