Position updating..?

hi people,
I am building two bots that can communicate with each other… I want them to update their positions to each other… how can I do it? This is totally indoors so I can't use GPS… suggest me some algorithms or ideas…

Will you have obstacles in the way, or can you put a “satellite” in each corner of the room and use some sort of RF to work out the distance to each fixed point?
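For reference, if you can measure the distance to beacons at known positions, recovering the bot's position is standard trilateration. A minimal 2D sketch (the beacon layout and the pairwise-difference formulation are just an illustration, not a claim about any particular RF hardware):

```python
# 2D trilateration: given distances d_i to beacons at known (x_i, y_i),
# subtracting one circle equation from the others yields a linear system in (x, y).
def trilaterate(beacons, dists):
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    # A * [x, y]^T = b, derived from (circle 1) - (circle 2) and (circle 1) - (circle 3)
    a11 = 2 * (x2 - x1); a12 = 2 * (y2 - y1)
    a21 = 2 * (x3 - x1); a22 = 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the three beacons are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With beacons at (0,0), (4,0), (0,4) and distances measured from (1,2), the function returns (1.0, 2.0). The hard part in practice is not this math but getting usable distance measurements in the first place, which is exactly the objection raised below.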

some sort of RF to work out the distance to each fixed point?

“Some sort” covers a multitude of sins. Have you ever tried doing this? You can’t.

This is a much harder project than you know. What makes it difficult is there is no fixed point of reference and the boats can be in any orientation.

You don’t say what the boats will do with this information when they have it. You might be able to think round this requirement.

What makes it difficult is there is no fixed point of reference and the boats can be in any orientation.

That’s why I asked about putting devices at fixed locations in the room.

use some sort of RF to work out the distance to each fixed point

Audio might work (and even then multipath will be an issue), but RF, very unlikely.

How about celestial nav, with LEDs on the ceiling?

Wouldn’t audio be reflected by surfaces within the room too easily?

This may (I stress, may) do what you need:


Otherwise, you are going to need something like a wireless magnetic position sensor system from a company like Polhemus or Ascension (big bucks there - plus issues with magnetic interference - but the dollar figure will give you a heart attack long before that becomes an issue!)…

Back in the 1960s, Ivan Sutherland (creator of the “Sword of Damocles” VR/AR rig - the first implementation of such a system; IIRC, the computer it ran on had less power than an Arduino and was the size of a fridge - that may be a slight exaggeration, but not by much) created an optical tracking system that scanned light beams (-not- lasers). Sensors on the wearer’s body picked up the signal, and by reading the timings of the beams (generated by a rotating disc in some manner), the angle of the disc determined the location (three such beam projectors - X/Y/Z). Good luck finding an explicit description, though - I doubt it exists outside the historical archives of the academic institution where he taught.

You could use upward-looking cameras on the robots, looking at a grid of LEDs on the ceiling, activated one at a time? This is the basis behind a tracker called, IIRC, “EagleEyes”…it’s patented, of course.

How about a camera pointing straight up, tracking the ceiling surface irregularities? Maybe you could use a USB optical mouse sensor, flipped over, with a lens system to focus the ceiling surface properly on the sensor?

If you can have an “off-board” system communicating with the robots via RF, you could set up multiple cameras looking at the robots. Give each robot an LED or something the cameras can pick up; the robots each only turn on their LED when the others haven’t (round-robin token passing via inter-robot comms?). Each robot then asks the off-board computer “where am I?”, and the computer, using vision algorithms, checks the multiple cameras, computes a 3D location based on its knowledge of the individual camera positions, and relays that back to the robot (OpenCV, among other possibilities, could be used here).

You could put an onboard laser scanner head (homemade or COTS) on each robot, and use it to build a map of the room, then use dead reckoning to determine position on the map, adjusting the map as needed as errors accumulate.

Plenty of possibilities, all of them highly difficult to implement, some, if not most, expensive to implement…



How about a camera pointing straight up, tracking the ceiling surface irregularities?

Close but I would use a camera pointing at the boats. Have the boats fitted with an IR LED and put an IR filter over the camera and track them.

this is totally indoor so i cant use gps…

Why not?
Is it because you can’t use a re-radiating antenna, or because the accuracy of GPS is inadequate?

(BTW, are they “bots” or “boats”? - there seems to be some confusion.)

A camera with an Arduino parsing the image? Has that been done yet?

No, that is not on. You would need the camera connected to a PC doing the image tracking and transmitting the information to the things (bots or boats - at least “boats” is English).

Is it because you can’t use a re-radiating antenna

Er, if you use a re-radiating antenna you’ll know where the re-radiating antenna is, not where the receiver is…


Can you put markings on the floor? If you use a grid system and know where the bots start, they would be able to keep track of what coordinate they are at, and broadcast it upon request.
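The bookkeeping for that idea is tiny: each bot applies every move it makes to a stored grid coordinate and reports it when asked. A sketch (the compass-direction move encoding is an assumption for illustration):

```python
# Track a grid position from a known start by applying each move as it is made.
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

class GridBot:
    def __init__(self, start=(0, 0)):
        self.x, self.y = start

    def move(self, direction):
        dx, dy = MOVES[direction]
        self.x += dx
        self.y += dy

    def report(self):
        # What the bot would broadcast over RF when asked "where are you?"
        return (self.x, self.y)
```

A bot starting at (2,3) that moves N, N, E would report (3,5). The floor markings are what keep this honest: the bot can re-synchronise its count every time it crosses a grid line, instead of trusting its wheels.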

I have seen a thread on IR motion tracking in these forums, however that was a while ago. Neat idea if you can implement it.

hi guys,

I am not using any camera or IR here… I have my bots fitted with RF modules… I give them a task, and they have to communicate with each other by updating their positions… so I request you to suggest an algorithm or any other method by which my bots can update their positions between each other and the base that gives the task… help out, guys…


By “bots” I mean small robots of a sort…
I have a moving vehicle (like a small car) with the modules on top of it…
So the two bots must update their positions to each other…
So I need some algorithm or other procedure which updates the position.


Without using any cameras (on board or external), or another form of external location (beacons, etc), your only other option would be to use dead reckoning.

Essentially, the idea is to have a quadrature encoder on each wheel so you know when it is rotating and in what direction, then use this information to track how far each wheel moves, which gives you the distance and direction traveled over time. With this information and a known starting point, each bot can work out its location relative to where it started, then transmit the info to the other bot(s).
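A minimal differential-drive odometry update from the two encoder counts might look like this (the wheel radius, track width, and ticks-per-revolution are placeholder values you would measure on your own bot):

```python
import math

TICKS_PER_REV = 360     # encoder resolution (assumed)
WHEEL_RADIUS = 0.03     # metres (assumed)
TRACK_WIDTH = 0.15      # distance between the wheels, metres (assumed)

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the pose (x, y, heading) by one encoder sample - simple dead reckoning."""
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV   # left wheel travel
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV  # right wheel travel
    d = (dl + dr) / 2                   # distance the robot's centre moved
    dtheta = (dr - dl) / TRACK_WIDTH    # change in heading from the wheel difference
    # Advance along the average heading over the sample (midpoint approximation)
    x += d * math.cos(theta + dtheta / 2)
    y += d * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta
```

Call it every time you read the encoders, then send the resulting (x, y, theta) over the RF link. With equal tick counts on both wheels the heading stays constant and the bot moves straight ahead, as you would expect.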

You might also be able to use the sensor from an optical mouse in this manner as well (looking down at the floor - which may be more accurate than the quadrature sensor method).

The problem with such a system is that over time, due to slippage and other inaccuracies, where the robot thinks it is may not match reality. You may be able to reduce this by moving more slowly and making sure there is plenty of traction between the wheels and the ground surface, among other methods, but the drift will always be there. You may be able to have the bots communicate in such a way as to double-check each bot’s position (?).

Forget trying to use this method on anything other than a robot in a controlled indoor setting on a smooth surface; without some form of external reference, using it in an outdoor vehicle on normal terrain (even on a road) will result in inaccuracy in a very short amount of time. (Such a system is typically used alongside D/GPS in autonomous vehicle systems: the D/GPS periodically updates the dead-reckoning algorithm with more accurate location information, while the DR system provides finer accuracy between updates and covers the times when the D/GPS signal is unavailable, such as under bridges or in tunnels.)