Maze mapping by robot

Hello everyone,

I need a little bit of guidance with my project, in which my robot goes through the whole maze. But I don't know how to calculate the movement of the robot into x and y coordinates.

Thank you in advance.

Google "MicroMouse"; this is a whole culture on its own.

Do you have a "robot" already? Since you say you want to calculate the movement of the robot...

Bringamosa:
Do you have a "robot" already?

Seems to me s/he does:

Thalorn:
... my robot goes through the whole maze ...

("goes" as opposed to "is supposed to go" :wink: )

So perhaps s/he wants the robot to record the intersections and walls, so that later on it can zoom straight to a point x,y since it already knows the intersections and walls and can think ahead as to how to get there.

I already have the robot and I want to draw a map of the maze on screen. I'm getting the distances to the walls from ultrasonic sensors, but I don't know how to transfer the movement of the robot into 2D.

Be sure to not give us any details at all.

We love guessing games, and long, protracted threads trying to tease information out of you.

Thalorn:
I already have the robot and I want to draw a map of the maze on screen.

Does that mean your robot can solve a maze already, perhaps with the left hand rule?

(Then I'm wondering what strategy is required to map (as opposed to solve) a maze, since solving (getting from entrance to exit or the centre) doesn't necessarily take you everywhere in the maze, and the map would at best be of the part you covered.)

Ok then

The robot is built around a Mega 2560 with a built-in ESP8266, the motors are controlled by an L293D motor shield, and the distances to the walls of the maze are measured by an HC-SR04 ultrasonic sensor. My goal is to send the robot through the whole maze and draw a map of the maze on the computer screen. The robot sends the coordinates of the walls to a database, from which a program written in C++ (Qt Widgets, to be precise) draws the map. But I don't know how to transfer the movement of the robot to x and y coordinates.

Not sure why you bother with a Mega in the robot when there is another, more powerful processor available: the ESP.

Post links to the components you are using (links that can be clicked), pictures of the robot and the maze, schematics, and all the code (the Arduino code and the C++ code on the PC, in code tags).

Thalorn:
But I don't know how to transfer the movement of the robot to x and y coordinates.

Nor do I, so I can't help there*, but I'm wondering what your strategy is for doing this:

Thalorn:
send the robot through the whole maze

  • I reckon, as a minimum, encoders on the wheels to measure distance travelled, and some kind of compass to tell direction. Then at each intersection do a 360° scan to determine where the exits from that intersection are?

PaulRB:
Not sure why you bother with a Mega in the robot when there is another, more powerful processor available: the ESP.

My bet is that the ESP here is the ESP8266-01 module, just there to add WiFi. Not much other use with only 2 GPIO pins available.

The real question here is: what's the mapping strategy? No amount of coding will work if there's no strategy. So, OP, how would you do this manually? If you were to walk around a maze, or let's say your house or office, how would you go about calling out what you see to your buddy acting as scribe? What would you record at each step along your journey?

How would you know you had visited every possible node?

Then, assuming that leads to a map, how would you use that map manually, on foot, to get to point x,y from the entrance?

Don't put the coding cart before the design horse.

Bringamosa:
Not much other use with only 2 GPIO pins available

That's all you need for an I2C bus...

But the OP did say it was built into the Mega, so the ESP may not even be programmable. A strange beast.

neiklot:
encoders on the wheels to measure distance travelled, and some kind of compass to tell direction

Not sure I would. If the maze is built on a grid, like the MicroMouse mazes, then using an approximate speed for the robot, you can estimate the position by timing. Changes in the distance sensor readings can then be used to recalibrate the position and speed. And compass direction wouldn't be needed either (and probably won't work indoors anyway) because the robot can only move in 4 directions. You can decide to call the initial direction "north", and when the robot turns by 90 degrees, you can call that "east" or "west" and so on.
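As a sketch of what that bookkeeping could look like (the names below are mine, not from any existing code): keep a heading index 0–3 that you bump on every 90-degree turn, and a cell coordinate that you bump whenever the robot decides it has advanced one grid square, whether by timing, encoders or a change in the distance readings.

[code]
// Sketch of "nominal north" bookkeeping on a grid maze. All names are placeholders.
// heading: 0 = "north" (the direction faced at the start), 1 = east, 2 = south, 3 = west
int heading = 0;
int cellX = 0;   // current grid cell; the start cell is (0, 0)
int cellY = 0;

void turnedRight() { heading = (heading + 1) % 4; }   // call after a 90-degree right turn
void turnedLeft()  { heading = (heading + 3) % 4; }   // call after a 90-degree left turn

// Call whenever the robot decides it has advanced one grid square.
void movedOneCell()
{
  switch (heading) {
    case 0: cellY++; break;   // "north"
    case 1: cellX++; break;   // "east"
    case 2: cellY--; break;   // "south"
    case 3: cellX--; break;   // "west"
  }
}
[/code]

When the map is drawn on the PC, the cell coordinates only need to be multiplied by the grid pitch to turn them into distances.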

PaulRB:
when the robot turns by 90 degrees,

Yeah, I shouldn't have used the word "compass"; I meant some means of telling it has turned 90 degrees, and of keeping track of a nominal "north".

In fact I was reluctant to answer the question that led to me mentioning encoders and compasses, since my real point here is that any talk of hardware and software to do this, in the absence of the OP coming up with some kind of strategy (or approach, philosophy, paradigm, call it what you will), is crazy.

If I was teaching a course about this, just as a for instance, a team's first "hand-in" would be some kind of written description of how they would walk around their house and map it....

Well, my initial thought was to go through the maze and at every crossroads remember my position and which direction I will go, decided for example by the left hand rule, and if I encounter a dead end, simply turn around, go back to the nearest crossroads and choose a different path. Dunno if this would work, because I don't know how to tell the crossroads apart from each other.
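By the left hand rule I mean roughly this decision at every crossroads. Just a sketch; the 20 cm "open" threshold is a guess, and the left/front/right readings would come from however I end up scanning the walls:

[code]
// Rough left-hand-rule decision from three distance readings in cm.
// A reading larger than OPEN_CM is treated as "no wall there".
const float OPEN_CM = 20.0;   // placeholder threshold, to be tuned for the maze

enum Move { TURN_LEFT, GO_STRAIGHT, TURN_RIGHT, TURN_AROUND };

Move chooseMove(float leftCm, float frontCm, float rightCm)
{
  if (leftCm  > OPEN_CM) return TURN_LEFT;    // always take the leftmost opening
  if (frontCm > OPEN_CM) return GO_STRAIGHT;
  if (rightCm > OPEN_CM) return TURN_RIGHT;
  return TURN_AROUND;                         // dead end
}
[/code]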

But for now I would rather figure out how to transfer the movement of the robot to Cartesian coordinates. For example, in the sketch of the test piece the robot goes through the corridor and sends the coordinates of the walls, but I don't know how to get the robot's position.

Thalorn:
but I don't know how to get the robot's position.

This has been mentioned at least twice in the various replies.
One proposed method is to put encoders on the wheels so that you can measure how much each has turned.
Another proposed method is to time your movement and when your sensors detect a wall, use the time of travel to calculate the position of the wall. This will work best if you know that the walls will be set on a regular grid.
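As a very rough sketch of the encoder variant (every constant and name below is a placeholder, not taken from your code): convert ticks to centimetres, project the distance onto x and y using the current heading, and a detected wall is then simply the robot's position plus the measured distance in whatever direction that sensor points.

[code]
// Sketch: wheel encoders + a heading -> robot x,y, and from that the
// coordinates of a detected wall. All constants are placeholders.
const float WHEEL_DIAMETER_CM = 6.5;      // measure your own wheel
const float TICKS_PER_REV     = 20.0;     // slots on your encoder disc
const float CM_PER_TICK       = (PI * WHEEL_DIAMETER_CM) / TICKS_PER_REV;

float robotX = 0.0, robotY = 0.0;   // the start of the maze is the origin
float headingRad = 0.0;             // clockwise from the direction faced at the start

// Call after each encoder reading; deltaTicks is the average of the left
// and right wheel tick counts since the previous call.
void updatePosition(long deltaTicks, float currentHeadingRad)
{
  headingRad = currentHeadingRad;
  float d = deltaTicks * CM_PER_TICK;
  robotX += d * sin(headingRad);    // "east" component
  robotY += d * cos(headingRad);    // "north" component
}

// A wall detected at distanceCm by a sensor pointing at sensorAngleRad
// relative to the robot's heading (0 = forward, +PI/2 = right, -PI/2 = left).
void wallPosition(float distanceCm, float sensorAngleRad,
                  float &wallX, float &wallY)
{
  float a = headingRad + sensorAngleRad;
  wallX = robotX + distanceCm * sin(a);
  wallY = robotY + distanceCm * cos(a);
}
[/code]

In a grid maze the heading will only ever be a multiple of 90 degrees, so the sines and cosines collapse to -1, 0 or 1, but the same formula also copes with a robot that doesn't drive perfectly straight.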

How many distance sensors does the robot have? Just one, or three, perhaps even four?

In the MicroMouse maze, the robot is normally placed in a dead end in one corner of the maze, with its back to a wall, facing along a corridor. Then a button is pressed on the robot to tell it that it is in the corner of the maze facing "north". It's not true north, just a nominal "north". The robot can then call that position X=0, Y=0. It then starts its motors running to move forwards, which is "north". After perhaps 1.5 seconds, it may have travelled to the centre of the next grid position in the maze. If the robot has forward- or rear-facing distance sensors which are in range of a wall, it can estimate its speed and distance travelled as it moves.

In your picture, the robot can use its forward distance sensor to know when to stop and turn, but also to estimate its speed as it approaches the wall. After turning, it could use its rear-facing distance sensor to measure how far it has moved away from the wall behind it. In a long corridor, all it can do is assume it is travelling at a constant speed, based on its last estimated speed from when the forward- or rear-facing distance sensor could still detect something.
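As a rough illustration of that last point (the names and the 200 cm usable range are placeholders, not your code): feed each new front-sensor reading into something like this and keep the last good speed estimate for the stretches where nothing is in range.

[code]
// Sketch: estimating speed from two successive front-distance readings.
float lastDistanceCm = -1;      // previous reading, -1 = none yet
unsigned long lastReadMs = 0;   // time of the previous reading
float speedCmPerS = 0;          // last good estimate, reused in "blind" stretches

void updateSpeedEstimate(float frontDistanceCm)
{
  unsigned long now = millis();

  if (frontDistanceCm > 0 && frontDistanceCm < 200 &&   // wall in usable range
      lastDistanceCm > 0 && now > lastReadMs) {
    float dt = (now - lastReadMs) / 1000.0;              // seconds between readings
    // Driving towards the wall makes the reading shrink, so the distance
    // covered since the last reading is lastDistanceCm - frontDistanceCm.
    speedCmPerS = (lastDistanceCm - frontDistanceCm) / dt;
  }
  lastDistanceCm = frontDistanceCm;
  lastReadMs = now;
}
[/code]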