Hello. I am building a robot with an Arduino Mega (clone) that is meant to go through a maze that I will build. I would like to know if there is a way to let the Arduino 'know' where it has already been (without GPS, because that could be inaccurate and make it crash) so it can tell which direction to go. I have seen compass sensors on the internet; would one of them do anything to help me?
Wheel encoders will give you distance traveled from point to point. Combine that with steering information and as long as the wheels don't slip, you can calculate reasonably accurate 2D locations. This is called "dead reckoning".
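For a two-wheeled (differential-drive) robot, the dead-reckoning math fits in a few lines. Below is a minimal sketch for an Arduino Mega; the pin numbers and the wheel/encoder constants are assumptions you'd replace with your own measurements, and since a single-channel encoder can't sense direction, it assumes the robot only ever drives forward:

```cpp
// Minimal dead-reckoning sketch for a two-wheeled (differential
// drive) robot on an Arduino Mega. The pin numbers and the wheel /
// encoder constants below are assumptions to replace with your own
// measurements.

const float WHEEL_DIAMETER_CM = 6.5;   // measure your wheels
const float WHEEL_BASE_CM     = 15.0;  // distance between the wheels
const int   TICKS_PER_REV     = 20;    // slots on your encoder disc

volatile long leftTicks = 0, rightTicks = 0;
void leftISR()  { leftTicks++;  }
void rightISR() { rightTicks++; }

float x = 0, y = 0, theta = 0;   // robot pose: cm, cm, radians

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT_PULLUP);      // Mega external-interrupt pins
  pinMode(3, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(2), leftISR, RISING);
  attachInterrupt(digitalPinToInterrupt(3), rightISR, RISING);
}

void loop() {
  // Grab and reset the tick counts with interrupts paused
  noInterrupts();
  long l = leftTicks;  leftTicks  = 0;
  long r = rightTicks; rightTicks = 0;
  interrupts();

  const float cmPerTick = PI * WHEEL_DIAMETER_CM / TICKS_PER_REV;
  float dLeft  = l * cmPerTick;
  float dRight = r * cmPerTick;

  // Standard differential-drive odometry update
  float dCenter = (dLeft + dRight) / 2.0;
  theta += (dRight - dLeft) / WHEEL_BASE_CM;
  x += dCenter * cos(theta);
  y += dCenter * sin(theta);

  Serial.print("x: ");       Serial.print(x);
  Serial.print("  y: ");     Serial.print(y);
  Serial.print("  theta: "); Serial.println(theta);
  delay(100);
}
```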
Thank you, that is helpful. I hadn't thought of that. But I cannot find anywhere that sells just the encoders. All I can find are complete new wheels that have an encoder built into them or bundled with them. Do I have to have a particular type of wheel that goes specifically with that encoder?
You might also be able to create an acoustic version of a LORAN-style system, with one sensor on the robot and three more around the perimeter of the maze area. This would give you continuous absolute position data.
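If you try something like that, the position math itself is simple once you have three distances. (Strictly, LORAN works on time differences; the sketch below assumes the simpler case where you can measure a direct distance to each beacon, e.g. by timing a synchronized ping, and then trilaterate. The beacon coordinates and distances are invented for the demo.)

```cpp
// Hedged sketch: 2D trilateration from three beacons at known, fixed
// positions. Subtracting the three circle equations
// (x - xi)^2 + (y - yi)^2 = di^2 pairwise gives a 2x2 linear system,
// solved here with Cramer's rule.

struct Beacon   { float x, y; };
struct Position { float x, y; };

Position trilaterate(Beacon b1, float d1, Beacon b2, float d2,
                     Beacon b3, float d3) {
  float A = 2 * (b2.x - b1.x), B = 2 * (b2.y - b1.y);
  float C = d1 * d1 - d2 * d2 - b1.x * b1.x + b2.x * b2.x
                              - b1.y * b1.y + b2.y * b2.y;
  float D = 2 * (b3.x - b1.x), E = 2 * (b3.y - b1.y);
  float F = d1 * d1 - d3 * d3 - b1.x * b1.x + b3.x * b3.x
                              - b1.y * b1.y + b3.y * b3.y;
  float det = A * E - B * D;       // zero if the beacons are collinear
  Position p;
  p.x = (C * E - B * F) / det;
  p.y = (A * F - C * D) / det;
  return p;
}

void setup() {
  Serial.begin(9600);
  // Invented example: beacons at three corners of a 2 m square (cm),
  // distances as measured from a robot actually sitting at (60, 80).
  Beacon b1 = {0, 0}, b2 = {200, 0}, b3 = {0, 200};
  Position p = trilaterate(b1, 100.0, b2, 161.2, b3, 134.2);
  Serial.print("x = ");  Serial.print(p.x);     // ~60
  Serial.print("  y = "); Serial.println(p.y);  // ~80
}

void loop() {}
```

Note that sound covers roughly 34 cm per millisecond, so you would want to timestamp the pings with micros() rather than millis() to keep the range error down.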
SteamTooth72:
Thank you, that is helpful. I hadn't thought of that. But I cannot find anywhere that sells just the encoders. All I can find are complete new wheels that have an encoder built into them or bundled with them. Do I have to have a particular type of wheel that goes specifically with that encoder?
You can make your own encoders; a quick search will turn up many tutorials on the subject.
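For reference, reading a homemade encoder doesn't take much code. This sketch assumes a slotted disc on the wheel passing through an IR photo-interrupter (LED + phototransistor pair) whose output goes to an analog pin; the pin and threshold are placeholders to tune with your own sensor:

```cpp
// Reading a homemade slotted-disc encoder. Assumption: an IR
// photo-interrupter straddles a slotted disc on the wheel, with its
// output wired to analog pin A0. If your sensor has a digital /
// comparator output you could use digitalRead() instead; THRESHOLD
// is something you tune with the sensor in place.

const int ENCODER_PIN = A0;
const int THRESHOLD   = 512;

long ticks = 0;
bool lastState = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool state = analogRead(ENCODER_PIN) > THRESHOLD;
  if (state && !lastState) {   // rising edge = one slot just passed
    ticks++;
    Serial.println(ticks);
  }
  lastState = state;
}
```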
That said - dead reckoning will only get you so far. You can make it more accurate, but you will -never- get complete accuracy; wheel slip and sensor noise (and lack of accuracy) will always introduce error into the system. In short, you can never know -exactly- where you are located, unless you are in an extremely controlled environment.
For instance, there was once a company that made a pen-and-paper technology that could tell where the pen was on a sheet of paper. This allowed for a pen that knew its own position, which enabled all sorts of additional "smart" functionality (games, education, digitizing of data, etc). To do this, it used a small camera (similar to the one in an optical mouse) and a specially printed piece of paper. The paper carried small dots in a pattern that varied but was unique to each location; the camera could read the pattern and know exactly where the pen was on the page. Not only that, but according to the documentation (and patents) for the system, the dot pattern was unique enough that you could print it on enough paper to cover the surface of the earth and every spot would still be distinct - so, in theory, the system knew not only where on the page it was, but -which- page it was on!
That is a "controlled environment".
While you could do this with your robot, it wouldn't be very easy outside of a specially constructed "lab-like" environment. Even then, there would likely be issues with positioning (which I am sure still existed with the pen device I described above).
So, how do you fix this problem? The simple answer is: you don't. Instead, you work with it. Since you can't know exactly where you are located at any one time, you instead ask "where am I probably located?".

You do this by first using a set of sensors to measure distances to objects in the room (an ultrasonic sensor or several, a 2D LIDAR unit, or even bump sensors would all work). As you move and measure, you build up a "map" of your surroundings. This map is only a rough estimate and will never be perfectly accurate, but you don't care. After each move, you measure again and compare, then compute a probability of where you are based on your current readings versus your prior readings. You'll never get to "100%", but with enough processing power you can push the confidence into the high ninety percents. That is generally "good enough": if your position estimate is off by 7-15 cm, it doesn't likely matter much. (Add in some bump sensors and integrate that data into your measurements - all sensor data should be integrated anyhow - and it will make the probability calculations better.)
This (albeit simplistic) explanation has a name: Simultaneous Localization and Mapping... aka, SLAM.
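To make "where am I probably located?" concrete, here is a toy one-dimensional histogram (discrete Bayes) filter in plain C++ that you can run on a PC. The corridor map, the sensor accuracy, and the motion-slip numbers are all invented for illustration:

```cpp
// Toy 1D histogram (discrete Bayes) filter. The world is a row of
// cells that are either walls (1) or openings (0), which the robot
// senses imperfectly. belief[i] holds P(robot is in cell i).

#include <stdio.h>

const int N = 8;
int worldMap[N] = {1, 0, 0, 1, 0, 1, 1, 0};   // known map of the corridor
double belief[N];

const double P_SENSE_CORRECT = 0.9;   // sensor is right 90% of the time

// Measurement update: cells matching the reading gain probability.
void sense(int reading) {
  double total = 0;
  for (int i = 0; i < N; i++) {
    belief[i] *= (worldMap[i] == reading) ? P_SENSE_CORRECT
                                          : 1.0 - P_SENSE_CORRECT;
    total += belief[i];
  }
  for (int i = 0; i < N; i++) belief[i] /= total;   // normalize
}

// Motion update: shift belief one cell right, with some slip chance.
void moveRight() {
  double moved[N];
  for (int i = 0; i < N; i++)
    moved[i] = 0.8 * belief[(i - 1 + N) % N]   // moved as commanded
             + 0.2 * belief[i];                // wheels slipped, stayed
  for (int i = 0; i < N; i++) belief[i] = moved[i];
}

int main() {
  for (int i = 0; i < N; i++) belief[i] = 1.0 / N;  // no idea where we are

  sense(1);       // robot sees a wall
  moveRight();    // robot drives one cell
  sense(0);       // robot sees an opening

  for (int i = 0; i < N; i++)
    printf("cell %d: %.3f\n", i, belief[i]);
  return 0;
}
```

The real thing extends this to two dimensions, adds the map-building half of the problem, and usually swaps the histogram for a particle or Kalman filter, but the sense/move/normalize loop at the core is the same.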
If you want to understand how this works in practice, in a manner conducive to learning it and that is somewhat hands-on, then I suggest you try out Udacity's "Artificial Intelligence for Robotics" MOOC:
Professor Sebastian Thrun is one of the world's leading experts on machine learning and artificial intelligence, and is (was?) behind much of the technology used by Google's self-driving vehicle system. You can't get a better instructor for such a course (his peers Peter Norvig and Andrew Ng - well, those three teaching an AI course together would be a "to die for" experience; seriously, these guys know their stuff, and they are great instructors to boot).
Note, though, that the above course is -not- for the faint-hearted: to be successful in the coursework you will want some familiarity with linear algebra and basic probability/statistics, and to be able to code in Python at a basic level.
But if you come out the other side of it, you will have a much better understanding of how SLAM - among other interesting topics - actually works.
If you're running a "maze", you might look at the MicroMouse competitions. I believe they mainly use wheel encoders. The basic strategy is to break the maze up into a grid of numbered cells, then keep track of the robot's location by counting cells up and down as it traverses them; a sketch of that bookkeeping follows.
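Here is a rough sketch of that cell bookkeeping, assuming a square maze and a robot that moves in whole-cell steps. The maze size and the "one cell traveled" trigger are assumptions; real MicroMouse solvers typically layer a flood-fill algorithm on top of exactly this kind of state:

```cpp
// Grid bookkeeping for a maze robot: the maze is an N x N grid of
// cells, and the robot maintains its current cell plus a compass
// heading, updating both as it turns and crosses cells. The
// visited[][] array is what lets it "know" where it has been.

const int MAZE_SIZE = 8;
bool visited[MAZE_SIZE][MAZE_SIZE];   // globals start out all false

int cellX = 0, cellY = 0;   // current cell, starting in a corner
int heading = 0;            // 0 = north, 1 = east, 2 = south, 3 = west

void turnLeft()  { heading = (heading + 3) % 4; }
void turnRight() { heading = (heading + 1) % 4; }

// Call this when the wheel encoders report one full cell length
// traveled (e.g. ticks >= TICKS_PER_CELL in your drive code).
void advanceOneCell() {
  if      (heading == 0) cellY++;
  else if (heading == 1) cellX++;
  else if (heading == 2) cellY--;
  else                   cellX--;
  visited[cellX][cellY] = true;
}

// Has the cell directly ahead been visited already (or is it off
// the edge of the maze)?
bool aheadAlreadyVisited() {
  int nx = cellX, ny = cellY;
  if      (heading == 0) ny++;
  else if (heading == 1) nx++;
  else if (heading == 2) ny--;
  else                   nx--;
  if (nx < 0 || ny < 0 || nx >= MAZE_SIZE || ny >= MAZE_SIZE) return true;
  return visited[nx][ny];
}

void setup() { visited[cellX][cellY] = true; }   // mark the start cell
void loop()  { /* your drive logic calls the helpers above */ }
```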