
Topic: Indoor mapping for a Robotic 1/10 scale RC car

kenjones1935

I have built an Arduino Nano-controlled robotic 1/10-scale radio-control model car. I inserted the Nano between the radio receiver and the steering and drive-wheel controls. The sensors are ultrasonic sonars pointed forward and to the right. Here is a video of the car racing around an empty room.

http://www.youtube.com/watch?v=ZGbzyg2eEcc

The C code is a simple state machine. It has control-loop bandwidth issues. I would like to add mapping to my system. I believe I need to add some kind of sensor that lets me calculate relative velocity; a rough sketch of what I mean is below. Does my Nano have enough program memory?
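To be concrete, the sort of velocity sensing I have in mind is a single wheel encoder read in an interrupt. This is only an illustrative sketch; the pin assignment, slot count, and wheel circumference below are assumptions, not my car's actual hardware.

// Sketch of wheel-encoder velocity estimation (assumed hardware: one
// slotted-disk encoder on interrupt pin 2, 20 slots, 0.33 m wheel circumference).
#include <Arduino.h>

const byte ENCODER_PIN = 2;            // assumption: encoder output wired to D2 (INT0)
const float TICKS_PER_REV = 20.0;      // assumption: 20 slots on the disk
const float WHEEL_CIRCUM_M = 0.33;     // assumption: 1/10-scale wheel circumference

volatile unsigned long tickCount = 0;

void onTick() { tickCount++; }         // ISR: count encoder edges

void setup() {
  Serial.begin(115200);
  pinMode(ENCODER_PIN, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENCODER_PIN), onTick, RISING);
}

void loop() {
  static unsigned long lastMs = 0;
  static unsigned long lastTicks = 0;
  unsigned long now = millis();
  if (now - lastMs >= 100) {                       // estimate speed every 100 ms
    noInterrupts();
    unsigned long ticks = tickCount;               // copy the ISR counter safely
    interrupts();
    float revs = (ticks - lastTicks) / TICKS_PER_REV;
    float speedMs = revs * WHEEL_CIRCUM_M * 1000.0 / (now - lastMs);
    Serial.println(speedMs);                       // metres per second
    lastTicks = ticks;
    lastMs = now;
  }
}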

I am a retired digital communications software engineer.  I need some references and maybe a connection with someone who has already done this.  The code for my present car can be found at:

http://www.employees.org/~kjones/Robocar%20Course%20Curriculu1.doc

zoomkat

Interesting that the car seems to hit the same two spots on each lap. One looks like a piece of cardboard (with what appears to be an overhang possibly confusing the ultrasonics), and the other is the flat wall just after it. Is the floor slicker in those two areas?

keeper63

Program memory, maybe; RAM for the map, unlikely. Of course, you didn't say whether you plan to keep the map on-board or not. A back-of-the-envelope check is below.
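For a rough sense of scale (my own assumptions: a 5 m x 5 m room, 5 cm grid cells, one byte per cell):

// Back-of-envelope RAM check for an on-board occupancy grid
// (assumed map: 5 m x 5 m room, 5 cm cells, 1 byte per cell -- my numbers, not yours).
#define GRID_W 100
#define GRID_H 100
unsigned char occupancy[GRID_W][GRID_H];  // 100 * 100 = 10,000 bytes
// The ATmega328 on a Nano has 2,048 bytes of SRAM, so even this modest grid
// cannot live on the Nano; a Due (96 KB) or a Raspberry Pi holds it easily.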

If you really want to understand how to do this kind of thing - you might check out and take the Udacity CS373 course:

https://www.udacity.com/course/cs373

...though I must say that for a fast indoor robot, ultrasonic sensors probably won't cut it; you may be surprised, though, once you learn the techniques from the above course. If you want to stick with an indoor system, going with vision processing or grafting the LIDAR sensor from a Neato vacuum robot onto the top of your robot might be a better option.

As far as the controller is concerned, if you want to keep mapping and processing on-board, you are likely going to need something much more powerful: a Due, a Raspberry Pi, or some other similar high-power controller. Some of the work you could still offload to the Nano you already have (basically a subsumption architecture or something of that nature, where the Nano handles the low-level control and messaging duties while the larger controller does the calculations and decision making).

If you want to go off-board, you will probably still need a larger Arduino (something with more RAM and program space) to run a two-way wireless link alongside the sensors you already have, which will take more memory than the Nano can spare. You could do the subsumption-architecture thing here, too; it would just be triple-layered, with an external PC doing the processing and mapping, a larger middle-layer controller handling communications and pre-processing, and the Nano acting as a simple low-level controller. A very rough sketch of the Nano's end of such a setup follows.
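Just to make the layering concrete, the Nano's end might look something like this. Everything here is an assumption on my part: the sonar and servo pin numbers and the throwaway "R.../S..." message format are placeholders, not anything from Ken's car.

// Rough sketch of the Nano as the low-level layer: report sonar ranges upstream,
// accept "S<steer>,<throttle>" command lines from the bigger controller.
#include <Servo.h>

const byte TRIG_FWD = 4, ECHO_FWD = 5;   // assumed wiring, forward sonar
const byte TRIG_RGT = 6, ECHO_RGT = 7;   // assumed wiring, right sonar

Servo steering, throttle;

long readCm(byte trig, byte echo) {
  digitalWrite(trig, LOW);  delayMicroseconds(2);
  digitalWrite(trig, HIGH); delayMicroseconds(10);
  digitalWrite(trig, LOW);
  long us = pulseIn(echo, HIGH, 25000);  // 25 ms timeout (roughly 4 m of range)
  return us / 58;                        // microseconds to centimetres
}

void setup() {
  Serial.begin(115200);
  pinMode(TRIG_FWD, OUTPUT); pinMode(ECHO_FWD, INPUT);
  pinMode(TRIG_RGT, OUTPUT); pinMode(ECHO_RGT, INPUT);
  steering.attach(9);                    // assumed servo/ESC pins
  throttle.attach(10);
}

void loop() {
  // Upstream: send the two ranges every cycle.
  Serial.print("R"); Serial.print(readCm(TRIG_FWD, ECHO_FWD));
  Serial.print(","); Serial.println(readCm(TRIG_RGT, ECHO_RGT));

  // Downstream: apply the latest steering/throttle command, if any.
  if (Serial.available() && Serial.read() == 'S') {
    int steer = Serial.parseInt();       // pulse widths in the 1000..2000 us range
    int drive = Serial.parseInt();
    steering.writeMicroseconds(constrain(steer, 1000, 2000));
    throttle.writeMicroseconds(constrain(drive, 1000, 2000));
  }
  delay(50);
}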

If you do go with some form of vision processing as a mapping sensor, you will likely need to offload that to a PC (or a very powerful on-board controller; a Raspberry Pi or some other ARM-based board running Linux or Android would be best). While there are some simple vision/video processing boards and software (such as the Nootropic Design Video Experimenter shield), they may not be well suited to your needs.

kenjones1935

The car hits some walls more than others. I discovered that different wall materials affect the car's behavior; their sound-reflection characteristics differ.

The purpose of these 'toys' is to entice 7th-12th grade Science, Technology, Engineering, and Math (STEM) teachers and students to build and race them. For some reason my how-to, "Robocar Course Curriculu1.doc", is not uploading correctly to www.employees.org today. The whole build amounts to Velcroing a small solderless prototyping board to the top of the car; I provide a wiring list and a schematic. I use the original 7.2 V battery, and the car has a voltage-regulating circuit that brings that down to 5 V and suppresses the spikes caused by driving the DC motor.

Here's a picture.
https://picasaweb.google.com/kennjones1935/NikonTransfer2?authkey=Gv1sRgCMX2p-_i5J6Fdw#5853846271760415986

kenjones1935

Sorry Gang,

I had used up my disk-space allocation on the employees.org server. Their admin gave me more this afternoon. If you want to see my writing on this STEM education subject, as well as a pointer to the C code that drives the car, download:

http://www.employees.org/~kjones/Robocar%20Course%20Curriculu1.doc

Ken

kenjones1935

I would like to construct a Robocar that learns its environment (race track) by being shown an ideal one-lap path under the control of the radio transmitter and then, when the transmitter is turned OFF, 'races' on its own. (My car already gives up autonomous control when the transmitter is turned ON; it samples and then passes on the steering and drive-wheel PWM signals.) I am asking for advice on how to create such a map and then follow it. Do any of you know where I could find a flow diagram or a code structure that would accomplish this? What sensors would be needed instead of, or in addition to, my two ultrasonic sonars?
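The simplest scheme I can picture is to sample the transmitter's steering and throttle pulse widths at a fixed rate during the taught lap and then replay them when the transmitter goes quiet, with the sonars correcting the inevitable drift. The sketch below is only that, a sketch; the receiver pins, servo pins, 10 Hz sample rate, and lap length are all guesses, not my actual code.

// Record-and-replay sketch of the "taught lap" idea.  Receiver-channel pins,
// servo pins, the 10 Hz sample rate, and the lap length are all assumptions.
#include <Servo.h>

const byte RX_STEER_PIN = 2, RX_DRIVE_PIN = 3;   // assumed receiver outputs
Servo steering, throttle;

const int MAX_SAMPLES = 200;                     // 20 s lap at 10 Hz
unsigned int steerUs[MAX_SAMPLES];               // 2 x 200 x 2 bytes = 800 bytes,
unsigned int driveUs[MAX_SAMPLES];               // a big bite out of the Nano's 2 KB SRAM
int recorded = 0;

void setup() {
  pinMode(RX_STEER_PIN, INPUT);
  pinMode(RX_DRIVE_PIN, INPUT);
  steering.attach(9);                            // assumed servo/ESC pins
  throttle.attach(10);
}

void loop() {
  unsigned long s = pulseIn(RX_STEER_PIN, HIGH, 30000UL);    // 0 if no transmitter pulse
  if (s > 0) {                                   // teaching pass: sample and pass through
    unsigned long d = pulseIn(RX_DRIVE_PIN, HIGH, 30000UL);
    steering.writeMicroseconds(s);
    throttle.writeMicroseconds(d);
    if (recorded < MAX_SAMPLES) { steerUs[recorded] = s; driveUs[recorded] = d; recorded++; }
    delay(100);                                  // 10 Hz sampling
  } else {                                       // transmitter off: replay the taught lap
    for (int i = 0; i < recorded; i++) {
      steering.writeMicroseconds(steerUs[i]);
      throttle.writeMicroseconds(driveUs[i]);
      // Open-loop replay drifts lap after lap; the right-hand sonar would have
      // to nudge the steering back toward the distances seen during teaching.
      delay(100);
    }
  }
}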

zoomkat

How is the car to know where it is on the track?

keeper63


Quote from: kenjones1935
I would like to construct a Robocar that learns its environment (race track) by being shown an ideal one-lap path under the control of the radio transmitter and then, when the transmitter is turned OFF, 'races' on its own. ... I am asking for advice on how to create such a map and then follow it.


See the link I posted above.

One of the segments of the Udacity CS373 course was to implement a simulation of a "car" driving around an oval "racetrack"; it was all done in Python, and along the way you learn the hows and whys of PID controllers. Seriously, everything you want to know and do is covered in that course (or at least it makes a great base to expand on; the course ultimately culminates in a simple form of SLAM, with the caveat that SLAM is by no means a solved problem).
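If it helps to see that lesson in Arduino terms rather than Python, the core idea boils down to a few lines like these. The gains, the 30 cm wall-distance target, and the servo-offset output are made-up examples of mine, not anything from the course or from Ken's car.

// Minimal PID steering correction in the spirit of the CS373 lesson.  The gains
// and the 30 cm wall-distance target are made-up examples, not course values.
float kp = 2.0, ki = 0.0, kd = 8.0;
float targetCm = 30.0;                 // desired distance from the right-hand wall
float integral = 0.0, lastError = 0.0;

// Call at a fixed rate with the right sonar reading; returns a steering offset
// in microseconds to add to the neutral (1500 us) servo pulse.
int pidSteer(float rightCm, float dtSeconds) {
  float error = rightCm - targetCm;
  integral += error * dtSeconds;                        // I term: accumulated bias
  float derivative = (error - lastError) / dtSeconds;   // D term: damps oscillation
  lastError = error;
  float correction = kp * error + ki * integral + kd * derivative;
  if (correction >  400.0) correction =  400.0;         // keep within servo travel
  if (correction < -400.0) correction = -400.0;
  return (int)correction;
}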

You're wanting to build essentially a "self-driving vehicle" - so why wouldn't you invest the time (or have your students invest the time) to learn from the guy who not only developed Google's self-driving car, but also led the Stanford team that won DARPA's 2005 Grand Challenge and took second in the 2007 Urban Challenge? He's giving away this course - take advantage of it!

kenjones1935

Thank you for your encouragement.  I have decided after reading a dozen pages from

http://robots.stanford.edu/papers/thrun.mapping-tr.pdf

that the system required to know the location and velocity of my Robocar at any given moment is beyond the capabilities of our 7th-12th grade public-school STEM students.

As you can see in the video above, my self-guided Robocar goes pretty fast. The code is a state machine driven by moment-by-moment readings from the forward-pointing and right-pointing sonars. The kids can manipulate the five parameters in the C code that dictate at what distances, and in which state, the car should turn, brake, reverse, or go fast.
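For anyone curious, the skeleton the kids tinker with looks roughly like this. The parameter names and numbers here are illustrative stand-ins; the real ones are in the course .doc linked above.

// Illustrative skeleton of the sonar-driven state machine (parameter names and
// values are stand-ins; the real ones are in the course .doc linked above).
const int TURN_DIST_CM    = 90;    // start turning when a wall appears ahead
const int BRAKE_DIST_CM   = 50;    // brake hard if the wall is this close
const int REVERSE_DIST_CM = 25;    // back up if a collision is imminent
const int FAST_PWM        = 1700;  // throttle pulse width on open straights
const int CRUISE_PWM      = 1600;  // throttle pulse width while turning

enum State { GO_FAST, TURNING, BRAKING, REVERSING };
State state = GO_FAST;

// Called every loop with the latest forward sonar reading; the right-hand
// sonar feeds a similar wall-following rule that is omitted here.
void stepStateMachine(int frontCm) {
  switch (state) {
    case GO_FAST:
      if (frontCm < TURN_DIST_CM) state = TURNING;
      break;
    case TURNING:
      if (frontCm < BRAKE_DIST_CM) state = BRAKING;
      else if (frontCm > TURN_DIST_CM) state = GO_FAST;
      break;
    case BRAKING:
      if (frontCm < REVERSE_DIST_CM) state = REVERSING;
      else if (frontCm > BRAKE_DIST_CM) state = TURNING;
      break;
    case REVERSING:
      if (frontCm > BRAKE_DIST_CM) state = GO_FAST;
      break;
  }
  // Each state then sets its own steering angle and throttle (FAST_PWM, CRUISE_PWM, ...).
}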

Thanks again!

Ken

keeper63


Quote from: kenjones1935
I have decided after reading a dozen pages from http://robots.stanford.edu/papers/thrun.mapping-tr.pdf that the system required to know the location and velocity of my Robocar at any given moment is beyond the capabilities of our 7th-12th grade public-school STEM students.


I agree that it is quite advanced; still, you might mention it in passing to your students - there may be one or two in your classes who have the desire to tackle the course. I know that when I went through it (as well as other courses like it, in 2011), there were more than a few high schoolers taking the classes. I don't know how well they did, or whether they struggled; but even if they did, it probably primed them for finding out more, and maybe taught them some new things as well. Perhaps the same could happen with your students?

Really, the most difficult thing about the course (in my view) was understanding probabilities/statistics and how they work in such systems as the course described. I found that I was actually lacking in those areas (and it's been a couple of decades since I was in school of any sort). The linear algebra needed, though, shouldn't be beyond the capabilities of some of your students, I would think (I do recall having been taught such math when I was in high school - it was something I actually enjoyed learning, as I could apply it to the 3D graphics programming I was doing at home on my 8-bit machine - heh).
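To give a flavor of the probability side, the course's first unit boils down to something like the toy 1-D histogram filter below. The five-cell "corridor", the 0.9 sensor accuracy, and the exact-motion model are my own simplifications for illustration, not the course code.

// Toy 1-D histogram ("Bayes") filter, the idea behind the course's first unit.
// The 5-cell corridor, 0.9 sensor accuracy, and exact-motion model are my own
// simplifications for illustration.
#include <stdio.h>

#define N 5
double belief[N] = {0.2, 0.2, 0.2, 0.2, 0.2};   // start fully uncertain
int world[N]     = {1, 0, 0, 1, 0};             // 1 = wall seen on the right, 0 = open

// Measurement update: weight cells that match the sonar reading, then normalize.
void sense(int reading) {
  double total = 0.0;
  for (int i = 0; i < N; i++) {
    belief[i] *= (world[i] == reading) ? 0.9 : 0.1;
    total += belief[i];
  }
  for (int i = 0; i < N; i++) belief[i] /= total;
}

// Motion update: shift the belief one cell forward (circular track, exact motion).
void move1(void) {
  double shifted[N];
  for (int i = 0; i < N; i++) shifted[(i + 1) % N] = belief[i];
  for (int i = 0; i < N; i++) belief[i] = shifted[i];
}

int main(void) {
  sense(1); move1(); sense(0);                  // saw a wall, moved, saw open track
  for (int i = 0; i < N; i++) printf("cell %d: %.3f\n", i, belief[i]);
  return 0;
}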

At the very least, they would come away understanding that there are some fairly complex things going on to give a robot vehicle the ability to know where it is in an unknown environment, and that it can build up an understanding and knowledge base of that environment without needing to be explicitly programmed with that knowledge. That alone can lead to interesting thoughts and discussion, I would think...
