Solar Powered Coke Fetching Robot - Advice Please

Hey guys,

I am considering a (rather ambitious for me) project that, as you might have guessed from the title, will ultimately produce a robot that can, at the push of a button, fetch a coke from the fridge and bring it back to its starting point. I think I have a good basic sense of what needs to be done, and of the tools available, to get it to the fridge and open the door, and I think I can even work out how to program it to autonomously go to the same sunny spot and recharge. How on earth I'm going to get it to recognize and grab a coke is beyond me. The easy approach is to always have the coke in the same place, so it's an automated routine with no variables - but I'd like to see if there's a way to get it to recognize and grab a coke from, say, anywhere on the bottom shelf.

Let me be clear: all of this is way beyond my current skill level (hence the challenge of producing a robot that can get it done), so I really need some community input on the matter. This project represents a dramatic climb up the learning curve for me, and I could use help planning the parts and materials. I don't "know" how to do any of this off the top of my head, but I figure we'll take it one mini-project at a time.

I'm giving myself 2 years to get this done and I don't think cost will be prohibitive.

Anyone willing to help me avoid early mistakes via their experience is more than welcome to chime in.

Thanks.

but I'd like to see if there's a way to get it to recognize and grab a coke from, say, anywhere on the bottom shelf.

What does the robot need to be able to distinguish the coke from? Beer? Pepsi? Lettuce? Ketchup?

Distinguishing a coke from a beer is not as difficult as distinguishing a coke from a Pepsi, since beer cans are not the same size as soda cans. All soda cans are the same size, though.

How on earth I'm going to get it to recognize and grab a coke is beyond me.

Well, some type of image recognition would be a place to start. Google is your friend for studying facial recognition and other types of object recognition. You might look up RoboRealm for some software to start with; there is also some open-source software. Most of this is PC-based and well beyond the Arduino world.

PaulS:

but I'd like to see if there's a way to get it to recognize and grab a coke from, say, anywhere on the bottom shelf.

What does the robot need to be able to distinguish the coke from? Beer? Pepsi? Lettuce? Ketchup?

Distinguishing a coke from a beer is not as difficult as distinguishing a coke from a Pepsi, since beer cans are not the same size as soda cans. All soda cans are the same size, though.

At this point I think I just need it to have enough torque to open the refrigerator door, and then I'll need it to properly position a grasper arm to find and grab a coke (always) from the bottom shelf. Recognizing a coke from a pepsi (which I think I can probably do with some sort of color recognition) is beyond the scope of version 1.0.
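Since color recognition came up, here is a toy sketch of the idea in Python: plain lists of (r, g, b) tuples stand in for a patch of camera pixels, and a simple majority vote decides the label. The thresholds, labels, and vote rule are illustrative guesses, not measured can colors.

```python
# Toy color-vote classifier for the "coke vs pepsi by color" idea.
# A real system would crop the pixel patch out of a camera frame;
# here it is just a list of (r, g, b) tuples.

def classify_can(pixels):
    """Return 'coke' if mostly-red pixels dominate, 'pepsi' if mostly-blue."""
    if not pixels:
        return "unknown"
    red_votes = sum(1 for r, g, b in pixels if r > max(g, b))
    blue_votes = sum(1 for r, g, b in pixels if b > max(r, g))
    if red_votes > len(pixels) / 2:
        return "coke"
    if blue_votes > len(pixels) / 2:
        return "pepsi"
    return "unknown"

# A patch that is mostly red pixels with a little background:
print(classify_can([(200, 30, 40)] * 8 + [(40, 30, 200)] * 2))  # prints "coke"
```

The majority vote is deliberately crude; it only shows why a per-pixel color test is a plausible first cut before reaching for full object recognition.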

The big issue for me is trying to make sure I've got the right boards, power supply, grasper arm, motors, platform, etc. from the get-go. Expect to be flooded with programming bugs and queries after I've got the right hardware chosen. :smiley:

The big issue for me is trying to make sure I've got the right boards, power supply, grasper arm, motors, platform, etc. from the get-go. Expect to be flooded with programming bugs and queries after I've got the right hardware chosen.

A robotics reality check indicates you should really solve the most difficult part of your project first before buying a lot of supporting hardware (unless you have excess $$$). If you can't solve the object recognition issues, all the equipment listed above will be useless (but good for a donation to a school or similar unless you find another project).

zoomkat:

How on earth I'm going to get it to recognize and grab a coke is beyond me.

Well, some type of image recognition would be a place to start. Google is your friend for studying facial recognition and other types of object recognition. You might look up RoboRealm for some software to start with; there is also some open-source software. Most of this is PC-based and well beyond the Arduino world.

The RoboRealm stuff was great - and they have a module for interfacing with the Arduino Mega. I think the software package there is definitely robust enough to get the job done. Very helpful - and exactly the kind of info I was hoping to find when I started the thread.

Thanks!

zoomkat:

The big issue for me is trying to make sure I've got the right boards, power supply, grasper arm, motors, platform, etc. from the get-go. Expect to be flooded with programming bugs and queries after I've got the right hardware chosen.

A robotics reality check indicates you should really solve the most difficult part of your project first before buying a lot of supporting hardware (unless you have excess $$$). If you can't solve the object recognition issues, all the equipment listed above will be useless (but good for a donation to a school or similar unless you find another project).

Are you suggesting to me that I have the code written before I have the hardware to test it on? Seems a bit backwards, no?

Are you suggesting to me that I have the code written before I have the hardware to test it on? Seems a bit backwards, no?

No, you could probably get a $10 USB cam and the RoboRealm software to start testing. Develop your recognition program to the point where it can tell the difference between a coke and a Bud Light when they are held in front of the cam in differing conditions and positions. Simple, right?
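The "test the recognizer on the bench first" advice can even be started without a camera: simulate a lighting change by scaling pixel values and check that the decision rule survives it. This is a hypothetical Python sketch - the pixel values and the 1.3 dominance ratio are made-up numbers, not calibrated against real cans - but it shows why a channel-*ratio* test holds up under dimming where a fixed brightness threshold would not.

```python
# Simulate "differing conditions": dimmer light scales every channel,
# so a scale-invariant ratio test keeps giving the same answer.

def looks_red(pixel, ratio=1.3):
    """True if red sufficiently dominates green and blue (scale-invariant)."""
    r, g, b = pixel
    return r > ratio * max(g, b, 1)

def dim(pixel, factor):
    """Crude stand-in for worse lighting: scale every channel down."""
    return tuple(int(c * factor) for c in pixel)

coke_pixel = (200, 40, 50)
for factor in (1.0, 0.6, 0.3):  # bright, normal, dim fridge light
    print(factor, looks_red(dim(coke_pixel, factor)))  # True every time
```

Once the rule survives this kind of offline abuse, pointing a cheap webcam at real cans is the natural next step.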

zoomkat:

Are you suggesting to me that I have the code written before I have the hardware to test it on? Seems a bit backwards, no?

No, you could probably get a $10 USB cam and the RoboRealm software to start testing. Develop your recognition program to the point where it can tell the difference between a coke and a Bud Light when they are held in front of the cam in differing conditions and positions. Simple, right?

Good point. I see what you mean now. Not a bad idea.

Thanks.

phear_sc:

zoomkat:

Are you suggesting to me that I have the code written before I have the hardware to test it on? Seems a bit backwards, no?

No, you could probably get a $10 USB cam and the RoboRealm software to start testing. Develop your recognition program to the point where it can tell the difference between a coke and a Bud Light when they are held in front of the cam in differing conditions and positions. Simple, right?

Good point. I see what you mean now. Not a bad idea.

Thanks.

If you really want to do this -right-, take this class (advanced version - not the basic - if you -really- want to understand things):

http://www.ml-class.org/

An example of what someone was able to do with this class:

http://blog.davidsingleton.org/nnrccar

I just completed this class (the inaugural version); a new one is starting up soon:

http://jan2012.ml-class.org/

You still have time to enroll. It takes about two months (weekly sessions) to complete, and if you are serious about robotics and machine learning, it is well worth the time, believe me. It is an excellent program.

cr0sh:

phear_sc:

zoomkat:

Are you suggesting to me that I have the code written before I have the hardware to test it on? Seems a bit backwards, no?

No, you could probably get a $10 USB cam and the RoboRealm software to start testing. Develop your recognition program to the point where it can tell the difference between a coke and a Bud Light when they are held in front of the cam in differing conditions and positions. Simple, right?

Good point. I see what you mean now. Not a bad idea.

Thanks.

If you really want to do this -right-, take this class (advanced version - not the basic - if you -really- want to understand things):

http://www.ml-class.org/

An example of what someone was able to do with this class:

How I built a self-driving (RC) car and you can too.

I just completed this class (the inaugural version); a new one is starting up soon:

http://jan2012.ml-class.org/

You still have time to enroll. It takes about two months (weekly sessions) to complete, and if you are serious about robotics and machine learning, it is well worth the time, believe me. It is an excellent program.

Enrolled.

Although, as an MIT alum (Finance - so not as cool as it sounds), it pains me to enroll in a Stanford program. Oh well, at least it's not Caltech.

phear_sc:
Enrolled.

Good luck with the course!

If you are taking the advanced track, some words of warning and advice:

  • Allot enough time each day to watch the videos; don't try to cram them all in one sitting
  • Note the total running time of the videos and divide them over the week; I liked to split them so as to leave Saturday for the review questions and Sunday for the Octave coding
  • You might want to get Octave set up on your machine -now- and perhaps play a little with it...
  • How is your linear algebra? If it's been a while, brush up on it - and on vectors, matrices, etc. (vectors and matrices are primitives in Octave)...
  • How are your programming skills? If you aren't a person who lives and breathes code, you may need to do the homework over the course of the week, instead of leaving it for last
  • Remember to take plenty of notes (lecture notes are provided, but I found taking the notes myself was more helpful for me)
  • Some weeks may be "double-unit" weeks - pay attention to how things go; in the inaugural class, you sometimes didn't find out it was such a week until a couple of days in! On one, I didn't find out until Friday or Saturday (talk about a cram session). So pay attention to the syllabus and the web page.
  • Make use of the ml class sub-reddit, and other discussion resources
  • There may be meetups in your area, if you need such a resource
  • Stats and probability aren't as heavily needed, but it couldn't hurt to brush up on them lightly if it has been some time
  • Above all, if you get stuck or have questions, ask for help in the discussion areas; there are tons of people taking the class, and tons helping out, and many are very helpful. Just be aware of the Stanford honor code when posting, etc...
  • Finally - just have fun; this isn't for a grade, but try to do your best, and think about how you can apply what you are learning to problems you currently have, or may have in the future (with regard to robotics, or anything else).

I hope that helps; I had a fun time with the class and did fairly well overall. If you've ever used Matlab before, Octave is very, very similar (from what I have been led to believe). I have a ton of plans for applying what I learned to my robotics experimentation (I just gotta get to a rolling platform first - you wouldn't think that actuating the steering of a Powerwheels vehicle on a budget would be difficult, but it hasn't been a cakewalk for me so far).

:smiley:
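For a taste of the kind of exercise the class assigns (done there in Octave; sketched here in Python purely for illustration), here is batch gradient descent on one-variable linear regression. The data, learning rate, and iteration count are all made up for the demo.

```python
# Batch gradient descent fitting the hypothesis h(x) = theta0 + theta1 * x.
# With data lying exactly on y = 2x, the parameters should converge
# toward theta0 ~ 0 and theta1 ~ 2.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]        # ground truth: y = 2x
theta0, theta1, alpha = 0.0, 0.0, 0.05

for _ in range(2000):
    # prediction errors for the current parameters
    errs = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
    # gradients of the mean squared error cost
    grad0 = sum(errs) / len(xs)
    grad1 = sum(e * x for e, x in zip(errs, xs)) / len(xs)
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(round(theta0, 3), round(theta1, 3))  # ends up near 0 and 2
```

The Octave versions in the homework look almost identical, except the loops over data points become single vectorized matrix expressions.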

Or you could simply mount a color sensor and somehow program it to make your manipulator arm grab anything that is bright red when it is in the general vicinity of your refrigerator.

Whippet:
Or you could simply mount a color sensor and somehow program it to make your manipulator arm grab anything that is bright red when it is in the general vicinity of your refrigerator.

...and how does this supposed robot manipulator arm even -know- where "bright red" things are? A color sensor isn't going to give anywhere -near- enough information to position the arm.

Try putting on a blindfold and some oven mitts, and stand in front of the refrigerator. Have someone else give you yes/no answers when your hand comes near something "bright red" in the fridge. Try not to make a mess.

You can't just set up a color sensor and "somehow program" things to get the job done; robotics doesn't work this way. Maybe if there were a designated coke can dispenser always in the same spot in the fridge, and the position of the fridge never changed, and there were some other "guidance" sensors and such to guide the arm/gripper/manipulator into position (and similar stuff on the floor, etc to guide the robot's chassis) - you -might- be able to get away with it (something like early pick-and-place industrial robot systems - where if you can control for every possible variable, you can drop the error rate down low, but you'll never get to zero).

Ultimately, you need a little bit of such a system, coupled with vision and machine-learning software, to make it effective. It really isn't as easy as you think (otherwise, we would already have the home helper robots we've been promised for the past 50 years).
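The blindfold thought experiment can even be put in code: with only a yes/no "near red" signal, the best a controller can do is sweep the shelf and count probes, where a camera frame would give the position in one shot. A toy Python simulation (shelf units, tolerance, and step size are all invented for illustration):

```python
# Simulating an arm that only gets binary "near red" feedback,
# as in the blindfold-and-oven-mitts experiment above.

def near_red(arm_pos, can_pos, tolerance=1.0):
    """The only feedback a bare color sensor gives: close enough, or not."""
    return abs(arm_pos - can_pos) <= tolerance

def sweep_for_can(can_pos, shelf_width=50, step=0.5):
    """Sweep left to right until the yes/no signal fires; count probes."""
    probes = 0
    pos = 0.0
    while pos <= shelf_width:
        probes += 1
        if near_red(pos, can_pos):
            return pos, probes
        pos += step
    return None, probes  # swept the whole shelf, never saw "red"

pos, probes = sweep_for_can(can_pos=37.2)
print(pos, probes)  # finds the can at 36.5 after 74 probes
```

Dozens of probes for a single can on a known, one-dimensional shelf - and a real fridge is cluttered, three-dimensional, and full of other red things, which is exactly the point about needing vision on top.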

cr0sh:

Whippet:
Or you could simply mount a color sensor and somehow program it to make your manipulator arm grab anything that is bright red when it is in the general vicinity of your refrigerator.

...and how does this supposed robot manipulator arm even -know- where "bright red" things are? A color sensor isn't going to give anywhere -near- enough information to position the arm.

Try putting on a blindfold and some oven mitts, and stand in front of the refrigerator. Have someone else give you yes/no answers when your hand comes near something "bright red" in the fridge. Try not to make a mess.

You can't just set up a color sensor and "somehow program" things to get the job done; robotics doesn't work this way. Maybe if there were a designated coke can dispenser always in the same spot in the fridge, and the position of the fridge never changed, and there were some other "guidance" sensors and such to guide the arm/gripper/manipulator into position (and similar stuff on the floor, etc to guide the robot's chassis) - you -might- be able to get away with it (something like early pick-and-place industrial robot systems - where if you can control for every possible variable, you can drop the error rate down low, but you'll never get to zero).

Ultimately, you need a little bit of such a system, coupled with vision and machine-learning software, to make it effective. It really isn't as easy as you think (otherwise, we would already have the home helper robots we've been promised for the past 50 years).

As I said - I expect it to take 2 years.