Attempting to Identify Targets using Binocular Vision w/ Ultrasonics

Hey there,

I am designing a robot that resembles a vacuum/Roomba. It is essentially a car with a roller on the front designed specifically to intake 4-inch-diameter balls that will be released into a square, walled-off area. The physical design of the hardware and roller is complete, but I am now working on automating the robot so that it doesn't need me to control it via RC and can instead "see" a ball in front of it, move towards it, and intake it.

I am accomplishing this with my own "binocular vision" system: two ultrasonic sensors angled toward each other (I have already researched how to prevent interference between the two sensors, so that won't be a problem). The two sensors are from VEX, and the link is here. The positioning I am using is based on a little simulation I coded (screenshot attached).

My original logic was that the two sensors have an overlapping field of view and can each report how far away an object is. Based on the different distances the two sensors read, I can identify where the object is to a reasonably high degree of accuracy. There are four cases I need to be able to distinguish between: a stationary ball, a slowly rolling ball, a wall, or two or more balls in a random pattern.
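
To make the "relative distances" idea concrete, here is a minimal trilateration sketch. It assumes both sensors sit a known distance apart on the front of the robot and that each reading is a straight-line range to the same ball; the baseline value and coordinate frame below are made up for illustration, not taken from your actual build:

```python
import math

# Assumed geometry: both sensors on the front bumper, BASELINE metres
# apart, robot-forward is +y and robot-right is +x. Placeholder value.
BASELINE = 0.20

def locate_ball(range_left, range_right, baseline=BASELINE):
    """Estimate (x, y) of a single ball from two range readings.

    Treats each reading as the straight-line distance from that sensor
    to the ball, so the ball lies on the intersection of two circles
    centred on the sensors. Returns None if the readings can't both
    be true for one object.
    """
    x = (range_left ** 2 - range_right ** 2) / (2 * baseline)
    y_squared = range_left ** 2 - (x + baseline / 2) ** 2
    if y_squared < 0:
        return None
    return x, math.sqrt(y_squared)

def bearing_and_range(x, y):
    """Bearing in degrees (positive = right of centreline) and distance."""
    return math.degrees(math.atan2(x, y)), math.hypot(x, y)

position = locate_ball(0.62, 0.55)
if position:
    print("ball at", position, "->", bearing_and_range(*position))
```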

For the stationary ball, my logic is that the two sensors will determine where the ball is using the relative distances each sensor records. Then, depending on where exactly the ball is, the robot angles itself and moves forward to grab it. This is the most basic goal of the entire sensor system, and I am reasonably certain my setup can accomplish this.
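
Once the sensors give you a bearing and a range, the stationary-ball behaviour is really just "turn through the bearing, then drive the range." A rough sequencing sketch, where turn_degrees() and drive_mm() are hypothetical stand-ins for whatever drivetrain commands your platform actually exposes:

```python
# Hypothetical drivetrain helpers -- replace the bodies with your
# platform's real turn/drive commands; they only illustrate sequencing.
def turn_degrees(deg):
    print(f"turning {deg:+.1f} degrees")

def drive_mm(mm):
    print(f"driving {mm:.0f} mm")

def approach_stationary_ball(bearing_deg, range_m, roller_reach_m=0.05):
    """Point the roller at the ball, then drive until it is in reach."""
    turn_degrees(bearing_deg)
    drive_mm(max(range_m - roller_reach_m, 0.0) * 1000)

approach_stationary_ball(bearing_deg=20.8, range_m=0.58)
```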

For a slowly rolling ball, my plan (which I am ~90% sure of) is to take multiple readings per second, faster than the ball is moving, and then treat the rolling ball as "stationary" as the robot moves towards it and intakes it. I do not think this should be too hard either.
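
One quick sanity check on "faster than the ball will be moving": with a guess at the ball's top rolling speed you can work out how far it drifts between readings, and therefore how fast you need to sample for the treat-it-as-stationary assumption to hold. The speed and tolerance numbers here are placeholders, not measurements:

```python
MAX_BALL_SPEED = 0.5    # m/s, guessed top rolling speed
SAMPLE_RATE = 20        # position updates per second
DRIFT_TOLERANCE = 0.05  # m the ball may move between updates

drift_per_sample = MAX_BALL_SPEED / SAMPLE_RATE
print(f"ball drifts about {drift_per_sample * 100:.1f} cm between readings")

# Minimum sample rate needed to keep that drift inside the tolerance:
min_rate = MAX_BALL_SPEED / DRIFT_TOLERANCE
print(f"need at least {min_rate:.0f} readings/second for a "
      f"{DRIFT_TOLERANCE * 100:.0f} cm tolerance")
```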

The first complicated step, and the one I am looking for alternative ideas on, is wall identification. My identification method is based on the idea that the two sensors will report noticeably different values for their closest points. For a ball, the two values should fall within a certain range determined by the dimensions of the "sweet spot"; for a wall, the two values should fall well outside that range. That would tell me a wall is present, and I think I might even be able to calculate the angle of the wall from the two separate distances.
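
Here is one way to turn that into numbers, with the big caveat that it assumes each sensor reports roughly the range along its own axis and that both point close to straight ahead, which a wide ultrasonic beam may not honour. The baseline and the "sweet spot" limits are placeholders you would tune from real readings:

```python
import math

SENSOR_BASELINE = 0.20      # m between the sensors (assumed)
BALL_MAX_MISMATCH = 0.08    # m; readings closer together than this look like one ball
SWEETSPOT_MAX_RANGE = 0.90  # m; beyond this the overlap no longer covers a ball

def classify(range_left, range_right):
    """Very rough ball-vs-wall guess from a single pair of readings."""
    mismatch = abs(range_left - range_right)
    nearest = min(range_left, range_right)
    if mismatch <= BALL_MAX_MISMATCH and nearest <= SWEETSPOT_MAX_RANGE:
        return "ball"
    return "wall"

def wall_angle_deg(range_left, range_right, baseline=SENSOR_BASELINE):
    """Wall tilt relative to the robot's front face.

    0 = square-on; positive means the left end of the wall is farther.
    Only meaningful if both readings really come from one flat wall.
    """
    return math.degrees(math.atan2(range_left - range_right, baseline))

print(classify(0.55, 0.58))                              # likely a ball
print(classify(0.70, 1.10), wall_angle_deg(0.70, 1.10))  # likely a tilted wall
```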

And finally, the step I am most unsure about is identifying multiple targets in one scan and focusing on a single ball out of the noise. If there are two separate balls, I want to be able to tell that the two "blips" showing up are individual balls, and NOT a wall. My plan is to have the robot rotate slightly, take new readings from that angle, and compare them to the previous ones. If it really was a wall, the readings should stay consistent after adjusting for the angle change. If, however, the readings change significantly, that would suggest the two blips were balls that need to be chased down, and the robot could then move toward the closer of the two.
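
The same forward-facing model also gives you a quantity that should not change when the robot pivots in place: the perpendicular distance to the wall. That makes your rotate-and-compare idea easy to code as an invariance test. Same assumptions as the sketch above (along-axis ranges, sensors pointing nearly straight ahead), and it ignores the small offset between the sensors and the robot's pivot point:

```python
import math

SENSOR_BASELINE = 0.20  # m between the sensors (assumed)
WALL_TOLERANCE = 0.06   # m; allowed drift in the wall-model distance

def wall_perpendicular_distance(range_left, range_right,
                                baseline=SENSOR_BASELINE):
    """Distance to the wall, *if* both readings come from one flat wall."""
    tilt = math.atan2(range_left - range_right, baseline)
    midpoint_range = (range_left + range_right) / 2
    return midpoint_range * math.cos(tilt)

def looks_like_wall(before, after, tolerance=WALL_TOLERANCE):
    """Compare (range_left, range_right) pairs taken before/after a small pivot.

    A real wall keeps the same perpendicular distance under a pure
    rotation; two separate balls usually make the wall model jump.
    """
    p_before = wall_perpendicular_distance(*before)
    p_after = wall_perpendicular_distance(*after)
    return abs(p_before - p_after) <= tolerance

print(looks_like_wall((0.86, 0.80), (0.93, 0.84)))  # consistent -> wall
print(looks_like_wall((0.60, 0.62), (0.45, 0.95)))  # jumped -> probably balls
```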

This is by far the most complicated programming endeavour I have undertaken, so I would really appreciate any help you can provide. What are your thoughts/tips on picking the balls out from their surroundings? Specifically, is there a better method I could use to identify a wall and multiple balls?

Any assistance you can provide would be greatly appreciated!

[Attached screenshot of the sensor simulation: usnetlogo.JPG]

You make a couple of assumptions here, which you can confirm yourself. I'd assume that anything beyond the first echo is discarded, unless you implement quite sophisticated ultrasonic hardware and a library to match.

If these are ultrasonic sensors, then each one gets back an echo and determines how far away that echo came from. But if there are two objects in the field, as mentioned above, I think the sensor will only report the first echo. There will be no way to analyze one echo and then a second one from a single sensor pulse.

I have played with several ultrasound sensors, and I found that when there were multiple objects in the field, I usually got poor answers, sometimes just nonsense. If you ginned up your own ultrasound system, you might make a go of this: send out a pulse and use a sharply tuned receive circuit to record more than one returning echo. That would give you more data about what is out there, but I think an algorithm for actually determining that there are two objects, and where they are, especially if they're moving, would be quite an accomplishment.

Alright, thank you both for your advice! Unfortunately, my sensors are in the robotics room at my school, so I cannot test anything quite yet. It definitely seems as if detecting multiple targets will not be possible with only these sensors, so I have decided to try to use the blind spot between the sensors to my advantage.

Since I can change the angles of the sensors and adjust their exact positions while building the robot, I think it might be possible to have the sensors' blind spot lie directly in front of the robot. My main reason for worrying about the two-ball case was that if my intake mechanism knocks a ball forward a bit before pulling it in, that stray ball might interfere with the sensor readings. If I place the blind spot right in front of the robot, the sensors won't pick up the stray ball and can focus on the ball ahead of it.
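
If it helps to sanity-check the placement before you are back in the robotics room, the near blind spot straight ahead can be estimated from the sensor spacing, the toe-in angle, and the beam half-width. All the numbers below are guesses to be replaced with your real mounting geometry and the beam angle from the sensor's datasheet:

```python
import math

SENSOR_SPACING = 0.25       # m between the two sensors (placeholder)
TOE_IN_DEG = 20.0           # each sensor angled this far toward the centreline
BEAM_HALF_ANGLE_DEG = 15.0  # half-width of each sensor's cone (placeholder)

def centreline_coverage(spacing, toe_in_deg, half_beam_deg):
    """Near and far limits (m) of where the beams cover the centreline.

    A point straight ahead at distance y is inside a sensor's beam when
    |atan(spacing / (2 * y)) - toe_in| <= half_beam, so the blind spot
    directly in front of the robot ends at the `near` value returned here.
    """
    near = (spacing / 2) / math.tan(math.radians(toe_in_deg + half_beam_deg))
    if toe_in_deg > half_beam_deg:
        far = (spacing / 2) / math.tan(math.radians(toe_in_deg - half_beam_deg))
    else:
        far = math.inf  # beams still cover the centreline at long range
    return near, far

near, far = centreline_coverage(SENSOR_SPACING, TOE_IN_DEG, BEAM_HALF_ANGLE_DEG)
print(f"blind spot ends about {near:.2f} m ahead; "
      f"centreline covered out to about {far:.2f} m")
```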

I would appreciate it if you could tell me whether you think this technique might work. Also, do you think a third sensor could get me better readings in some way? Thanks again for your input!