I’m interested in determining the orientation of a pellet, completely autonomously, using an Arduino Uno (or another microcontroller?) and a camera shield. One approach might be to:
take a picture of the pellet,
write the image to an SD card,
convert the SD card image to an array,
examine the array location (equivalent of where the red dot would be), and,
using the value found at that location, determine the orientation of the pellet by answering the question: “Is the pixel in that location ‘light’ or ‘dark’ colored?”
If ‘light’ colored, it’s pointing “up”; if not, it’s pointing “down”.
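The decision in the last two steps is just a brightness threshold on one pixel. A minimal sketch in plain C++ (the 8-bit grayscale convention and the threshold of 128 are assumptions you would tune for your lighting):

```cpp
#include <cstdint>

// Classify the pellet from a single probe pixel.
// Assumes an 8-bit grayscale value: 0 = black, 255 = white.
// The threshold of 128 is a placeholder to calibrate empirically.
bool isPointingUp(uint8_t pixelValue, uint8_t threshold = 128) {
    // A light pixel at the probe location means the dome faces up.
    return pixelValue > threshold;
}
```

In practice you would probably average a few neighboring pixels rather than trust a single one, to be robust against sensor noise.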
Because the process may be repeated hundreds of times, looking at hundreds of pellets, writing to an SD card would probably take too long, so avoiding that image transfer is desirable. Given the Arduino’s limited memory, I don’t think loading the entire image into an array is workable either.
Is there a way I can look at a fixed set of coordinates, (e.g., the red dot in the picture) without having to load the entire image array from an SD card?
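If the camera writes an uncompressed raw grayscale dump, the byte offset of any pixel is computable, so a file can be opened and seeked straight to that one byte instead of reading the whole frame. A hedged sketch in plain C++ (the raw-grayscale format, image width, and header size are assumptions; a JPEG would not allow this, because a pixel’s position in a compressed file is not fixed):

```cpp
#include <cstdio>
#include <cstdint>

// Byte offset of pixel (x, y) in a raw 8-bit grayscale file.
// width and headerBytes depend on your camera's output format.
long pixelOffset(int x, int y, int width, long headerBytes) {
    return headerBytes + (long)y * width + x;
}

// Read just one pixel from the file, skipping the rest of the image.
// Returns the grayscale value 0-255, or -1 on failure.
int readPixelAt(const char *path, int x, int y, int width, long headerBytes) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    if (fseek(f, pixelOffset(x, y, width, headerBytes), SEEK_SET) != 0) {
        fclose(f);
        return -1;
    }
    int v = fgetc(f);  // one byte = one grayscale pixel
    fclose(f);
    return v;
}
```

The Arduino SD library exposes a similar `seek()` on its `File` object, so the same offset arithmetic applies there.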
My very limited understanding of the Pixy is that someone has to manually point with a mouse to an area in the image to identify an object. If it could automatically look in the corner and tell whether it’s air or lead (light or dark), that might work. I don’t think it can.
Echoing what others have said, image processing tends to take a lot more processing power than a basic Arduino has available. That said, if you can guarantee that the pellets will always fall in one of two precise orientations, then it will be less onerous. Nonetheless, I’d certainly want something that has enough RAM to hold a single image with ease.
You show a color image. Is this what you are proposing? Or is a black/white image within the limits of what you want to build? Have you ever done anything software-wise with images?
Individual pellets will drop, one at a time, from a single hopper, into a chamber that will orient them to either the up or down position. A picture of the pellet while in the chamber should then limit the image to be either one, or the other, of the images I submitted above.
Pellets are always loaded into a magazine with the dome facing forward and the skirt facing rearward. They are small and frequently dropped before being loaded properly into the mag. If accidentally loaded backward, they must be removed and correctly oriented. Trying to remove a mis-loaded pellet can be difficult without other pellets already in the mag falling out, forcing you to start the whole process over.
The sorted pellets will be transferred to a single cylindrical tube (about the size of a pencil?) and dispensed into the mag, without having to address which way they are pointing.
Then you are well prepared. You have not told us whether you are using color or black-and-white images.
A two-dimensional array will allow you to look for edges where one pixel is black and an adjacent one is white. When you discover two such transitions in a line, you have an edge. The index values for that line will give you an orientation. Experiment!
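The edge scan above can be sketched in a few lines of plain C++ (the 8-bit grayscale convention and the black/white threshold of 128 are assumptions to tune):

```cpp
#include <cstdint>

// Scan one row of an 8-bit grayscale image for a dark-to-light edge.
// Returns the column index where the transition starts, or -1 if none.
// The threshold separating "black" from "white" is a placeholder to tune.
int findEdgeInRow(const uint8_t *row, int width, uint8_t threshold = 128) {
    for (int x = 0; x + 1 < width; ++x) {
        bool darkHere  = row[x]     < threshold;
        bool lightNext = row[x + 1] >= threshold;
        if (darkHere && lightNext) return x;  // edge between x and x+1
    }
    return -1;
}
```

Running this over a few rows and comparing where the edges land (high in the frame vs. low) is one way to turn the index values into an orientation.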
Could you just put a couple (say 4) of light sensors on the front of the monitor screen and make a decision based on their output (light or dark), so no image processing is needed?
That should work in your red-dot scenario if the background colour is a good contrast.
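Even the sensor-only approach needs a tiny bit of logic if the sensors feed a microcontroller’s ADC. A sketch of the voting in plain C++ (the sensor count, 10-bit ADC range, and threshold of 512 are all assumptions):

```cpp
// Decide orientation from a handful of light-sensor readings, e.g. 10-bit
// ADC values in the range 0-1023 as analogRead() would return.
// The threshold of 512 is a placeholder to calibrate for your lighting.
bool pelletUpFromSensors(const int *readings, int count, int threshold = 512) {
    int lightVotes = 0;
    for (int i = 0; i < count; ++i) {
        if (readings[i] > threshold) ++lightVotes;
    }
    return lightVotes > count / 2;  // simple majority vote
}
```

A majority vote across several sensors tolerates one flaky reading, which a single-sensor comparison would not.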