Detect corner points of rectangle

Hi everyone,

I'm working on a student project where I have 24 light sensors spaced 6 cm apart above a small conveyor belt. Boxes of various widths, lengths, and angles will pass through the sensors.

I'm considering using a 24x50 array to represent the detected shapes. When a box's leading edge triggers any sensor, a "1" is inserted at the corresponding position in the array. With scans every 20 ms, a 1-second interval would provide a snapshot of the object's shape.

Initially, I thought about using nested loops to identify the first, second, third, and fourth corners of the box. Then, I'd calculate the length and width based on the coordinates of the end points and use arctangent to determine the angle based on the line's slope.
length = sqrt((x2 - x1)^2 + (y2 - y1)^2)
angle = arctan((y2 - y1) / (x2 - x1))
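In code it would be something like this (just a sketch; it assumes the two corner coordinates (x1, y1) and (x2, y2) have already been found, with x counted in sensor positions and y in scan numbers):

#include <math.h>

// Distance between two corner points, still in grid units: x in sensor
// pitches of 6 cm, y in 20 ms scan steps, so real units need scaling.
float edgeLength(float x1, float y1, float x2, float y2) {
  float dx = x2 - x1;
  float dy = y2 - y1;
  return sqrt(dx * dx + dy * dy);
}

// Angle of the edge; atan2 instead of atan(dy/dx) so a vertical edge
// (dx == 0) doesn't divide by zero. Result in radians.
float edgeAngle(float x1, float y1, float x2, float y2) {
  return atan2(y2 - y1, x2 - x1);
}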
However, I realized this approach might not be optimal, as variations in package angle could lead to inaccurate corner detection and unreliable results.

Instead, I'm looking for better solutions. Would anyone be able to guide me towards more robust methods for shape identification and analysis?

Please do not post in "Uncategorized"; see the sticky topics in Uncategorized - Arduino Forum.

Topic moved.

I think you'd better show us at least a couple of example snapshots (a text representation is enough, e.g. 50 text rows each containing 24 0/1 digits), together with how you represent the 24x50 values inside the code, so we know how to handle it. E.g. a 24x50 char array? Or an array of 50 unsigned longs (bit-mapped with mask 0x00FFFFFF, i.e. using the first 24 bits)?
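For example, the bit-mapped variant I mean would look roughly like this (just a sketch, names invented):

// One unsigned long per scan row; sensors 0..23 live in the low 24 bits
// (mask 0x00FFFFFF), so 50 rows take only 200 bytes of RAM.
const int NUM_ROWS = 50;
unsigned long scanRows[NUM_ROWS];

void setSensorBit(int row, int sensor, bool covered) {   // sensor: 0..23
  if (covered) scanRows[row] |=  (1UL << sensor);
  else         scanRows[row] &= ~(1UL << sensor);
}

bool getSensorBit(int row, int sensor) {
  return (scanRows[row] >> sensor) & 1UL;
}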

And all the boxes are rectangular (including squares), right? Because if you detect leading edges only, I think there's no way to tell whether it's a rectangle or, say, a triangle.

PS: looks like a 2D Hough Transform... Or AI shape recognition... :wink:

Sounds like a job for HuskyLens
See the video from this page:-

True, except for the fact he already has the sensor array (and I don't know if he wants/needs/can change it).

A more robust approach would be to collect the data as a 2D image array (with time points being one axis) and analyze the image for corners, or for general shape. That way, all the data contribute to the analysis.
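Just as a sketch of the data collection side (pin numbers and the covered/uncovered logic are placeholders):

// Collect one row of 24 readings every 20 ms into a 2D "image" array,
// with time (scan number) as one axis. Pin numbers below are placeholders.
const int NUM_SENSORS = 24;
const int NUM_ROWS    = 50;
const int sensorPins[NUM_SENSORS] = { /* your 24 input pins here */ };
byte image[NUM_ROWS][NUM_SENSORS];
int  row = 0;

void setup() {
  for (int i = 0; i < NUM_SENSORS; i++) pinMode(sensorPins[i], INPUT);
}

void loop() {
  if (row < NUM_ROWS) {
    for (int i = 0; i < NUM_SENSORS; i++) {
      // whether LOW or HIGH means "beam blocked" depends on the sensor wiring
      image[row][i] = (digitalRead(sensorPins[i]) == LOW) ? 1 : 0;
    }
    row++;
    delay(20);   // 20 ms per scan; a hardware timer would drift less
  } else {
    // image[][] is full: analyse it for corners / overall shape here
  }
}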

Does it have to be an Arduino with light sensors?

A camera connected to a Pi running image recognition software sounds like an easier solution to this problem.

As far as I can understand, he already did it. The scanner seems to be a linear one with 24 sensors in a row, and the program takes 50 readings (20 ms per reading, for 1 m of underlying belt movement), with a "1" when a change has been detected, in effect creating a kind of edge detection.
But unless the OP shows us how he's actually getting the data, together with a practical result and his current storage method, I think we can make a lot of conjectures and hypotheses and suggest different hardware, but without solving his problem...

I'm getting this kind of representation:
   1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
22 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
25 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
26 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
27 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
28 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
29 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
31 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
32 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
33 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
34 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
35 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
36 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
37 1 1 1 1 1 1 1 1 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
38 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
39 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
40 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
41 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
42 4 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
43 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
44 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
45 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
46 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
47 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
48 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
49 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
50 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Probably the simplest approach, once you have the image array, is to identify the leftmost, rightmost, topmost and bottommost '1', perform edge detection, and then traverse the edge pixels between those four points. This should more or less work for an angled rectangle as-is, but for one lined up with the array you'd also need to look for the NW-, NE-, SW- and SE-most pixels and see which works best.
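A sketch of that first step, assuming the snapshot lives in a 50x24 array called image with 1 = covered:

#include <stdint.h>

const int ROWS = 50, COLS = 24;

struct Point { int row, col; };

// Find the topmost, bottommost, leftmost and rightmost '1' pixels.
// For a rectangle at an angle these are (close to) the four corners.
void findExtremes(const uint8_t image[ROWS][COLS],
                  Point &top, Point &bottom, Point &left, Point &right) {
  top = { ROWS, 0 };  bottom = { -1, 0 };
  left = { 0, COLS }; right  = { 0, -1 };
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      if (!image[r][c]) continue;
      if (r < top.row)    top    = { r, c };
      if (r > bottom.row) bottom = { r, c };
      if (c < left.col)   left   = { r, c };
      if (c > right.col)  right  = { r, c };
    }
  }
}
// Then walk the edge pixels between these four points and check how straight
// the runs are; for an axis-aligned box also try the NW/NE/SW/SE extremes
// (e.g. minimise/maximise r + c and r - c) and keep whichever fits better.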

If you have more processing power to throw at the problem you could try cross-correlation with suitable kernels, such as a rectangular corner, at all angles and positions.
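Very roughly, something like this, with one 3x3 corner kernel at a fixed orientation (real use would rotate/mirror it and sweep several angles):

#include <stdint.h>

const int ROWS = 50, COLS = 24, K = 3;

// Template for one corner orientation: object below-right, background above-left.
// +1 where a covered pixel is expected, -1 where background is expected.
const int8_t kernel[K][K] = {
  { -1, -1, -1 },
  { -1, +1, +1 },
  { -1, +1, +1 },
};

// Slide the kernel over the binary image and return the best-matching
// window position; the other three corners need their own kernels.
void bestCornerMatch(const uint8_t image[ROWS][COLS], int &bestR, int &bestC) {
  int bestScore = -K * K - 1;
  bestR = bestC = 0;
  for (int r = 0; r + K <= ROWS; r++) {
    for (int c = 0; c + K <= COLS; c++) {
      int score = 0;
      for (int kr = 0; kr < K; kr++)
        for (int kc = 0; kc < K; kc++)
          score += kernel[kr][kc] * (image[r + kr][c + kc] ? 1 : -1);
      if (score > bestScore) { bestScore = score; bestR = r; bestC = c; }
    }
  }
}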

This looks like a pentagon. If it is actually a parallelogram, was it completely within the belt width? (assuming that the LED array covers the full belt width)

If the object is known to be a square or rectangle, then part of it lies outside the range of the scanner. You could estimate the position of the fourth corner by calculating the intersection of the two adjacent edges, then calculate the area, for example.
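For example, something along these lines (a sketch only; each edge is described by two points picked from its visible part, in the same column/row units as the scan):

#include <math.h>

struct Pt { float x, y; };

// Estimate the hidden corner as the intersection of the two edges that run
// off the scanner. p1,p2 lie on one edge, p3,p4 on the other.
bool estimateCorner(Pt p1, Pt p2, Pt p3, Pt p4, Pt &corner) {
  float d1x = p2.x - p1.x, d1y = p2.y - p1.y;
  float d2x = p4.x - p3.x, d2y = p4.y - p3.y;
  float denom = d1x * d2y - d1y * d2x;      // zero means the edges are parallel
  if (fabs(denom) < 1e-6) return false;
  float t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / denom;
  corner.x = p1.x + t * d1x;
  corner.y = p1.y + t * d1y;
  return true;
}

// With all four corners known, the area is the product of the two side
// lengths, after converting columns to cm (6 cm sensor pitch) and rows to cm
// (belt speed times the 20 ms scan interval).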

That is the point of taking all the measurements into account.
