Re: Anyone do computer vision?

If you have access to Matlab, it is quite easy to process an image, identify geometric objects like circular features, find the feature centers and evaluate the area for color, etc.

A well-aligned photo taken from directly overhead would reduce distortion; that is not required, but it would help.

Below is a random example I wrote some years ago to evaluate the emission color ratio of fluorescent bacterial colonies on a plate, starting from two images of the same plate taken through blue and green filters.

Matlab's documentation includes countless examples, so it is pretty easy to get up to speed with arcane function names.

ImageJ from the NIH does a lot of this same stuff, but with a completely different user interface, and is free/open source.

clear all; close all;
% read in blue and subtract background
disp('Select blue image');
[blueframe, pathname]=uigetfile('*.jpg','Select an image file');
s=sprintf('File selected: %s',blueframe); disp(s);
I0b = imread(fullfile(pathname, blueframe)); % full path, in case the file is not in the current directory
I0b = I0b(:,:,1);

% calculate background by wiping out disks smaller than 35 pixel radius

bg = imopen(I0b,strel('disk',35));
i0b=I0b-bg;

% scale to maximize contrast

i0b = imadjust(i0b); % assign the result, or the adjustment is discarded
imshow(i0b); title(blueframe);

%read in green and subtract background

disp('Select green image');
[greenframe, pathname]=uigetfile('*.jpg','Select an image file');
s=sprintf('File selected: %s',greenframe); disp(s);
I0g = imread(fullfile(pathname, greenframe));
I0g = I0g(:,:,1);
bg = imopen(I0g,strel('disk',35));
i0g=I0g-bg;
i0g = imadjust(i0g);

%sum the two images and find colonies

sumI=imadjust(i0b+i0g);
level=graythresh(sumI);
bw=im2bw(sumI,level);

% find blobs with > 500 pixels
bw=bwareaopen(bw,500);

%find connected regions and label them in array "labels"

[labels, NumObjects]=bwlabel(bw);

% sum up intensities in each of the labeled blobs, on each image

blue = zeros(1,NumObjects); green = zeros(1,NumObjects); % preallocate
for j=1:NumObjects
    indices=find(labels==j);
    blue(j)=sum(i0b(indices));
    green(j) = sum(i0g(indices));
end

% calculate ratio of green to blue for each blob

ratio = green ./ blue;

% generate zero image of the same size

img=zeros(size(bw));

% fill in blob areas with appropriate ratio values

for j=1:NumObjects
    indices = find(labels==j);
    img(indices)=ratio(j);
end
colormap(jet);
image(img,'CDataMapping','scaled'); title(blueframe);

Ten minutes of editing produced this using Matlab. For efficient blob identification, a photo from directly overhead would definitely be more useful.

Good quality and consistent lighting are the key things to consider.

I have used OpenCV on a Raspberry Pi, detecting the colours printed on the ends of pegs to make a physical sound sequencer. But it wasn't as easy or as consistent as I thought it would be.

However, I would recommend you have a look at the Husky Lens. You can train it to recognise images in a much simpler way than the other so-called smart cameras, and it is easy to interface, presenting an I2C, SPI or serial interface to your Arduino.

This was the physical percussion sequencer I talked about in my first reply. I used percussion sounds because they are easy to recognise. The video starts by showing some still pictures of the construction of the variable height tripod I made to hold a normal USB web cam.

Then you see it in "action". I show my hand in front of the peg board, first to disrupt the sequence and so change the sounds, but also to show the frequency of the image update. It was a balance between updating the image and actually generating the sounds.

It shows the scanning lines across the image as vertical green lines. I wrote the music for the background sound. I call it "The Debate", and I wrote it as an exercise in seeing how I could take Garage Band sounds and forge them into something. It matches my interest in minimalist abstract sounds.

I was planning to do a whole lot more image recognition projects, like draughts and chess, but lost interest due to the difficulty of using OpenCV with the limited processing power of the Raspberry Pi at the time (2016).

https://vimeo.com/manage/videos/137979485

P.S. I don't think I have viewed that video for ages.


You need to be able, as a human, to see all the pipettes in that picture and tell without doubt whether there is liquid left. (Did I get it right that you want to know if the pipettes have liquid, or just whether the holes have a pipette in them?)

If you can do that then it’s very likely openCV + Python will do it too.

Could you share such a picture and document what you see?

if the goal is to identify the holes with a pipette inside, then 20 min of work gets me this (python + openCV)

if the picture were taken straight from atop there would be a better result, so I did not invest much more time, as the distortion does not help.

happy to share the ~60 lines of code - including spitting out the red circle coordinates - but it would be easier to work from the top view, which would probably simplify the code.

Pipette detected at: (1234, 1297)
Pipette detected at: (1127, 1295)
Pipette detected at: (490, 847)
Pipette detected at: (1231, 1423)
Pipette detected at: (1129, 1045)
Pipette detected at: (629, 1597)
Pipette detected at: (638, 1147)
Pipette detected at: (1018, 1165)
Pipette detected at: (484, 1600)
Pipette detected at: (1331, 1175)
Pipette detected at: (628, 1738)
Pipette detected at: (644, 865)
Pipette detected at: (773, 871)
Pipette detected at: (1130, 914)
Pipette detected at: (772, 1442)
Pipette detected at: (902, 1027)
Pipette detected at: (1115, 1669)
Pipette detected at: (496, 1292)
Pipette detected at: (907, 893)
Pipette detected at: (1333, 1297)
Pipette detected at: (484, 1750)
Pipette detected at: (1234, 1678)
Pipette detected at: (488, 1912)
Pipette detected at: (1238, 791)
Pipette detected at: (1003, 1676)
Pipette detected at: (1126, 1421)
Pipette detected at: (899, 1303)
Pipette detected at: (776, 1159)
Pipette detected at: (880, 1172)
Pipette detected at: (776, 1292)
Pipette detected at: (770, 1583)
Pipette detected at: (1133, 782)
Pipette detected at: (875, 1579)
Pipette detected at: (1037, 1433)
Pipette detected at: (1237, 1048)
Pipette detected at: (773, 1012)
Pipette detected at: (1022, 1036)
Pipette detected at: (494, 1456)
Pipette detected at: (643, 1306)
Pipette detected at: (1132, 1168)
Pipette detected at: (1010, 790)
Pipette detected at: (781, 739)
Pipette detected at: (614, 1430)
Pipette detected at: (748, 1732)
Pipette detected at: (1234, 1175)
Pipette detected at: (1129, 1552)
Pipette detected at: (1337, 1058)
Pipette detected at: (914, 1714)
Pipette detected at: (1237, 1549)
Pipette detected at: (1240, 922)
Pipette detected at: (881, 1420)
Pipette detected at: (1006, 919)
Pipette detected at: (1037, 1297)
Pipette detected at: (1300, 1895)
Pipette detected at: (1340, 812)
Pipette detected at: (500, 1156)
Pipette detected at: (1016, 1565)
Pipette detected at: (499, 1003)
Pipette detected at: (1339, 935)
Pipette detected at: (610, 1001)
Pipette detected at: (893, 778)

the algorithm goes like this:

Load the image.
Convert the image to grayscale.
Improve contrast with histogram equalization.
Apply Gaussian blur to reduce noise.
Detect circles with a radius in a given range.
Filter out overlapping circles, keeping the smaller ones.
Calculate the average color in a region of interest (inset) for each circle.
Color circles green if the inset is blackish and red otherwise.
Display the original and result images side by side.

I agree that Python and OpenCV are a better and infinitely cheaper way to go than Matlab.

In @J-M-L's example, converting the "pipette detected" coordinates to tip holder cell coordinates simply requires finding the tip holder corners and working out a transformation.

Which is not quite so easy with an oblique view.

I’m on the go with very limited bandwidth (sailing) - will share tonight when I come back to my Mac.

Really the library does all the hard work - then it’s a bit of fine tuning

I’m pretty sure that if you can take a picture from atop, the circle detector will work even better; otherwise I'd need to modify the code to go for ellipse detection - that might work too.

I think it’s actually easy if the camera does not move across shots and the container is always in the same location - you could almost hardcode the geometry. But I think ordering the circles’ centers into a grid will let you map to how those tools probably refer to slots (is that a sequential number or an (x, y) position?)
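The grid ordering can be sketched with two sorts: first by y to split into rows, then by x within each row. This assumes a full rack and a roughly level view, so the rows do not interleave in y:

```python
def centers_to_grid(centers, cols=12):
    """Sort (x, y) circle centers into rows top-to-bottom, then
    left-to-right within each row. Assumes a full rack and a roughly
    level view, so that rows do not interleave in y."""
    by_row = sorted(centers, key=lambda p: p[1])            # row order
    return [sorted(by_row[i:i + cols], key=lambda p: p[0])  # column order
            for i in range(0, len(by_row), cols)]

# Small demo: 2 rows x 3 columns with a little jitter, given shuffled
grid = centers_to_grid([(30, 11), (10, 10), (20, 12),
                        (10, 30), (30, 29), (20, 31)], cols=3)
```

From there a sequential slot number is just row * 12 + column.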

I too would like to see how you went about it. I've done a couple of simple OpenCV+Python applications, but am still learning Python.

Hi - just getting home.

That was the code I ran on your picture:

import cv2
import numpy as np

image = cv2.imread('pipette.jpg')

if image is None:
    print("Error: Unable to load image. Check the file path.")
else:
    result_image = image.copy()

    # Grayscale, then histogram equalization and Gaussian blur to
    # improve contrast and reduce noise before circle detection
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)
    blurred = cv2.GaussianBlur(equalized, (9, 9), 0)

    # Hough transform for circles with radii between 50 and 55 pixels
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                               param1=50, param2=30, minRadius=50, maxRadius=55)
    non_overlapping_circles = []

    if circles is not None:
        circles = np.round(circles[0, :]).astype("int")
        mask = np.ones(circles.shape[0], dtype=bool)

        # Filter out overlapping circles, keeping the smaller one of each pair
        for i, (x1, y1, r1) in enumerate(circles):
            for j, (x2, y2, r2) in enumerate(circles):
                if i != j and mask[j]:
                    distance = np.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
                    if distance < (r1 + r2) / 2:
                        if r1 < r2:
                            mask[j] = False
                        else:
                            mask[i] = False
                            break

        non_overlapping_circles = circles[mask]

        for tip_counter, (x, y, r) in enumerate(non_overlapping_circles, start=1):
            # Average color inside a slightly smaller circle (the "inset")
            inner_radius = max(r - 10, 0)
            roi = image[max(0, y - r):min(image.shape[0], y + r), max(0, x - r):min(image.shape[1], x + r)]
            mask_circle = np.zeros(roi.shape[:2], dtype=np.uint8)
            cv2.circle(mask_circle, (r, r), inner_radius, 1, thickness=-1)
            average_color = cv2.mean(roi, mask=mask_circle)[:3]

            # Blackish inset = empty hole (green); anything else = pipette (red)
            if average_color[0] < 80 and average_color[1] < 80 and average_color[2] < 80:
                color = (0, 255, 0)
            else:
                color = (0, 0, 255)
                print(f"Pipette detected at: ({x}, {y})")

            cv2.circle(result_image, (x, y), r, color, 2)
            cv2.putText(result_image, str(tip_counter), (x - 10, y + 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    else:
        print("No circles were detected. Try adjusting the parameters.")

    # Show the original and annotated images side by side, scaled to fit
    side_by_side = cv2.hconcat([image, result_image])
    original_height, original_width = image.shape[:2]
    resize_height = 1600
    resize_width = int((original_width * 2) * (resize_height / original_height))
    side_by_side_resized = cv2.resize(side_by_side, (resize_width, resize_height))

    cv2.imshow('Original (Left) and Color-Coded Tips (Right)', side_by_side_resized)

    cv2.waitKey(0)
    cv2.destroyAllWindows()

this was typed quickly, so it can probably be improved.

the circles are not ordered, but as I said, the coordinates can help build them into an 8x12 grid

circle detection could probably be improved by playing with the image a bit more.

I had named the image pipette.jpg and placed it in the same directory as the python code. I launched the code with python3 analysis.py

note that a grid-based approach might lead to incorrect results, as the image is not taken from above and the pipettes stick out

I asked Python to draw a grid based on the coordinates of the centers of the 4 corners.

so probably circle or ellipse detection is the way to go

yes, if the camera does not move, the geometry is quite stable and you can have canned positions for the 96 centers (mind that the pipettes stick out), and you just check whether there is more white than black in a small circle around each center.

that would work (assuming the lighting is also stable, as you'll need a threshold)
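That check is only a few lines; a sketch, where the patch size and the threshold of 80 are invented values that would need calibrating to the real lighting:

```python
import numpy as np

def tip_present(gray, cx, cy, r=8, threshold=80):
    """Mean brightness in a small square patch around a pre-measured
    (canned) center: a whitish patch means a tip, a dark one means an
    empty hole. The threshold is lighting-dependent and must be tuned."""
    patch = gray[max(0, cy - r):cy + r, max(0, cx - r):cx + r]
    return patch.mean() > threshold

# Synthetic check: dark frame with one bright "tip" patch
frame = np.zeros((100, 100), np.uint8)
frame[40:60, 40:60] = 255
```

In practice you would run tip_present(gray, cx, cy) over all 96 canned centers.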

Another idea for image processing before performing the circle detection would be to use a manually processed picture of the holder without any pipettes, taken from exactly the same position, and use it as a mask to cut out unnecessary artifacts. That would give you a cleaner image to analyze, removing things like variability in the background lighting, and a more focused area.


OK - once you have them (ideally one with no pipettes and one with pipettes) - feel free to share.

interesting, yes if the image is stable pretty easy to check for spots..
worked on this a bit, fun..
that's a lot of spots, have to set them up..
but yes, if image is stable should work..

test app, can capture from any capture device in windows, ex webcam..
once capturing, click detect to grab a frame and check for spots..
use the load button to load a jpg, used that to load up your image and detect..

let me know if you're interested, i'll drop everything into a git..
could add a remote trigger, maybe udp or serial??

just quick and messy, sorry..

fun stuff.. ~q

The revised image is pretty complex, and circle finding did not work well in my hands using Matlab. So I tried cross-correlating an example of a filled cell, namely this:
tip0

with the excised image of the entire tip holder:

The cross correlation searches for examples of the small image in the larger one. A surface plot of the cross correlation function result looks very promising (note that the image indexing is flipped, top to bottom):

The color coding is misleading. A side view of that surface shows a remarkable signal to noise ratio.

Matlab code:

ref = rgb2gray(imread('tip0.jpg'));     % the template: one filled cell
target = rgb2gray(imread('tipsC.jpg')); % the whole tip holder
c = normxcorr2(ref, target);            % normalized 2-D cross correlation
surf(c)                                 % view the correlation surface
shading flat

Similar functions to the above are available in OpenCV.


Hmm, I just noticed that the pipette tips on the right edge of the holder are not completely surrounded by the holder, due to the oblique angle of incidence, so they aren't picked up by the cross correlation search.

It will take some fine tuning to make this approach more reliable.


rotated the image..
had to set the spots back up..
the tray seems shiny, lighting is a bit less..

evacuating in the morning, Milton..

hope to see you all again soon..

be safe.. ~q

with the revised image, my original Python script finds all the wells (some misaligned, but close enough)

the color detection does not work well, but that's just a bit of fine-tuning

I'll give it a try