Reading Camera Pixel Data?

Hello everyone,

I am attempting a project for an upcoming science fair at my school that involves reading light signatures off a spectroscope. My plan is to aim a camera attached to an Arduino at the spectroscope, capture multiple rows of RGB pixel data, and average them to get accurate colour data across the spectroscope's scale. Since I believe this isn't possible from a JPEG image file, how can I capture pixel data directly from the camera and then process it, possibly as an array? If that isn't possible, I'd like to know how I can save the image in a file format capable of storing RGB data for a few hundred pixels, and then read it back for processing. Basically, I'm trying to develop a program that reads spectroscope spectra pixel by pixel.
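To make concrete what I'm hoping to do, here's a rough sketch of just the row-averaging step (plain Python on a PC; the tiny 3×5 image and its pixel values are made up, and a real capture would have hundreds of columns):

```python
# Average several rows of RGB pixel data into one spectrum profile.
# The "image" here is a made-up 3-row x 5-column grid of (R, G, B) tuples.
rows = [
    [(200, 10, 10), (180, 120, 10), (40, 200, 40), (10, 80, 220), (90, 10, 200)],
    [(210, 12, 8),  (176, 118, 12), (44, 196, 36), (14, 84, 216), (86, 14, 196)],
    [(205, 8, 12),  (184, 122, 8),  (36, 204, 44), (6, 76, 224),  (94, 6, 204)],
]

def average_rows(rows):
    """Return one (R, G, B) tuple per column, averaged over all rows."""
    n = len(rows)
    cols = len(rows[0])
    profile = []
    for c in range(cols):
        r = sum(row[c][0] for row in rows) / n
        g = sum(row[c][1] for row in rows) / n
        b = sum(row[c][2] for row in rows) / n
        profile.append((r, g, b))
    return profile

print(average_rows(rows))  # one averaged (R, G, B) per column
```

The averaging should smooth out per-row sensor noise; the part I'm stuck on is getting the `rows` data out of the camera in the first place.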

I'm new to Arduino so any advice helps, thanks!

Maybe instead of a camera, try a single-row CCD detector, like you'd find in a scanner. How wide is the spectrum you are trying to capture?

I couldn't find a decent CCD sensor that would work with colour images, but the sample width doesn't matter. The spectrum would range from violet to red. The more important thing I'd like to know is how to break a camera image down into pixels.

Do you want to "break down a camera image into pixels", or read the values of a spectrum as intensity vs. wavelength? The first is not a good job for an Arduino; it lacks the memory and speed for that. The second is an Arduino-doable job, but you need to rethink how to do it.

how can I use a camera to capture pixel data directly from the camera and then process this information possibly as an array?

Pick a camera that allows access to the raw image data, and follow the directions.

However, keep in mind that getting absolute colour/intensity information out of the individual RGB values per pixel requires a serious effort at calibration.
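One small piece of that calibration, mapping pixel column to wavelength, can be done linearly from two spectral lines of known wavelength. In this sketch the pixel positions are made up, and the reference wavelengths are roughly the mercury blue and green lines:

```python
def pixel_to_wavelength(px, px1, wl1, px2, wl2):
    """Linear two-point wavelength calibration.
    px1, px2: pixel columns where two known spectral lines appear;
    wl1, wl2: their known wavelengths in nm."""
    return wl1 + (px - px1) * (wl2 - wl1) / (px2 - px1)

# Hypothetical calibration: a line at pixel 100 is taken as 436 nm and
# a line at pixel 400 as 546 nm (approximate mercury lamp lines).
print(pixel_to_wavelength(250, 100, 436.0, 400, 546.0))  # 491.0
```

Absolute intensity calibration is much harder: the sensor's spectral response, lens, and any automatic gain all distort the RGB values.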

I don't think being new to Arduino matters, as I think you are on the wrong tram anyway. I believe this calls for some serious image analysis, and a PC is the place to do it, so you might as well send the image there in the first place. That means leaving the Arduino out of the game, with the money better spent on a camera suited to your needs. If you don't already have a suitable camera, note that even older digital cameras commonly offered RAW, BMP or TIFF output, despite the eye-watering memory burden. The latter two formats may suffice and be easier, as RAW was sometimes proprietary.

Another option: MATLAB and its image processing tools.

This sort of thing, that is getting an image into a program and reading pixels, can easily be accomplished on the Raspberry Pi. The Pi has its own high-resolution camera, or you can use a plug-in USB webcam.
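As a minimal sketch of that pixel-reading step, assuming the capture has been converted to a plain-text PPM (P3) file, a format simple enough to parse with no libraries at all:

```python
import tempfile

def read_ppm_p3(path):
    """Parse a plain-text PPM (P3) image into rows of (R, G, B) tuples."""
    with open(path) as f:
        # Tokenise the whole file, stripping '#' comments per the PPM spec.
        tokens = [t for line in f
                  for t in line.split('#')[0].split()]
    assert tokens[0] == 'P3', 'not a plain PPM file'
    width, height = int(tokens[1]), int(tokens[2])
    # tokens[3] is the max sample value (e.g. 255); samples follow it.
    vals = [int(t) for t in tokens[4:4 + 3 * width * height]]
    pixels = [tuple(vals[i:i + 3]) for i in range(0, len(vals), 3)]
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

# Demo with a tiny hand-written 2x2 image; a real capture would come from
# the Pi camera, converted to PPM with a tool such as ImageMagick.
with tempfile.NamedTemporaryFile('w', suffix='.ppm', delete=False) as f:
    f.write("P3\n2 2\n255\n255 0 0  0 255 0\n0 0 255  255 255 255\n")
    path = f.name
rows = read_ppm_p3(path)
print(rows[0])  # [(255, 0, 0), (0, 255, 0)]
```

From there the rows are ordinary Python lists, ready for averaging or plotting.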

However, I think the whole project is doomed by many factors. The main one is that, despite what you might think, you will not get consistent results, due to things like stray light and variable/automatic camera settings.

Grumpy_Mike:
This sort of thing, that is getting an image into a program and reading pixels, can easily be accomplished on the Raspberry Pi. The Pi has its own high-resolution camera, or you can use a plug-in USB webcam.

However, I think the whole project is doomed by many factors. The main one is that, despite what you might think, you will not get consistent results, due to things like stray light and variable/automatic camera settings.

Your concerns, Mike, can be addressed by hacking an unused flatbed scanner for its 800 (or so) × 1 light-sensor array. A grating or prism can be focused onto the linear detector, which is then repeatedly sampled to obtain an average position of the bright and dark bands. A slit or pinhole can exclude stray light and sharpen narrow bands into sharp lines. A cylindrical lens over the aperture will tolerate aiming errors as long as the source is the brightest object in the field of view.

I think an Arduino can process this data in real(ish) time.
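The repeated-sampling idea can be simulated without the hardware. This sketch (Python, with synthetic noisy readings; the 800-element sensor width, noise level, and band position are all assumptions) shows how averaging many sweeps pulls a band out of the noise:

```python
import random

random.seed(0)
SENSOR_WIDTH = 800   # assumed number of photosites in the scanner bar
N_SAMPLES = 64       # how many sweeps to average together

def true_signal(i):
    """Stand-in for the real spectrum: one bright band near photosite 300."""
    return 200 if 290 <= i <= 310 else 20

def read_sensor():
    """Simulate one noisy sweep of the linear sensor."""
    return [true_signal(i) + random.gauss(0, 10) for i in range(SENSOR_WIDTH)]

# Accumulate many sweeps; noise shrinks roughly as 1/sqrt(N_SAMPLES).
acc = [0.0] * SENSOR_WIDTH
for _ in range(N_SAMPLES):
    for i, v in enumerate(read_sensor()):
        acc[i] += v
avg = [a / N_SAMPLES for a in acc]

peak = max(range(SENSOR_WIDTH), key=lambda i: avg[i])
print(peak)  # should land inside the bright band (290-310)
```

On a real Arduino the same accumulate-and-divide loop would run over `analogRead()` values, one photosite at a time as the sensor is clocked out.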

Yes, I am sure that a one-dimensional sensor array like that would do, but it is not a camera, and the OP might not be up to writing a driver to interface with it.

My concerns come from a project I did involving a physical music sequencer using pegs with coloured tops. It sounds simple until you try it; the light levels were a nightmare.

Do you actually "need" readings off the spectrometer, or readings off the sample?

A single color sensor "viewing" the sample would be able to detect changes and do whatever fun stuff you want done, while your spectrograph simultaneously lets people observe the scientific reasons for the color change.

Surely everyone has seen the Public Lab Spectrometry page!