HELP project with 300+ photoresistors!

hey guys,

i've just been getting into the arduino and there are numerous questions i'd like to ask. hopefully someone can give me suggestions on how to approach this project.

i'm building a small labyrinth for a hamster or mice to run in. the flooring of the labyrinth would be tiled with photoresistors. each one would trigger a 1 sec soundclip when the hamster is on top of the photocell. i know i'd be needing the wave shield, and i've found an analog shield from applied plantonics.

presently i'm not very sure about this analog shield. correct me if i'm wrong - i have no experience with multiplexers. would 48 pins let me use 12 multiplexers = 192 inputs??

is there another way to do it? please give me suggestions! i'm not very experienced with the arduino. feel free to comment :)

this post confused me a lot. i'm not clear on what you want to do. so far what i have understood is that you want to play a sound clip when a photocell is blocked, but i can't associate that with a rat running :( well, if you just want to sense whether the rat is running or not, it would be a lot easier ;)

thanx for your reply!

sorry for the confusion, but yes, the soundclip is to be played when the photocell is blocked from light. the project is for the hamster to generate a piece of music as it tries to find its way out of the maze. so just sensing the rat's movement is not enough :wink:

hmm you could try building a HUGE matrix keypad 8-) in that case it will be easier as well, and you can play different music for different keys

Some multiplexer chips like the one below, controlled via parallel or serial interface chips, could be used to sample the photocell outputs.

http://www.mouser.com/ProductDetail/Texas-Instruments/CD74HC4067E/?qs=sGAEpiMZZMutXGli8Ay4kIw6S1qvfJXbnY%252bcJhbp9Pk%3d

For the amount you want to read from, I think a setup like this is what you need. A 1-of-16 data selector is used to select which of 16 data selectors to read from.

With address lines A7-A4 at 0000 and Enable high, S1 of U1 will go high to enable U2. Then the state of the photoresistor selected by A3-A0 is passed from U2's S1-S16 to the D output to be read by the arduino. Disable U1, change A3-A0 to the next output, re-enable, and read the state of the next photoresistor. Continue for A3-A0 from 0000 to 1111. Then update A7-A4 to the next address, 0001, and continue with reads from U3, then U4, etc., up to U17 (only U2-U9 partially pictured). Thus 8 address lines (sourced by a shift register from the arduino) and 1 Select line can be used to read the state of 16 x 16 = 256 photoresistors. Need more inputs? Add another Select line from the Arduino to another bank of 16.
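The scan loop that paragraph describes could be modeled in plain C++ like this (a host-side simulation, not Arduino code - the real version would set the address lines and digitalRead() the D pin; the names and the 16x16 sizing are just for illustration):

```cpp
#include <array>
#include <cstdint>
#include <utility>

// Simulated 16x16 photoresistor field: true = blocked (rodent on top).
using SensorField = std::array<std::array<bool, 16>, 16>;

// Model of the scan described above: the high address nibble (A7-A4)
// enables one of 16 muxes via U1, the low nibble (A3-A0) selects the
// channel within that mux. On real hardware each read would be a
// digitalRead() of the D output after updating the address lines.
std::pair<int, int> scanForBlocked(const SensorField& field) {
    for (uint8_t hi = 0; hi < 16; ++hi) {        // A7-A4: which mux (U2..U17)
        for (uint8_t lo = 0; lo < 16; ++lo) {    // A3-A0: channel within mux
            if (field[hi][lo]) {
                return {hi, lo};                 // first blocked cell found
            }
        }
    }
    return {-1, -1};                             // nothing blocked this pass
}
```

In the firmware, each (hi, lo) pair would map to one sound clip to fire on the wave shield.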

You haven't given any info on the photoresistor/photodiode, so I didn't show any here. I imagine they'd have voltage applied to one side and the other side would go to the multiplexer, so you'd be scanning for a change in voltage as the rodent passed over them? http://datasheets.maxim-ic.com/en/ds/DG406-DG407.pdf

Hmm, reading the data sheet some more, I don’t think U1 will work like that - a standard demux like a 74LS154 would probably be needed instead - only it has an active low select vs active high, so a set of inverters would be needed on its outputs.
Logic levels aside, the 1 of 16 selecting from 1 of 16 to be read is the same concept.

Just a thought:

The Wiimote camera (PixArt sensor) is an awesome device - it gives x/y pairs for the top IR sources in its field of view. Now, you can do one of two things: attach an IR LED to Mr. Whiskers, but that might make him cranky.. or...

Drive not 300 sensors, but 300 IR LEDs.. and do it row/column scanned so that only one is lit at a time. There's code to do this around here.. I believe they call it Charlieplexing. The key is that they are scanned through, and one of them is going to be covered by Mr Whiskers... so the Wiimote camera will come back with a fat old goose egg for that location.

The point is, at any given time you will know which LED is obstructed by seeing which LED doesn't return a blip. Your output comes from the PixArt sensor as an easy-to-translate x/y pair.

The advantage of this is that the maze can be changed entirely, since the output is x/y mapped. In addition, the sensor is capable of a much larger maze, or much tighter spacing, if for some reason you want it.. That, and I have to imagine LEDs are cheaper than LDRs or phototransistors..

One last variation on that same theme: though you have to map your own coordinates instead of getting nice premade x/y pairs, simply place an infrared phototransistor as the sensor above the maze. As you scan through the LEDs, one of them isn't going to make a blip on the phototransistor... because it's covered. It just seems to me that the design would be more flexible using emitters below the mouse and a sensor above...

The idea is based on a project I did a million years ago- a light pen for an Atari computer. Faster than your eye can see, a location is lit on the monitor, one at a time. When the pen (a phototransistor) is over the location when it is flashed on, it "sees" the flash, and the result is you know WHERE that transistor is. Same basic idea... except you are placing the phototransistor much further away, so your working "resolution" is a single on/off.. but that's all you need.

"The WIImote camera"

This also seems similar to what some people are managing to accomplish with the open-source drivers for the XBOX "Kinect" vision processor.

It's a bit (or a LOT) scary that the state of computing might have reached the stage where vision processing might be easier and cheaper for analyzing rodent movement than 300 individual sensors, but ... there you are.

If you want to stick with sensors, I would think you can significantly simplify the overall design by relying on the fact that only one sensor can be active at a time.

"If you want to stick with sensors, I would think you can significantly simplify the overall design by relying on the fact that only one sensor can be active at a time. "

That’s assuming the sensors are spread far enough apart that the rodent can only block the light to one at a time.

USF, what overall size are you planning for the labyrinth?

Okay, here's the part I really wanted for U1 - the MAX6921. 20-bit shift register, can do the 4 address lines, the 16 select lines, easy interface via shift clock/data interface to the arduino. Outputs update when Load line is toggled. Can request free samples also. http://datasheets.maxim-ic.com/en/ds/MAX6921-MAX6931.pdf

Install this on a breakout board from Schmartboard, wire-wrap to the demux chips, wire out to the sea of photoresistors, do some coding to scan them in a big loop and send out the sound request when you see one ...

open it here, otherwise it pulls up the old picture for some reason http://www.crossroadsfencing.com/Big_data_select.jpg

Feed Dout to Din on a second one if you need more demultiplexing.
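A host-side model of that 20-bit shift-and-load scheme, just to pin down the bit order (plain C++; on the Arduino this would be three digitalWrite() lines for data, clock, and load, and the struct/function names here are illustrative):

```cpp
#include <cstdint>

// Model of clocking 20 bits into a MAX6921-style shift register:
// data is sampled on each clock edge, MSB first, and the outputs
// only update when the LOAD line is toggled.
struct ShiftRegister20 {
    uint32_t shift = 0;    // internal shift stages
    uint32_t latched = 0;  // outputs (valid after load())

    void clockInBit(bool bit) {
        shift = ((shift << 1) | (bit ? 1u : 0u)) & 0xFFFFFu;  // keep 20 bits
    }
    void load() { latched = shift; }
};

// Shift out a 20-bit word (4 address bits + 16 select bits), MSB first.
void shiftOut20(ShiftRegister20& sr, uint32_t value) {
    for (int i = 19; i >= 0; --i)
        sr.clockInBit((value >> i) & 1u);
    sr.load();  // toggle LOAD so the outputs update all at once
}
```

Daisychaining a second register (Dout to Din, as above) would just mean clocking 40 bits before toggling LOAD.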

Both the MAX6921 and DG406 have a list price about twice that of the AVR used in an Arduino. You might be better off connecting 50 arduino-like CPUs to your sensors (6 sensors each, without needing to get clever at all) and using some sort of digital bus for communicating with your main CPU. If you can manage to connect the sensors to the AVR digital pins using one of the normal techniques, you should be able to get 16 or so sensors per AVR, cutting the total count down to about 20 chips... You can use them in "minimal" mode (internal oscillator, no crystal, no support circuitry to speak of...)

I don't know why everyone is so positive about the idea of multiplexing MANY sensors or outputs to a single CPU, even after the cost of the sensors and multiplexors goes WAY over the cost of microcontrollers. Part of the point of microcontrollers is that you plunk them down everywhere, replacing traditional electronics with SMART circuits that behave more appropriately to your application.

last but not least, how about this: process NTSC video

The mouse will need to contrast with the background.. white mouse, black floor to the maze, etc. We'll need a clean signal, I would think; contrast will enhance this signal.

Somewhere on this site is a project where an LM1881 (or similar) video sync processor was used to grab vsync. After you grab vsync, all you need to do is measure how long between vsync and the voltage level change (the luminance change due to our contrast, above) to know where Mr. Whiskers is located within the image. The video is scanned left to right, top to bottom - and you only care about finding the first hit on a large contrast (voltage) change.

Digikey sells the chip for a little over three bucks:

http://search.digikey.com/scripts/DkSearch/dksus.dll?Detail&name=LM1881N-ND

I'm sure you can find a video camera of some sort with NTSC out around someplace.. you likely have one or can borrow one. Black and white is fine, you only care about Luma, not Chroma.

Look for the project on the boards here. Even as a hack, I think he was able to get 16x16 resolution "vision" without too much trouble-- that's 256, so 300 isn't much more... 20x20 sensing gives 400... and the bottom line is that you don't need to mount and wire hundreds of sensors, just re-use some code and get a three buck chip. Time, effort, and parts savings...

As far as I can see, given that the camera is likely to be gotten essentially for free, this would be the cheapest (and easiest) route..

thanx for the help everyone :)

i think the ntsc idea is great, but being a newbie, i kind of want to stick to the original idea and learn more about multiplexing from hands on experience from this. i have a weekend job that was intended to fund my arduino hardware stuff.

as to more microcontrollers, it just occurred to me that i could use the jeenode. does anyone have experience with this? http://jeelabs.com/products/input-plug . how does this work? i'm having a hard time understanding where the ''wireless'' part is.

and lastly to crossroads, how much would the specs of the photoresistors affect the project? i'm thinking just standard ones from anywhere.. i'm not an electronics guru, so forgive my ignorance.

"just standard ones" http://search.digikey.com/scripts/DkSearch/dksus.dll

Okay, they vary in cost from $1.58 each up to $2.79 each (and you are talking a large qty, correct?). They vary in size, and they vary in how much resistance they have when illuminated vs dark. As the Maxim chips seem kind of pricey, perhaps a design better suited to creating a digital on/off signal would be better.

This part is not bad, $1.58 unit, should drop with qty.

http://search.digikey.com/scripts/DkSearch/dksus.dll?Detail&name=PDV-P9001-ND

Light-on resistance is 4-11K; with a 100K pullup resistor to 5V, the voltage out would be at most 5V*11,000/(11,000+100,000) = 0.495V, which is a good logic low. As the light goes away the resistance climbs, but note it has to climb well past the 100K pullup before you get a solid logic high - 20K would still only give 5V*20,000/(20,000+100,000) = 0.833V, still a logic low, while a dark resistance of, say, 500K would give 5V*500,000/(500,000+100,000) = 4.17V.
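That divider arithmetic is easy to check on the PC - a minimal sketch, assuming the photocell goes to ground with the 100K pullup to 5V (the function names and the 500K dark-resistance figure are just for illustration):

```cpp
// Voltage at the junction of a photocell (to ground) and a pull-up
// resistor (to Vcc). Higher photocell resistance -> higher output.
double dividerOut(double rPhoto, double rPullup = 100e3, double vcc = 5.0) {
    return vcc * rPhoto / (rPhoto + rPullup);
}

// Rough AVR digital-input thresholds at Vcc = 5 V: below ~1.5 V
// (0.3*Vcc) reads low, above ~3.0 V (0.6*Vcc) reads high.
bool readsAsHigh(double v) { return v >= 3.0; }
```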

Now, if you have a bunch of shift registers such as http://www.nxp.com/documents/data_sheet/74HC_HCT597_CNV.pdf - which are $0.31 each and have 8 inputs each - you can read in long words and then use software to go through and find the bit that changed, to trigger the note/tone/sound, whatever.

As a hardware guy, I prefer the method I started describing, where you read the data from a specific location - but that is coming from a speed-required background. That way you need 17 chips for 256 bits. Using inexpensive shift registers, you need 32 chips to read the same bits.

The other thing I also consider in one-off projects is how much wiring is needed. Obviously you will be spending more time just connecting +5 & Gnd to all the additional chips, mounting 2x the wirewrap sockets (a lot easier to fix mistakes or tweak the design that way), etc.

For a layout, you could have 2 chips on a board with the pullup resistors & a decoupling cap, and 2 leads off to each of 16 photoresistors - assuming you will need the photoresistors spread out every couple of inches along the maze path (or laid out in a big matrix where the maze could be re-laid out in any manner). Build up as many boards as you need for the number of photoresistors you use. Then spread the boards out some, with the +5/Gnd, data latch, and shift clock signals coming from a central point to all boards, and data in/out daisychained along. Then you could split up the assembly work also. Or maybe someone has shift-in register boards like these built already and you just need to add your photoresistor and pullup resistor.

If you're trying to keep costs down, I would suggest something like the RBBB http://shop.moderndevice.com/products/rbbb-kit that you could assemble yourself as well (won't take long).

The jeenode takes an arduino and organizes the output into ports, then you get some plug-in modules to do different things.

Hey Rich-- I found the example I had been thinking of:

http://www.davidchatting.com/arduinoeyeshield

which is the entry for "Images- Video" in the Input section of the Playground: http://www.arduino.cc/playground/Main/InterfacingWithHardware#Input

I think (at least it makes sense to me, in theory) that what I threw out there might be even easier, in that we would be looking for the first contrast (voltage) change since the vsync pulse, which would allow the position to be calculated from the delay time between frame start and the contrast change. Since he's getting decent grid output from the code he's using (which is much more complex), just finding that first transition should be a lot easier - he's grabbing it for an entire sensing matrix, so getting just the first transition might even be done with a comparator and an interrupt?

In fact, since we are dealing with known voltage levels, a/d sampling might obviate the need for a hardware sync chip at all. Think of the reverse of the TVout lib: sync on one pin, 300 ohm resistor to set that voltage; video on another, 1k on that one to set that level. With two pins (two voltage levels) we get NTSC composite. It makes sense to me that the reverse ought to be possible, using set-point resistors to detect sync and signal respectively, using two digital input lines.

Digital input can be sampled TONS faster than an analog in, and we only care about a monochrome video signal (not even greyscale - 1/0 is all that's needed), so no RC timer thing is needed for level detection. From there, it's just a matter of time-since-vsync to locate the contrast (voltage) level change of Mr Whiskers against the background - and that's a speed range (60Hz for the full screen, just slice that up as you wish) that the Arduino can handle easily. At least that's what I see workable in my head... and, reasonably, I can't see why you couldn't use several thresholds, one per pin. By doing this, you could get as many grey detection levels as pins you are willing to dedicate. Thoughts, anyone?

I never cease to be amazed just how much can be done with the Arduino and a little bit of creativity.. and someone who doesn't know any better than to do something that "can't possibly work"... Take a look through some of those COMPUTE! and other magazines from the late '70s and early 1980s... I cut my teeth reading those, and still remember some of the little tricks that allowed Z80s and 6502s to pull off small miracles. Radio Shack ate my allowance, to buy a phototransistor to make a light pen for an Atari 400... or an LED blinky POV array connected to the joystick ports on a C-64...

Dang, that is clever for an 8-bit processor! I saw somewhere else, can't find the link, where someone made an art exhibit kind of display where video was processed and turned (triple sided?) pieces of wood to show a picture of what the camera took in. More than 8x8 in size, wish I could find that again, think it was this summer before I started arduino-ing.

Found it! http://www.smoothware.com/danny/woodenmirror.html

Looks to require a bit more processing than an arduino is good for tho.

Hmm. Dangit, now I'm going to have to add that to the list.. a single-chip arduino video digitizer. Monochrome first, of course. It just seems that reversing the process of the TVout lib ought to be possible, at least from the 10,000 foot view. You won't get chroma, but the voltage should give average luma, even though we don't get the finer-detail signal to decode chroma... it works for generating video, so I can't see why doing the reverse would be any more (I would think less) CPU intensive.

"Complex" video images with many brightness levels would be problematic, requiring a lot more finesse - but if all we are looking for is a luma "there/not there" voltage level at a particular time (since vsync) compared to the "background" voltage (black level), that should be pretty easy. Accuracy could be taken to ridiculous levels by not using continuous sampling, but instead (as we are looking for a state change!) using interrupts driven from the state change. The internal timers give way more resolution than the signal capture would need. If I remember correctly, isn't there a built-in comparator in the ATmega328 for the express purpose of state-change interrupts?

Dunno, going to have to do a little background reading on the porch signals and sync timing.. but considering NTSC signal generation still leaves enough oomph in the processor to run an analog sampling loop 128 times, then a Fourier transform, then a series of 128 line draw calls (and that code even runs bounds-checking against the generated video resolution), ten times a second or so, as in my spectrum analyzer jobbie... I'm going to weigh in that the Arduino does have the execution overhead to do it, with a little creative coding.

But, enough hijacking the mouse maze thread - I'll post to let y'all know if it indeed looks workable after a little more research.. though I think I remember seeing a post on the thread for the TVout lib that a video overlay has been accomplished using TVout and sync detection.. which requires all the moving parts minus actually sampling the video signal voltages..

My wealth of useless information is ever expanding.

I shall need to try this when I get my good black and white camera back.

Analog video runs from 0.5 to 2Vpp; it should be possible to scale and buffer it to feed the arduino ADC, but I'll have to figure that out. It will allow 'white' detection.

These will generate horizontal and vertical sync pulses which will allow the location to be determined from the video when it goes white.

http://www.national.com/JS/searchDocument.do?textfield=lm1881&x=0&y=0

http://www.intersil.com/products/deviceinfo.asp?pn=EL1883

It will be interesting.