I'm looking to embark on a project to automate a ventriloquist dummy.
The plan is twofold:
1 - Automate the mouth to move as sound is heard (if this is even possible without a noticeable delay).
2 - Automate the eyes to blink at a random interval from 1 - 20.
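For what it's worth, the random-blink part is easy to sketch in software. Here's a minimal Python sketch of the timing logic (the function names are my own, and I'm taking the 1 - 20 range to mean seconds):

```python
import random

def next_blink_delay(rng):
    """Delay in seconds until the next blink, uniform over 1-20 s."""
    return rng.uniform(1.0, 20.0)

def blink_schedule(n, seed=0):
    """n successive blink times, in seconds from start (seeded for repeatability)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += next_blink_delay(rng)
        times.append(t)
    return times
```

On a microcontroller you'd do the same thing non-blockingly: record the time of the last blink, pick the next delay, and fire the blink servo when the clock passes it.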
Questions:
Is sound recognition going to be possible, and with what latency? If so, what kind of physical hardware do I need?
What kind of actuators etc. would I need to pull strings attached to each element? (I don't know what's out there; they need to pull approximately 2 - 3 cm of string and release.)
Is sound recognition going to be possible, and with what latency? If so, what kind of physical hardware do I need?
Yes, but this will no doubt require you to learn the basics of DSP (Digital Signal Processing), FFT (Fast Fourier Transform), audio circuit design, and other advanced signal processing concepts. Don't get me wrong - you can totally do it and I'm not trying to discourage you - it's just I think you need to be warned as to what it might take to complete this part of the project.
For sound recognition, you might want to use a Raspberry Pi instead of an Arduino due to the RPi's speed and processing power.
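Note that for "mouth moves while there's sound" you don't need full recognition; a simple loudness (envelope) detector is enough. A minimal Python sketch of that idea, as might run on an RPi (the audio-capture side, e.g. a USB mic, is omitted; the threshold value is a placeholder you'd tune):

```python
import math

def frame_rms(samples):
    """Root-mean-square level of one audio frame (floats in -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_open(samples, threshold=0.05):
    """True when the frame is loud enough to count as speech."""
    return frame_rms(samples) > threshold
```

With short frames (e.g. 10-20 ms) the added latency from this kind of detector is small; most of your delay budget will go to the audio input path and the servo itself.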
Venomouse:
2. What kind of actuators etc. would I need to pull strings attached to each element? (I don't know what's out there; they need to pull approximately 2 - 3 cm of string and release.)
0-180 degree servos will do the trick. Just google "SG90 servo".
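To check that a small servo can actually take up 2 - 3 cm of string, you can approximate the string travel as arc length (travel = horn radius x rotation angle). A quick Python sketch; the 1.5 cm horn radius below is just an example figure, not an SG90 spec:

```python
import math

def servo_angle_for_pull(pull_cm, horn_radius_cm):
    """Servo rotation in degrees needed to take up pull_cm of string,
    approximating the string travel as arc length = radius * angle."""
    return math.degrees(pull_cm / horn_radius_cm)
```

With a 1.5 cm horn, a 2.5 cm pull works out to roughly 95 degrees of rotation, comfortably inside an SG90's 0-180 degree range.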
It's a simple project, but its 'difficulty' depends entirely on you.
One important detail you are missing: where is the sound going to come from? It sounds like you may want a remote sound sensor, so it's not as simple as a single unit.
I don't see the point if the mouth moves whenever there's any sound near the dummy.
If this is prerecorded sound, tape the sound on one stereo channel and DTMF tones on the other.
You will have to do this simultaneously.
While listening to the sounds, you will have to generate the tones manually as you want them.
Let's say you are using the 16 DTMF tones: #1 starts the mouth moving, #2 starts the head moving, #3 starts both the mouth and head moving, #4 blinks the left eye, #5 blinks the right eye, #6 blinks both eyes, #7 lifts the left hand, #8 lifts the right hand, #9 lifts both hands, #10 turns all the motors, #11 blows smoke out the ears, #12 #13 #14 #15 #16
Eye think you get the eyedea.
Next play the stereo sound back with the sound going to an amplifier.
The DTMF tones go to a DTMF to binary converter, then to the controller, then to the motors.
The motors will be in sync with the sound each time the tape is played.
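On the controller side, the scheme above boils down to a lookup from decoded tone to action. A minimal Python sketch; the symbol keys and action names here are my own stand-ins for whatever your DTMF decoder outputs and your motor routines are called:

```python
# Illustrative mapping from decoded DTMF symbols to dummy actions.
ACTIONS = {
    "1": "mouth_move",
    "2": "head_move",
    "3": "mouth_and_head",
    "4": "blink_left",
    "5": "blink_right",
    "6": "blink_both",
    "7": "lift_left_hand",
    "8": "lift_right_hand",
    "9": "lift_both_hands",
}

def dispatch(symbol):
    """Look up the action for one decoded tone; None for unassigned tones."""
    return ACTIONS.get(symbol)
```

In a real build, each value would be a function (or motor command) rather than a string, and unassigned tones would simply be ignored.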
Thanks all for the feedback. I should have clarified. For the mouth movement the sound source would be my voice. I would hope to be able to set the sensitivity to my voice only or perhaps only from a direct 'live' input.
I appreciate everyone's input. Just not sure what add-ons were available for Arduino, such as the SG90 servos.
I figured the mouth automation could be an issue, but I'm happy to go manual for now. It's only when the puppets aren't large enough for my hands that I'd like to look at those options.
If it is to be activated by your voice, won't it always be moving regardless of what you're saying?
Get a working system where you have to push a button (it could be remote). Once you get to that point, you can figure out an alternative method, but most of the work will be done.
Thanks all for the feedback. I should have clarified. For the mouth movement the sound source would be my voice. I would hope to be able to set the sensitivity to my voice only or perhaps only from a direct 'live' input.
You could wear a throat microphone so that the dummy only hears you speaking. You could even put a speaker in the dummy, but I suppose that would be cheating, as would having an assistant remotely control the dummy.
How about putting control switches in your shoes, though?
You could tap out signals and send them wirelessly to the dummy.
Hidden wearable computers like this were used to try to cheat casinos;