Thanks for the prompt responses. I am experimenting with The vOICe, a well-known sensory substitution system for blind users. The system is capable of great subtlety of interpretation.
I want to take it down to a minimum level of resolution and simplification, to see how much functionality remains at the low end.
I hope to see it employed as a wearable collision-avoidance/reaching/aiming adjunct to the familiar white stick.
I am aware that Raspberry Pi developers are working, via Android 4.0, on a truly wearable, low-cost, full-performance version.
Even so, I still need to find out how to sample a black-and-white camera's output, pixelate it at reduced resolution, encode the audio, and play back the mixed audio chord as a left-to-right scan of the picture columns.
The whole process is repeated at one frame per second.
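To make the pipeline concrete, here is a minimal sketch in Python/NumPy of the kind of processing described above: block-average a grayscale frame down to a small grid, then render each column as a chord of sinusoids, with row position mapped to pitch and brightness to loudness, swept left to right over one second. The grid size, sample rate, and frequency range are my own illustrative assumptions, not The vOICe's actual parameters.

```python
import numpy as np

SAMPLE_RATE = 8000        # Hz; a low rate to keep CPU and memory modest (assumption)
GRID = 16                 # hypothetical reduced resolution: 16x16 pixels
FRAME_SECONDS = 1.0       # one left-to-right sweep per second, as described
F_LO, F_HI = 500.0, 5000.0  # pitch range for bottom/top rows (assumption)

def pixelate(frame, grid=GRID):
    """Downsample a 2-D grayscale frame (0..255) to grid x grid by block averaging."""
    h, w = frame.shape
    frame = frame[:h - h % grid, :w - w % grid]       # trim to a multiple of grid
    bh, bw = frame.shape[0] // grid, frame.shape[1] // grid
    blocks = frame.reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3)) / 255.0           # grid x grid, values 0..1

def encode_frame(pix):
    """Render one 1-second sweep: each column becomes a chord of sinusoids.
    Row index -> frequency (top of image = highest pitch), brightness -> loudness."""
    rows, cols = pix.shape
    samples_per_col = int(SAMPLE_RATE * FRAME_SECONDS) // cols
    t = np.arange(samples_per_col) / SAMPLE_RATE
    freqs = np.geomspace(F_HI, F_LO, rows)            # top row gets the highest pitch
    out = []
    for c in range(cols):
        amps = pix[:, c][:, None]                     # column brightnesses as amplitudes
        chord = (amps * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out.append(chord / rows)                      # crude normalisation to |x| <= 1
    return np.concatenate(out).astype(np.float32)

# Example: a synthetic 64x64 frame with a bright diagonal line
frame = np.zeros((64, 64), dtype=np.uint8)
np.fill_diagonal(frame, 255)
audio = encode_frame(pixelate(frame))                 # 8000 float samples, one sweep
```

The resulting `audio` buffer could be written to a sound device or a WAV file; a real device would regenerate it once per second from fresh camera frames.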
I would appreciate some ballpark figures for the bandwidth, storage, and processing speed required for a device as described. Is it really beyond the power of one of the bigger 'duinos? Thanks...
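As a rough back-of-envelope (using my own assumed parameters: a 16x16 grid, 8-bit pixels, 1 frame/s, and 8 kHz 16-bit mono audio), the numbers come out small:

```python
# All figures below follow from assumed parameters, not measured hardware.
grid = 16
frame_bytes = grid * grid              # pixelated frame storage: 256 bytes
sample_rate = 8000                     # Hz, 16-bit mono PCM
audio_bytes_per_s = sample_rate * 2    # 16,000 bytes/s if a full sweep is buffered

# Synthesis cost: one sine evaluation per (row, sample) pair per second
sine_evals_per_s = grid * sample_rate  # 128,000 evaluations/s

print(frame_bytes, audio_bytes_per_s, sine_evals_per_s)
```

At this scale the storage is trivial, and on the order of 10^5 sine evaluations per second is comfortable for an ARM-based board; on an 8-bit AVR 'duino it would likely require fixed-point arithmetic and a wavetable lookup rather than floating-point `sin()`, but does not look obviously out of reach.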
(There is an excellent information website at seeingwithsound.com, if you are interested.)