Bat vision

This post is about feasibility at a very general level.

I had this idea about bat vision. The projects I've seen are mostly about buzzers that alarm when an object is near. My idea would go further. The user would wear IR goggles (with a smart phone). A distance sensor would be attached, pointing straight ahead. The distance would be translated into a red spot on the phone screen (on both the left and right halves). The farther the object, the dimmer the spot. A gyro and an accelerometer would sense the movements of the user, making the spot move accordingly on the screen while new spots appeared all the time.

The distance sensor would provide the data to an Arduino, which would send it on to the phone. I guess a phone has more processing power for all the 3D calculations than the Arduino. The gyro and the accelerometer would probably also be the ones in the phone, not sensors attached to the Arduino. The only thing the Arduino would do is measure the distance and send the data to the phone. Perhaps one or two extra distance sensors would be worth adding.

Even though only one distance sensor would be used, I bet the view would still be 3D. The distance would translate into the dots being nearer to or farther from each other on the left and right screen halves. And when the user moves and turns his head, the 3D geometry calculations place the old dots in new places on the screen halves.

From what I've understood, this would resemble what bats actually "see". Bats move their ears and their head. The user of these bat goggles only moves his head. The "image" that bats paint in their heads probably stays there for a moment; they don't just "see" one single dot. If my bat goggles only drew one dot, without keeping it in memory and moving it around, the user might still (after some training) develop some kind of image memory. But having the phone screen remember the dots as long as they stay in the visible sector somewhat resembles the bat brain's "image" processing.


The user would wear IR goggles

Did you mean VR goggles?

Yes, VR goggles.

In theory you can create a 3D "picture" by properly analyzing the signal received (from 2 receivers). I have no idea how difficult (if possible at all) this is with an Arduino, but some crude image should be doable.

How many different projects do you have going at the same time?

Slow down…

Tom… :slight_smile: :o

How many different projects do you have going at the same time?

Slow down...

I know it's a bad habit not to finish one project before developing new ideas. But ideas don't give an excrement whether I have finished anything or not. They pop up anyway. And I need to write down and discuss them.

In theory you can create a 3D "picture" by properly analyzing the signal received (from 2 receivers).

I only need one receiver. This is not a camera in the traditional sense, which needs two lenses resembling the left and right eye. The one and only receiver just measures the distance to one spot straight ahead. That spot becomes a dot - well, two dots - on the phone screen. As I wrote in the OP, the calculations have to be performed on the phone. The program receives the distance data from the Arduino and stores the dot as 3D coordinates relative to the bat goggles. The coordinates are saved in a 3D array (matrix, whatever), and the phone continuously updates the left and right screen halves, plotting all points in the array according to the present position and direction of the goggles. Points disappearing from the view disappear from the array.

So you're going to fill the whole thing one dot at a time? Think about how long it would take to fill that array. Bats have two ears and a brain to do the signal processing so they can put the whole picture together with one ping.

Does the sensor move back and forth and up and down? Or does the person wearing the headset only get information about what is in the one spot directly in front of them and they have to move their head around to get more? How will they know to hold their head still between the ping and response so they know they're getting the right distance?

You're right. One ping from the bat probably adds a lot more than just one "dot" in the image. It probably hears echoes from different directions, which add to the picture.

The sensor doesn't move, although I could use more sensors. Or if it were an infrared laser sensor with a tiltable mirror, it could scan a larger sector very fast. My idea was that the user moves his head. He just aims at the spots he wants added to the image - he's looking at things he hasn't seen yet. I have no idea at what rate the data can be transferred, received and processed.

The head doesn't need to be still, as long as the response is received. At ping time, the position and direction of the goggles are recorded. At receive time (which is about 18 ms later at 3 m, which is probably the farthest an ultrasonic sensor can measure) the head hasn't moved much, and the sensor can probably receive the signal quite well. The distance should be calculable (is that a word?) for the ping time, and the dot can be projected onto the phone screen.
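The ~18 ms figure is easy to sanity-check: the ping travels out and back, so the round-trip time is t = 2d/c with the speed of sound c ≈ 343 m/s at room temperature:

```cpp
#include <cstdio>

// Round-trip echo time for an ultrasonic ping: t = 2*d/c,
// with c ~ 343 m/s. Returned in milliseconds.
float echoMillis(float distM) {
    return 2.0f * distM / 343.0f * 1000.0f;
}

int main() {
    printf("3 m: %.1f ms\n", echoMillis(3.0f));   // just under 18 ms
    return 0;
}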

Maybe the analogy to bat vision is not that comprehensive. Nevertheless, I believe the goggles could work.

Do you actually think you are the first person to think of doing this?

Do you see any such products on the market?

Ask yourself what that tells you about the feasibility of having a successful project.

I never heard anything about the presumed mental picture a bat creates with its echolocation. But I love the creatures; I saw a few hunting just last night.

Their pulses are not that frequent - about two a second - but can come much faster if they're near an object (to be able to "see" that mosquito more clearly, close by, and then eat it). Considering the way they fly, they also must be able to detect much further than three meters away. For example, bats fly at greater heights than that while following rivers and roads (which they do as a navigational aid).

I'm quite sure that they create a complete 3D picture of their environment in just a single ping - not dot by dot or in 2D. Otherwise flying at high speed in those swarms of bats would simply be impossible. Oh, and in the meantime they have to filter out the pings of thousands of other bats within earshot.

Bat brains are of course wired to navigate a 3D world (we humans mostly navigate a 2D world - forward, backward, left, right - going up and down isn't that easy for us) and to decode echolocation signals. On top of that, they're considered pretty intelligent creatures, so they have quite a bit of brainpower to do all that. An Arduino cannot come close to that.

Incidentally, bats turn off their sonic radar when they are in an environment that they know. Hence if you are caving, a bat can, and often will, fly right into you - no doubt to the surprise of both. It surprised me, anyway.

The other thing I have noticed is that if you have a bat detector that converts the pings down to audio frequencies, and you have it playing through a speaker turned up, the bat will make evasive maneuvers when it hears itself.

I haven't taken my bat scanner into bat homes (I know some pre-WW2 underground tunnels and rooms that are home to bats - I've had bats fly around me at high speed without hitting me, while there was really little space left in that narrow tunnel), but in the pitch dark they have to do something to know where they are. Dead reckoning only gets you so far. They came close enough for me to feel the wind of their wings.

The bat can likely indeed hear the output of a bat detector, but I doubt it realises it's its own sound. Too different a frequency, and a bit delayed (due to the conversion).

My interest is purely academic. I see no market value whatsoever in this idea. It doesn't solve any problem that could be solved more conveniently and better using some other technique. Like an IR night vision camera.

Years ago, before the era of PCs, someone developed a strange vision aid for blind people. It was a camera and some electronics that transferred the video signal to a matrix of small pegs driven by solenoids. The blind person had this "blanket" matrix pressed against his back, and the pegs essentially pressed the camera view onto his back. He said the equipment created "images" in his brain. The human brain has a large capacity for image processing, and if the stimuli come from something other than conventional eyesight, the brain does its best to create the images anyway. My guess is that this bat goggle thing could make our brains build up a sense of the environment - an image more complete than just the discrete points appearing on the phone screen.

I don't think the nonexistence of such a device is proof that it's impossible. I guess it would be quite inconvenient to use something like that, and maybe it is out of reach of an Arduino and cheap sensors, but in theory something like this is possible - as bats show. I still don't understand why you don't want to use two receivers to get a 3D image in one ping. It needs a lot of processing and a deep understanding of how to analyze the results, but if you want to make it usable it is the only way.

I still don't understand why you don't want to use two receivers to get a 3D image in one ping.

Perhaps it is because:-

It needs a lot of processing and a deep understanding of how to analyze the results

and a bit delayed (due to the conversion).

There's no delay, because the conversion was a heterodyne.

How much did you study the echolocation of bats? Did you read that they use a sweep of frequencies to identify objects and their surroundings, not just a single frequency? What sounds like a single beep to us is a whole range of frequencies.


A frequency sweep would need two receiving sensors, sensing the phase shift to determine the direction of the different frequencies. And the ping would have to be spread over a much wider sector than a typical ultrasonic distance sensor is normally capable of. As I already stated, my idea is merely a pale imitation of real bat vision.

I still don't understand why you don't want to use two receivers to get a 3D image in one ping.

My idea is to measure the distance to the nearest object straight ahead. Two receivers won't help me in any way, other than what I wrote above about determining the direction of the echo from a wider ping.

I've heard several stories about blind people learning to use echolocation to sense their environment - usually by making clicks with their tongue. Far from as detailed as actual vision but it seemed to work quite well for them.