This post is about feasibility at a very general level.
I had this idea about bat vision. The projects I've seen mostly use buzzers that sound an alarm when an object is near. My idea would go further. The user would wear IR goggles (with a smartphone inside). A distance sensor would be attached, pointing straight ahead. The measured distance would be rendered as a red spot on the phone screen (one on each half, left and right): the farther the object, the dimmer the spot. A gyroscope and an accelerometer would track the user's head movements, making the spot move accordingly on the screen while new spots appear all the time.
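The "farther means dimmer" mapping could be sketched as a small function. The maximum range and the linear falloff here are my own assumptions, not fixed design choices:

```cpp
#include <algorithm>
#include <cstdint>

// Map a measured distance to a red-channel intensity: the farther, the dimmer.
// max_range_cm and the linear falloff are assumptions of this sketch.
std::uint8_t distanceToBrightness(float distance_cm, float max_range_cm = 400.0f) {
    if (distance_cm <= 0.0f) return 0;  // no echo / invalid reading: draw nothing
    float t = 1.0f - std::min(distance_cm, max_range_cm) / max_range_cm;
    return static_cast<std::uint8_t>(t * 255.0f + 0.5f);
}
```

A perceptually tuned curve (e.g. inverse-square) might feel more natural than linear, but that is a detail to experiment with.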
The distance sensor would feed its data to an Arduino, which would relay it to the phone. I assume the phone has more processing power for the 3D calculations than the Arduino does. The gyroscope and accelerometer would probably also be the phone's built-in sensors, not separate ones attached to the Arduino. The Arduino's only job would be to read the distance and send the data to the phone. Perhaps one or two extra distance sensors would be worth adding.
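The Arduino side could stay this simple. As a minimal sketch, assuming a common HC-SR04 ultrasonic sensor and a plain-text wire format of my own invention ("D:<cm>"), the core logic is just converting the echo pulse width to centimetres and formatting one line per reading:

```cpp
#include <cstdio>
#include <string>

// HC-SR04 echo pulse covers the round trip at roughly the speed of sound,
// which works out to distance_cm ≈ pulse_microseconds / 58.
float pulseToCm(unsigned long echo_us) {
    return echo_us / 58.0f;
}

// Format one reading as the line the Arduino would write out over
// serial/Bluetooth. The "D:<cm>" format is an assumption; any delimited
// text the phone app can parse would do.
std::string encodeReading(float distance_cm) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "D:%.1f\n", distance_cm);
    return std::string(buf);
}
```

On the actual board this would sit in `loop()` around a `pulseIn()` call; the functions above are written as plain C++ so the logic can be tested off-device.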
Even though only one distance sensor would be used, I bet the view would still be 3D. The distance would translate into the dots sitting nearer to or farther from each other on the left and right screen halves. And when the user moves and turns his head, the 3D geometry calculations would place the old dots at new positions on the screen halves.
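Both pieces of that geometry can be sketched in a few lines, here flattened to the horizontal plane for simplicity. The rotation keeps stored dots world-fixed as the head turns; the disparity function makes nearer dots sit farther apart on the two screen halves. The sign convention, the pixel scale `k`, and the near-distance clamp are all assumptions of this sketch:

```cpp
#include <algorithm>
#include <cmath>

struct Dot { float x, z; };  // head-frame position: x = right, z = forward (metres)

// Rotate a dot in the horizontal plane by angle_rad (standard 2D rotation).
// To keep world-fixed dots in place, rotate them by the negated head yaw.
Dot rotateDot(Dot d, float angle_rad) {
    float c = std::cos(angle_rad), s = std::sin(angle_rad);
    return { c * d.x - s * d.z, s * d.x + c * d.z };
}

// Stereo disparity in pixels: nearer dots are drawn farther apart on the
// left and right halves. The scale k and the 0.1 m clamp are arbitrary.
float disparityPx(float z_forward_m, float k = 60.0f) {
    return k / std::max(z_forward_m, 0.1f);
}
```

A dot straight ahead at 1 m, after a quarter-turn of the head, ends up fully to one side; a dot twice as far away gets half the disparity.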
From what I understand, this would resemble what bats actually "see". Bats move their ears and their head; the user of these bat goggles would only move his head. The "image" that bats paint in their heads probably persists for a moment; they don't just "see" one single dot. If my bat goggles only drew one dot and didn't keep it moving around in memory, the user might still (after some training) develop some kind of image memory. But having the phone screen remember the dots for as long as they stay within the visible sector roughly mimics the bat brain's "image" processing.
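That dot-memory idea reduces to keeping a list of dots and pruning the ones that have drifted out of the visible sector. A minimal sketch, where the sector half-angle is my own assumption:

```cpp
#include <cmath>
#include <vector>

struct Dot { float x, z; };  // head-frame position: x = right, z = forward

// Keep only the dots still inside the visible sector; the survivors form
// the persistent "image", loosely like the bat brain's echo memory.
// The sector half-angle passed by the caller is an assumption of this sketch.
std::vector<Dot> pruneDots(const std::vector<Dot>& dots, float half_angle_rad) {
    std::vector<Dot> kept;
    for (const Dot& d : dots) {
        // A dot is visible if it is in front of the user and its bearing
        // (angle off the forward axis) is within the half-angle.
        if (d.z > 0.0f && std::fabs(std::atan2(d.x, d.z)) <= half_angle_rad)
            kept.push_back(d);
    }
    return kept;
}
```

Each frame, the app would rotate the stored dots by the head motion, prune them with something like this, and append the newest reading as a fresh dot.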