Multi-touch Input Sensor based on Arduino Mega

So I've been working on this project for a while now, as part of my thesis and independently, and work is still continuing. Just as an overview: the project is a multi-touch and multi-modal input sensor that can be used behind traditional LCD panels, using a new method to detect inputs. By placing a large IR sensor array of 128 sensors behind the LCD panel and an IR light source in front of the panel, we are able to augment the display with the ability to sense a variety of objects near or on the surface, including fingertips and hands, and thus enable multi-touch interaction. The inherent nature of the sensors allows us to create a low-cost, high-fidelity image sensor and to take advantage of optical sensing, which also allows other physical items to be detected and thus permits us to develop multi-modal interaction schemas.
The videos below are of a prototype hardware unit that I created, using Rev 17 of the sensor board that I developed and built.

I will be releasing more information about the hardware, its construction, and its upgradeability soon on my blog.

Hi, I would like to know how you did it, because I have a similar project and I've tried, but I cannot get the Arduino programmed and interfaced with the PC.

Basically, I'm collecting data from the IR sensors, filtering the values using a threshold, and then passing them to the PC via serial. On the PC side I reconstruct the black-and-white 12-bit frame and then do the computer vision using the Java Advanced Imaging libraries.
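In rough outline, the Mega-side loop looks something like the sketch below. This is just an illustration, not the actual firmware: the pin mapping, baud rate, frame marker, threshold value, and the selectSensor() helper are all placeholders, and the Mega's internal ADC is only 10-bit (the 12-bit frames presumably come from an external ADC on the sensor board, which isn't shown here).

```cpp
// Illustrative Arduino Mega sketch of the acquisition loop described above.
// Pin assignments, baud rate, threshold and frame marker are hypothetical.
const int ROWS = 8;               // 16x8 grid = 128 IR sensors
const int COLS = 16;
const int THRESHOLD = 200;        // assumed noise floor, would need tuning
const int FIRST_MUX_PIN = 2;      // hypothetical mux address lines on D2..D8

// Hypothetical helper: drive the multiplexer address lines so that sensor
// (row, col) is routed to analog input A0.
void selectSensor(int row, int col) {
  int channel = row * COLS + col; // 0..127 needs 7 address bits
  for (int bit = 0; bit < 7; bit++) {
    digitalWrite(FIRST_MUX_PIN + bit, (channel >> bit) & 1);
  }
  delayMicroseconds(10);          // let the mux output settle
}

void setup() {
  Serial.begin(115200);           // assumed baud rate
  for (int bit = 0; bit < 7; bit++) {
    pinMode(FIRST_MUX_PIN + bit, OUTPUT);
  }
}

void loop() {
  Serial.write(0xFF);             // frame-start marker (sensor high bytes never reach 0xFF)
  for (int row = 0; row < ROWS; row++) {
    for (int col = 0; col < COLS; col++) {
      selectSensor(row, col);
      int value = analogRead(A0);       // the Mega's own ADC gives 10 bits
      if (value < THRESHOLD) value = 0; // threshold filter: drop weak readings
      Serial.write(highByte(value));    // send as two bytes, high byte first
      Serial.write(lowByte(value));
    }
  }
}
```

With this kind of framing, the host simply waits for the marker, reads 128 two-byte values to rebuild the 16x8 frame, and then hands it to the vision code.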

I have updated my blog with the full information about the system: its schematics, board diagrams, and Processing sketches, as well as general information about the prototype module. http://mtaha.wordpress.com

This is a really cool and creative project... and it looks like it actually works really well. Hm... this is on my list of things I want to try as well now... I think I could learn a lot from it...

anyway, thanks for posting this...

So I've been continuing my work on this project... Just a brief update: nothing much has changed on the firmware side of the project; I have optimized some of the communication that happens between the host and the sensor board. Most of the changes I've made are to the tracker itself, where I have used standard bicubic interpolation in OpenCV to scale the raw 16x8 output to 160x80, i.e. by a factor of 10, and have further smoothed the results by applying a Gaussian filter. More information is available on my blog: http://mtaha.wordpress.com
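In OpenCV terms, the scaling and smoothing step is roughly the following; the kernel size and sigma here are just placeholder values, not the ones tuned for the tracker.

```cpp
// Rough sketch of the interpolation/smoothing step: bicubic upscale of the
// raw 16x8 frame to 160x80, then a Gaussian blur. Kernel size and sigma are
// placeholder values.
#include <opencv2/opencv.hpp>

cv::Mat upscaleFrame(const cv::Mat& raw /* 8 rows x 16 cols, e.g. CV_16UC1 */) {
  cv::Mat scaled, smoothed;
  // Scale by 10x in each direction with bicubic interpolation
  // (cv::Size takes width x height, hence 160x80).
  cv::resize(raw, scaled, cv::Size(160, 80), 0, 0, cv::INTER_CUBIC);
  // Smooth out blockiness and interpolation artifacts.
  cv::GaussianBlur(scaled, smoothed, cv::Size(9, 9), 2.0);
  return smoothed;
}

int main() {
  // Dummy 8x16 frame standing in for one frame of sensor data.
  cv::Mat raw = cv::Mat::zeros(8, 16, CV_16UC1);
  cv::Mat out = upscaleFrame(raw);
  return (out.rows == 80 && out.cols == 160) ? 0 : 1;
}
```

The blur after the resize mainly hides the blockiness that bicubic interpolation alone leaves behind at this scale factor.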

Generally speaking, the sensor resolution is quite low, but through interpolation and filtering I am able to generate a relatively rich image.

This is an older video I made public recently; it's basically inverse depth mapping.

It's cool. Hope to get it at a store :slight_smile: