Context Communicator - 5 Arduinos & lots of LEDs

Hey,

I've got a project you may find interesting: a remote visual context communicator. Hold on, it's an abstract concept :).

A short description of the concept:
Non-face-to-face communication of social and emotional experiences between people usually happens through the phone or other media like e-mail, IM (instant messaging) or webcam (e.g. Skype). The context in which those experiences take place plays an important role. Neither the technology nor our way of describing things enables us to communicate this context in a way that can be "experienced" by the other person. There are still a few layers of formulation and interpretation in between: you can only imagine.
This project focuses on the design of a system that can communicate the real-time context of a remote user, so that the receiving person is able to "feel" as if he/she is there, without the translation steps that describing an experience requires.
The emphasis is on the visual element of experience, and thus on imaging technology. The final concept is a modular system of connectable triangles that can be mounted on the wall and project a real-time abstract display of a remote visual context.

Videos and more information on the project: click
More information on the tech: click

Some photos:

Different visual contexts:

Some tech pictures:
Every two triangles share one Arduino, a home-etched circuit board with four TLC5940s, and 18 groups of 2 RGB LEDs (18 groups × 3 colour channels = 54 of the 64 PWM channels the four TLC5940s provide).

The software (Flash)

Okay, you piqued my interest. That probably means I've misunderstood your whole project. This misunderstanding, of course, will result in me asking stupid questions and you rolling your eyes in disgust.

Aren't emoticons an attempt to send some of that context information? And if no context information is shared, doesn't the person on each end create (in his or her mind) a "picture" to fill in the missing information, based on what he or she believes is happening? Is the real context information any more accurate or relevant than the imagined one? And consider two people who don't speak the same language trying to communicate. They can pass contextual information through body language, etc., without actually understanding each other's words.

Or am I missing the whole point?

I think you've got a different interpretation of visual context.

The idea is that when you're on the telephone with somebody and you describe your visual experience at that moment, you have to translate that experience into words, e.g. "Nice yellow beach with the sun in the blue sky" etc. The other person then has to build an image in his head from those words. There is a level of translation, and therefore some richness gets lost. In this project I've tried to get rid of this translation step (the other person is wearing a camera).

The system is not meant as a direct-focus communication tool. The receiver should be in contact with the system, and thus with the remote person, throughout the day. With a phone, when you pick up there's a 100% focused connection, and when you hang up the focus is gone. This system is meant to convey information more through the periphery of your attention.

I haven't yet been able to run a longer test (something like three months) to see what value people can really find in a system like this, but some interesting things came up. One of them is the interpretation of the image. It's an abstraction of the visual context of the other person. If you know what situation the other person may be in at a certain moment, it's easier to make sense of the signal. It may even be possible to learn patterns (see videos on the website).

The value people could find in this kind of system is generally described as connectedness and creating empathy, for example when somebody is away for a longer period of time.
Note that it's an exploratory project in which issues like privacy have been put aside for now.

Nice, love it!!!
Is there any way you could post a schematic for a smaller version?

See, I told you I probably missed the whole point.

Cybot: A smaller version wouldn't do you any good, since you'd still need the software and a wireless link to a camera to make it work.
If you want to make a stand-alone version as a mood-light kind of thing, you can just take any TLC5940 schematic.
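To make that concrete, a minimal stand-alone fade could look something like the sketch below. It assumes the widely used Arduino Tlc5940 library and its default pin wiring; it's an illustration of the idea, not this project's firmware.

```cpp
// Minimal stand-alone "mood light" sketch (illustration only, assuming the
// common Arduino Tlc5940 library with its default wiring).
#include "Tlc5940.h"

void setup() {
  Tlc.init(0);  // initialise the TLC5940(s) with all channels off
}

void loop() {
  // Fade every channel up and then back down through the 12-bit range
  for (int value = 0; value < 4096; value += 32) {
    Tlc.setAll(value);
    Tlc.update();  // latch the new values into the chip
    delay(10);
  }
  for (int value = 4095; value >= 0; value -= 32) {
    Tlc.setAll(value);
    Tlc.update();
    delay(10);
  }
}
```

From there, a mood light is just a matter of choosing which channels to set for each colour instead of fading them all together.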

Chilinski: No problem :slight_smile: I've had more people who didn't fully understand the project at first. I must say that for me there is much more to explore within this area. People should not see this as a ready-for-market prototype; it's just a small part of a much bigger iterative research process.

I meant a version with smaller sections, i.e. making them thicker to hold more, but taking up less room on a bedside table or something...

Hi there, I'm very impressed with your work. I'm new to Arduino and the TLC5940, but I'm looking into using them to create an RGB lighting system for my home. The main problem is that I have no idea how to make a software interface; I know what I want, I just have no idea how to do it. Would you be able to share some tips on where to start? Ideally I'd like to use Flash like you have done.

If you want to make an interface between Flash and the Arduino, you can use serproxy. Flash is not able to write to a serial port directly, so you need a proxy that can relay data from the XMLSocket to the serial port.
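For reference, a serproxy.cfg for this kind of setup looks roughly like the sketch below. The device name and TCP port are placeholders, and key names can differ between serproxy builds, so check the sample config that ships with it.

```
# Rough serproxy.cfg sketch (placeholder device/port, check your build's sample)
comm_baudrate=9600
comm_databits=8
comm_stopbits=1
comm_parity=none
timeout=300

# Serial device 1 relayed to TCP port 5331; Flash connects its XMLSocket here
serial_device1=COM3
net_port1=5331
```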
I used a very simple protocol where I send a header byte (255) followed by 64 brightness bytes (one Arduino per two triangles).
Once the full frame has been received, the Arduino writes the 64 values to the TLCs.
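On the Arduino side, that receive-and-latch loop boils down to something like the sketch below. It assumes the header is a single byte with value 255 and again uses the common Tlc5940 library; it's an illustration of the protocol as described, not my exact code.

```cpp
// Illustration of the "255 + 64 bytes" protocol described above,
// assuming 4 TLC5940s = 64 channels and the common Tlc5940 library.
#include "Tlc5940.h"

const int NUM_CHANNELS = 64;  // 4 TLC5940s x 16 channels each
byte frame[NUM_CHANNELS];

void setup() {
  Tlc.init(0);         // all channels off
  Serial.begin(9600);  // must match the serproxy baud rate
}

void loop() {
  // Wait for the header byte that marks the start of a frame
  if (Serial.available() > 0 && Serial.read() == 255) {
    // Collect the 64 brightness bytes that follow the header
    for (int i = 0; i < NUM_CHANNELS; i++) {
      while (Serial.available() == 0) {
        ;  // busy-wait for the next byte
      }
      frame[i] = Serial.read();
    }
    // Scale 8-bit values (0-255) to the TLC's 12-bit range (0-4095) and latch
    for (int i = 0; i < NUM_CHANNELS; i++) {
      Tlc.set(i, frame[i] << 4);
    }
    Tlc.update();
  }
}
```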