eye blink sensor

My friend has ALS and is locked in, with only eye movements to communicate. After looking at many sites we bought a high-tech Tobii PCEye tracking device, but his breathing mask and glasses interfere too much.

We are both hams and learned Morse code years ago. We both have some years of experience with Arduino and other electronics projects. All I need is a simple sensor to detect blinks...say, left eye for dots and right for dashes. I could interface it to a Morse-to-text converter and display on the computer screen.

We are hurting here...I thought the other stages were bad, but not being able to communicate is the worst. So I'm quite desperate. To anyone who has an idea: I've seen many sensors, strain gauges and IR sensors and even proximity switches on a guy's eyelid...so much to try. Someone must have done this by now. Simple is the key. It must be minimally irritating, of course.

Thank you.

I have some ideas:

  • Have electrodes detect the EMG signals that close the eyelid
  • Mount a sub-miniature camera on the breathing mask and do some processing, probably too much for an Arduino, but there are other boards

You know, that really sounds great, but I just can't imagine inserting electrodes into the muscle fibres around the eye area.

I had an EKG a few years ago and asked to keep the sensors. I wonder if those could pick up the signal. I think nerve signals are in the neighborhood of 10 to 50 Hz. Have you ever done anything like that?

We tried the camera already...it's just too darn fiddly. The key is simple, reliable, durable, repeatable.

Do not get ALS

I'm a great fan of Jason Becker, so I know about ALS. What I was thinking about is using EEG-type adhesive sensors. Regarding the camera, what is fiddly? The camera itself, or the software not working?

I think the reflectivity of the eyelid and eyeball are very different. Would a tiny light source shining on the eye, with a suitable detector, work? Shine at an angle so vision isn't affected. Maybe IR would work too. HTH

There are some low cost usb endoscope camera units available. The flexible endoscope would allow the camera itself to be mounted on a bedhead, chair back, etc. and then bent around the head to view the eye.

I guess separating deliberate winks from reflexive blinks might be a bit of an issue, but I'm sure it can be worked round.

I gather you've investigated other methods, like sip/puff, or tongue movements. I do think the camera or IR emitter/sensors might be the best way to go. If the contrast isn't enough, a little eye shadow would help.

I would go for a web cam and some simple object recognition / object tracking algorithm to detect eye open/close states. Probably you'll want to write your own application using OpenCV or similar, but the OpenCV examples would take you a long way.

Putting the processing on a PC also means that your application has the resources to do whatever else it might want, such as steering cursors around screens, translating morse-to-text-to-speech, interfacing with chat applications etc.

I remember that when another forum member asked for help relating to a disability, it turned out that there was a nationwide organisation of volunteers that provided a lot of tech support for getting projects like this to completion. Unfortunately I don't remember the name of the organisation, or the forum member who was involved with them, but if you feel like telling us which part of the world you are in perhaps somebody will be able to suggest where to go for help.

This also strikes me as the sort of project which people in the Gigs and Collaborations section of the forum might be willing to donate time to.

PeterH: I would go for a web cam and some simple object recognition / object tracking algorithm to detect eye open/close states. Probably you'll want to write your own application using OpenCV or similar, but the OpenCV examples would take you a long way.

If the patient has limited eye movement, that is, if the eye does not move far enough left/right or up/down to bring the iris or pupil into an area covered by something like a webcam, the actual image recognition and its need for large amounts of processing can be bypassed. In that case, the webcam need not even be focused: the processing can just check a small patch of the image for brightness; a sort of poor man's light sensor requiring nothing more than ambient light.

mmcp42: I think the reflectivity of the eyelid and eyeball are very different. Would a tiny light source shining on the eye, with a suitable detector, work? Shine at an angle so vision isn't affected. Maybe IR would work too. HTH

This is what I've decided to try. I bought an Ambi-light sensor from Modern Device. I will fashion a small cone to block stray light, and at least I can try it on myself first without causing too much trouble for him. Yes, IR might work, but I like the optical better...like you say, there should be a big difference in reflectivity between the shiny cornea and the lid skin.

Thanks

PeterH: I would go for a web cam and some simple object recognition / object tracking algorithm to detect eye open/close states. Probably you'll want to write your own application using OpenCV or similar, but the OpenCV examples would take you a long way.

Putting the processing on a PC also means that your application has the resources to do whatever else it might want, such as steering cursors around screens, translating morse-to-text-to-speech, interfacing with chat applications etc.

I remember that when another forum member asked for help relating to a disability, it turned out that there was a nationwide organisation of volunteers that provided a lot of tech support for getting projects like this to completion. Unfortunately I don't remember the name of the organisation, or the forum member who was involved with them, but if you feel like telling us which part of the world you are in perhaps somebody will be able to suggest where to go for help.

This also strikes me as the sort of project which people in the Gigs and Collaborations section of the forum might be willing to donate time to.

I don't want to bleed here in front of all but I can't stand seeing him hurt like this. I would give anything...I do have $$ to spend so no problem. If you could please help me find support I would be grateful. Days turn into weeks and I can tell you, here in NZ ... he has certain options and I know he won't be with us long if he can't communicate. I never saw tears until this stage.

I have been all over the net...we dropped 4 grand on a Tobii PCEye tracker...just not practical for him. He is nearly prone, the angle to the screen is not right, and there are other reasons why it won't work. His vision is just not what it used to be, so they don't want to try any more cams. Plus I am pretty much a hack, so simple has a chance of working.

Thanks to all of you for your suggestions. I'm going to try a reflective light sensor. The Ambi-sensor is perfect for the JeeNode. I'll set it up, try it on myself, and post the result.

But really, if there is a group...I sense that we could give him years, versus the will to get through Christmas to say goodbye. I hate to be so gloomy, but it is what it is.

ALS sucks, if you studied for a lifetime you couldn't come up with a better slow torture.

lar3ry:

PeterH: I would go for a web cam and some simple object recognition / object tracking algorithm to detect eye open/close states. Probably you'll want to write your own application using OpenCV or similar, but the OpenCV examples would take you a long way.

If the patient has limited eye movement, that is, if the eye does not move far enough left/right or up/down to bring the iris or pupil into an area covered by something like a webcam, the amount of actual image recognition and resultant need for large amounts of processing can be bypassed. If that's the case, the webcam need not even be focused, and the processing can just check a small amount of the image for brightness; sort of a poor-man's light sensor not requiring anything more than ambient light.

He has full eye movement. Can move his head somewhat with effort...the least effort is a blink.

I tried using the Ambi-light meter today, but it doesn't look good. Just when I think I'm detecting a blink, a cloud goes over and the whole baseline shifts 50 or more units. I do think I can see a change of some 20 or more units while shining a small green LED at my eye...I'm doing this at the side of my eye. I want to set it up so he can fix his gaze on the screen as he blinks out a message.

Look, he has tonnes of web cams and stuff like that. I'll have a pick through and see if I can come up with something approaching your suggestion. Thank you.

Perhaps we're thinking the wrong way around. Maybe the idea would be to place an LED on the eyelid itself. There are some VERY small LEDs (think SMT, surface mount), and some very small wires. Could you use a little tape to place the LED on the eyelid, and check its position with an optical sensor? I know it sounds a little invasive, but the LEDs I mentioned really are very small and very light. It would be shining outward and may reflect off his glasses, but perhaps it could be positioned pointing as far to the side as possible while still producing enough motion to sense.

The other thought I had was that the LED could be moved by something like a thin, somewhat stiff piece of plastic in the general shape of a brush bristle, again, either attached to the eyelid, or anchored nearby and only pressing against the lid.

I hope I'm not sounding too 'out there' to you, but I hear the desperation in your postings, and it sounds like you are running out of ideas.

A web cam really feels like the best way to go, then. I don't have any connection with the people behind this video, but it demonstrates the sort of thing that I had in mind. I think it ought to be feasible to recreate that with OpenCV or similar, and I wouldn't be surprised if there are people who have already done so and would be willing to share.

https://www.youtube.com/watch?v=eBtpKAja-m0

Also see https://www.youtube.com/watch?v=V1FWZSnyPCY

YouTube shows several other links for similar projects.

ETA: This might be the starting point for a working solution:

http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/

People who do motion capture for animation often put markers on a person's face to aid in capturing the position of the individual's expression. I know they have used retro-reflective adhesive dots, but those will probably not be viable for the eyelid. A possible alternative would be a small amount of clown makeup. It might be colored black, or perhaps white, whatever helps make the blink more detectable. Interference from ambient light can be reduced by using an IR light source and sensor.

Perhaps the ambient light in the room can be modified: it might simply be turned off, or moved. Or perhaps make a simple hood from a cardboard box that partially shields the face from ambient light.

Something that can actually detect where the eye is looking is certainly a superior option, if it can be made to work. As for the Tobii PCEye, perhaps a fresh look at it is viable. I assume you have already asked the manufacturer for advice. Can the orientation of the sensor be changed, or the breathing assistance device modified? Is reflection from the face mask a problem? Then paint it, or cover it with black cloth.

If reflection from the lenses gets in the way, perhaps it can be improved: use a polarizing filter on the camera to reduce ambient reflections. (If you rotate a polarizer you can reduce many reflections, since reflected light tends to be polarized.)

There are many ways to keep stray light sources from being detected. Another method is to use a colored light source, perhaps even coming from an LED on the eyeglass frame, with a corresponding color filter placed over the camera. That color can be visible, but ideally it would be infrared, used with an infrared camera. Baby monitors often have IR cameras with IR LED light sources. Even a cheap web cam can be converted to IR, since the image sensor is already infrared sensitive: it has an IR-blocking filter that can be removed, and an IR-pass filter over the lens can then block the visible colors.

-Joe Dunfee

First, thanks so very much. I really appreciate your thoughtful suggestions. We made a list and are looking at what we think is doable for us. One breakthrough is that we can put a reflector or even an LED at the side of his eye and get enough movement to detect a change. When he blinks (we learned that he can only blink both eyes at the same time now) there are several millimetres of movement between the upper cheek muscle and the temple area. This is huge, because we can mount the supporting bits on his glasses frame and minimize stray light between the dot and the detector.

I think we are going to get this knocked now. We can use something like the Button example as a start for data input. Also, I just bought MegunoLink, which has got to be the best deal going for 20 US dollars. I had button on/off data logging/graphing going in no time. It is very much like working in the Arduino IDE.

Now one idea we had was to use Texting type protocols where he blinks to index through sets of letters:

blink → ABC
blink → DEF
pause (DEF group is locked)
blink → D
blink → E
pause → E is selected

and so on.

Question: should this be done in an Arduino sketch, or on the PC with MegunoLink, BASIC, or some other PC-based language? Where would a proper programmer do the grunt work of selecting letters?

Thanks again to all. Hope is everything

Oh right, the Tobii eye thing: he needs to have his glasses on to see the screen. Combined with the bad angle and the mask, it's too much, too fiddly. The product is good; a normal person sitting upright in front of a screen has no problem. But we have too many confounding elements, and in the end we just got exhausted.

Guys, ALS, let me tell you, it ain't nothing like Stephen Hawking (a great man, yes) sitting in a wheelchair with a talking computer. Everything, including breathing, is a battle. Imagine not being able to swallow, having no control of your saliva, AND having a lungful of air forced down your nose and out your mouth 17 times a minute. Sorry, but I need you to understand the absolute need for simplicity, because he is against the wall now. If we don't get this going for him soon, I just know what will happen. Locked-in vs opt-out. Everyone needs to communicate. All the other stuff is just suffering...but locked-in is intolerable.

OK, enough. I'm sorry, yet I don't care about pride now...please stay with me. It's the program now, like a phone SMS texting system. I think I can write it, but perhaps something is already out there? Any thoughts could be so helpful. Of course we will share/work with others in need too.

So glad to hear you found a way to detect the blinks!

Your question... it partly depends on what you are most comfortable writing in. The Arduino is certainly capable of doing it, but then so are C#, C++, VB.Net, etc. I can't speak for MegunoLink, as I have not used it, but at first glance it may well do the job.

Of course there's nothing stopping you from doing part of the job one place, and the rest elsewhere. I look at it this way. The GUI itself has to be on the PC, and the blink detection has to be on the Arduino, so the real question is "Where is the demarcation?".

Do you only send in detected blinks, and do all the processing on the PC, or do you do some of the processing (say timing between blinks to signify pauses or no pauses), or do you do the entire selection in the Arduino and send the data in as characters? A lot of the answers are heavily influenced by where your skills lie; where you are most comfortable doing each part.

I'm not surprised that he can only blink both eyes at once. When you first talked about using Morse code, I tried to blink out SOS (it's the only sequence I'm familiar enough with to do quickly), and found it very difficult using left for dots and right for dashes, or even using one eye for both dots and dashes (short/long blinks). I found it quite a lot easier to use both eyes with short/long blinks.

If your friend has sufficient control over the length of his blinks, Morse Code might still be the way to go. It depends on how effectively he can send using either scheme.

Good luck, and don't hesitate to ask more questions.

Edit: Just thought of something else. What is the fastest rate (approximately) he can comfortably and reliably blink? And what would you say the length of a short blink is, and the length of a long blink, keeping in mind comfort, strain, stress, fatigue, and the ability to reliably differentiate between the two? The reason I ask is that you should really have some relatively easy way to 'back up' if you are using the texting scheme. Think one long blink to back up one level, one longer one to start over (or two long ones), or perhaps 4 or 5 rapid short blinks for restarting a selection.

OK, I just grabbed Nick Gammon's "Debounce without delay" code and did some mods on it: changed some variable names, added some variables, added some Serial print stuff, and came up with something that might help you get started, and if nothing else, will give you a feel for blink lengths and time between blinks.

/*
  Most of this code was cribbed directly from
  from Nick Gammon's "Debounce without delay" code
  on his switch tutorial at
  http://www.gammon.com.au/forum/?id=11955
*/

const byte blinkPin = 12;              // change this to fit your circuit
byte oldBlinkState = HIGH;         // assume eye open (or supply init/calibrate routine)
const unsigned long debounceTime = 10;  // milliseconds
unsigned long blinkStartAt;        // when the blink started
unsigned long oldBlinkStartAt;  // when the previous blink started
unsigned long unBlinkStartAt;   // when blink ended
unsigned long tweenTime;          // time since last blink
unsigned long blinkTime;          
unsigned long blinkChange;

void setup ()
  {
  Serial.begin (115200);
  pinMode (blinkPin, INPUT_PULLUP);
  }  // end of setup

void loop ()
  {
  byte blinkState = digitalRead (blinkPin);
  
  if (blinkState != oldBlinkState)
    {
    if (millis () - blinkChange >= debounceTime)
       {
       blinkChange = millis ();  // when blink state changed 
       oldBlinkState =  blinkState;  // remember for next time 
       if (blinkState == LOW) {   // with INPUT_PULLUP, LOW = blinking; use HIGH if your sensor reads HIGH when blinking
            blinkStartAt = blinkChange;
            tweenTime = blinkChange - oldBlinkStartAt;
            oldBlinkStartAt = blinkStartAt;
            Serial.print ("Blink at: ");
            Serial.print (blinkStartAt);
            Serial.print ("   Time since last blink: ");
            Serial.println (tweenTime) ;
          }  // end if blinkState is LOW
       else  {
          unBlinkStartAt = blinkChange;
          Serial.print ("UnBlink at: ");
          Serial.print (unBlinkStartAt);
          Serial.print ("    Blink length: ");
          Serial.println (unBlinkStartAt - blinkStartAt);
          }  // end if blinkState is HIGH
           
       }  // end if debounce time up
        
    }  // end of state change
     
  // other code here ...
   
  }  // end of loop

I was also thinking about what could be sent to the PC, to make it simple and easy to write. If we do a little pre-processing, we can just tell the PC program single characters, something like:

  • S - short blink (choose)
  • L - long blink (this and S would do for Morse code, or it could be a signal for U, R, D, or others)
  • P - pause
  • U - up (back to previous selection)
  • R - restart from top (group selection)
  • D - delete previous character

Procyan: Question: should this be done in an Arduino sketch, or on the PC with MegunoLink, BASIC, or some other PC-based language? Where would a proper programmer do the grunt work of selecting letters?

Since you have a fairly capable chip in the Arduino I'd do at least the timing critical stuff on the Arduino. It would be frustrating to have even slight delays which could occur because your process gets interrupted by some other process on the PC.

Since you are using a PC for the main user interface, I would put all the logic there and just use the Arduino for the real-time blink detection. In other words, the Arduino would send the PC events like "the eye was closed for 150 milliseconds", and it is up to the application on the PC to work out what that means in terms of letter selection etc.