Complete Gesture Control - Possible?

Hello everyone.

I recently had an enlightening meeting with a colleague about deaf people: how secluded their experiences are and how many difficulties they face just to be able to communicate. This is especially bad in my country, Pakistan, where out of 2.5 million deaf people, only 1% have the resources and wherewithal to pursue education.

I want to help by creating a glove with sensors that detect which gesture the wearer is making. These gestures (based on Sign Language) would then be processed for their meaning and sent to a speaker, which would output the corresponding word in either the local language or English.

For this project, the gesture sensing would need to be applied to all fingers and the palm as well. I also plan to combine these gestures with facial recognition software to allow a more complete transfer of thought. So in essence: you wear the glove, you look toward the camera and you sign into it. The output would be sentences in a language the hearing can understand.

From my understanding, all these gestures would have to be relative to something stationary, or in a virtual space. When speaking in Sign Language, the entire arm down from the shoulder is used for communication, which may imply that these stationary or origin sensors would have to be mounted on the shoulder or the back (if all the computing is in a backpack).

Is this possible to compute on an Arduino, and what sensors would I require per glove?

Thank you for reading.

(deleted)

This is not an assignment and I don't know about the Indian people you're talking about lol.

It's a personal project; it's not for my university or for a client or anything, if that helps at all.

I want to know more about the type of sensors needed (accelerometers or bending/strain sensors), the alternatives, the amount of computing power needed to perform this, etc.

I am not very experienced with Arduino; the most I've done is a DIY CNC machine, for which I found the code online.

Hope this helps you and the future commenters.

The other posters with similar projects were not able to answer the question of how a glove could record the movement of the arm and similar operations. Have you actually studied the sign language gestures for your language? And which version of English are you contemplating? Sign language is not the same the world over.

Paul

Yes I have studied Sign Language. The one I’m familiar with and would be using is PSL (Pakistani Sign Language).

The hardware for this project seems very doable from my limited perspective. An alternative to accelerometers would be to use individual flex sensors at all joints in the finger, including the knuckle (the DIP, PIP and MCP joints). This should allow for an accurate representation of how the finger is, well, flexed, and the outputs would be resistances, which allows for easier computing.

However, this method would require 3 flex sensors for each finger, and because of the way the thumb can move, we'll need multiple sensors at the base of the thumb to allow for accurate motion sensing.

Hope this clarifies what I have in my mind.

Please do ask if anything is unclear, I’ll do my best to clarify. Thank you.

Indeed, there have been several questions over the last few months about recording sign language through some kind of bionic glove. None seem to have progressed beyond the conceptualisation stage, though. You're the first to come with facial recognition (that's absolutely out of scope for Arduinos) and the idea to use the whole arm, not just the glove.

The best an Arduino could do is read flex sensors and accelerometers and pass that data on to a (much) more powerful computing platform, which takes care of the rest.

Ah, I see. So it is possible to do using flex sensors and accelerometers. As for the computing, perhaps using a Raspberry Pi would be more reasonable.

So for a final verdict, it really is possible to create a bionic glove using just flex sensors and accelerometers.

Any other helpful tips I should be aware of when embarking on this? My current goal is to create just the bionic glove, without adding the arm for relative motion. The glove alone would allow deaf people to finger-sign, and it would also be less challenging than doing the complete thing at once.

As always, thank you for your replies.

One of the other unanswered questions: How to identify the beginning of a gesture and the ending of the gesture.

Paul

Paul_KD7HB:
One of the other unanswered questions: How to identify the beginning of a gesture and the ending of the gesture.

Paul

^ This.
Not only that: I would imagine you would need some sort of huge 'look-up table' to be able to check the 'gesture' against whatever is supposed to be output.

Paul_KD7HB:
One of the other unanswered questions: How to identify the beginning of a gesture and the ending of the gesture.
Paul

Feeding the raw position data to a PC should allow a neural network to be trained to recognize continuous gestures. Not something that could be done on an Arduino, but a possible solution.
Perhaps this paper, "Flex Sensor Based Hand Glove for Deaf and Mute People", will have some useful data.

Try Google: 'Arduino flex sensor glove' for more examples.

Syed_Saad:
I am not very experienced in Arduino

This is a really, really advanced project for a beginner. Not to discourage you, but you might shelve this project for a little while and learn a bit more about the platform before you end up just frustrating yourself. You don't learn to swim by trying to cross the English Channel on your first try.

It should also be noted that this idea has already been done: sign language gloves exist, and I've seen them on the news. They are probably expensive, but even then they will be cheaper than building your own.

Thank you all for the replies. I'll answer each query below.

"How to identify the beginning of a gesture and the ending of the gesture."

I imagine the glove to be on standby until the user activates it. When the glove is active, it will read the position of the hand (multiple times a second) and relate that to a library of already established gestures. Certain gestures will be simple, requiring only one motion. For gestures requiring more than one motion, the Arduino will be programmed with nested checks where the first motion triggers the search for the next motion, and so on. When the entire motion is done, the glove can be brought to a specific pose which signifies the end of the signing.

"I would imagine you would need some sort of huge 'look up table' to be able to check the 'gesture' against whatever is supposed to be out put."

This is very true. A huge library of words with their related gestures would be needed before this glove could be used for basic communication. I am a realist, so I'll be content if the original prototype can recognize 10-20 words and string them along into a sentence with reasonable accuracy. The gap could then be closed by deploying a few of these devices, all uploading individual gestures into a shared data bank.

"Feeding the raw position data to a PC should allow a neural network to be trained to recognize continuous gestures. Not something that could be done on an Arduino, but a possible solution."

First of all, thank you for the research paper you provided. It gave some much needed insight into the practical application of a flex glove.
As for your suggestion of a neural network, I agree with you 100%. A neural network that trains itself on how people actually sign, to increase accuracy, will be crucial (instead of just allowing a static ±1000 ohm tolerance on a resistance value). Although this would be something for later.

"This is a really really advanced project for a beginner. Not to discourage you, but you might shelve this project for a little while and learn a bit more about the platform before you end up just frustrating yourself"

Thank you for the advice, but I won't be doing this project alone. I have friends and colleagues to help troubleshoot if needed. And I am comfortable with coding this, so it shouldn't be as much of a leap as your swimming analogy :wink:

"These sign language gloves exist already. I've seen them on the news already. They are probably expensive, but even at that will be cheaper than building your own. "

Well, one of the few advantages of living in my country is that cheap labour makes for extremely cheap prices. A glove I make could easily cost half of what's on the market. And the journey itself would be enlightening as to the improvements and hurdles we'd need to cross. Price will be a major factor, since the deaf community in Pakistan is poorer and less literate than the hearing population.

Thank you again for all your replies. I'll be doing more research into the inner workings and the coding.

Cost is not in the making of the glove.

Cost is in the enormous amount of labour and computing power that goes into the neural network that can recognise the gestures (think Siri + Alexa on steroids), especially if you want to make it self-learning. That's highly advanced technology, and it's going to cost you no matter where you are.

The neural network is something I'll be thinking of adding once the glove reaches a certain stage. Honestly, I think it may be easier to hardcode all the gestures. I don't know yet; I'll cross that bridge when we come to it.

Recognising the gestures is likely not something you can hardcode, as it's going to be a fine interplay between the states of a number of sensors. After all, it's the full picture that determines what the gesture is.

But anyway, a good first step would be to actually get a glove working and get reproducible readings from its sensors. That's going to be pretty hard. Making the glove durable is likely going to be a whole different challenge on top of that.