In your last post, you started to lay out a step-by-step plan for your project. Let me flesh that plan out a bit more.
- connect the flex sensors and read their raw values (study the sensor data sheet; see the first sketch after this list)
- convert the sensor output values to a table of gestures. The software will need to somehow recognize the beginning and end of each gesture, e.g. if the input values do not change significantly for longer than 0.5 s, then the hand position is probably the actual sign language gesture (see the second sketch after this list).
- map the gestures to a table of words, i.e. a dictionary (see the third sketch after this list).
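For the first step, here is a minimal read sketch, assuming each flex sensor is wired as a voltage divider into an analog pin of an Arduino-class board; the pin A0 and the sample rate are assumptions, not requirements:

```cpp
// Minimal flex-sensor read: one sensor in a voltage divider on A0 (assumed wiring).
const int FLEX_PIN = A0;   // analog input pin (assumption, adjust to your wiring)

void setup() {
  Serial.begin(9600);      // open the serial port for debugging
}

void loop() {
  int raw = analogRead(FLEX_PIN);   // 0..1023 on a 10-bit ADC
  Serial.println(raw);              // print the raw bend value
  delay(50);                        // ~20 samples per second
}
```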
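For the segmentation step, the "values stable for longer than 0.5 s" idea from the list could look like the sketch below; the sensor count, pins, and tolerance are made-up values you would tune on your actual glove:

```cpp
// Gesture segmentation: report a pose as a gesture candidate once all sensor
// values have stayed within a small tolerance for 0.5 s. Pin list, sensor
// count, and tolerance are assumptions to be tuned on real hardware.
const int NUM_SENSORS = 5;
const int SENSOR_PINS[NUM_SENSORS] = {A0, A1, A2, A3, A4};
const int TOLERANCE = 10;            // max ADC change per sample still counted as stable
const unsigned long HOLD_MS = 500;   // hold time of 0.5 s

int lastValues[NUM_SENSORS];
unsigned long stableSince;
bool reported = false;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_SENSORS; i++) {
    lastValues[i] = analogRead(SENSOR_PINS[i]);
  }
  stableSince = millis();
}

void loop() {
  bool stable = true;
  for (int i = 0; i < NUM_SENSORS; i++) {
    int v = analogRead(SENSOR_PINS[i]);
    if (abs(v - lastValues[i]) > TOLERANCE) {
      stable = false;               // this finger is still moving
    }
    lastValues[i] = v;
  }
  if (!stable) {
    stableSince = millis();         // movement detected: restart the hold timer
    reported = false;
  } else if (!reported && millis() - stableSince >= HOLD_MS) {
    Serial.print("Gesture candidate:");
    for (int i = 0; i < NUM_SENSORS; i++) {
      Serial.print(' ');
      Serial.print(lastValues[i]);
    }
    Serial.println();
    reported = true;                // report each held pose only once
  }
  delay(20);                        // ~50 samples per second
}
```

Note that this compares each sample against the previous one, so a very slow drift would still count as "stable"; good enough for a first experiment.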
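For the dictionary step, a crude nearest-neighbour lookup against a hand-calibrated table is enough to get started. This is my own suggestion, not the only way to do it, and all entries and readings below are invented for illustration:

```cpp
// Dictionary lookup: map a measured gesture (one value per sensor) to a word
// by nearest-neighbour match against a small hand-calibrated table.
const int NUM_SENSORS = 5;

struct Entry {
  int values[NUM_SENSORS];   // reference readings recorded for this gesture
  const char* word;          // the word it translates to
};

const Entry DICTIONARY[] = {
  {{120, 130, 125, 140, 118}, "hello"},
  {{600, 610, 580, 590, 605}, "thanks"},
};
const int DICT_SIZE = sizeof(DICTIONARY) / sizeof(DICTIONARY[0]);

// Return the word whose reference vector has the smallest sum of
// absolute differences to the measured gesture.
const char* lookup(const int gesture[]) {
  long bestDist = -1;
  const char* bestWord = "?";
  for (int i = 0; i < DICT_SIZE; i++) {
    long dist = 0;
    for (int j = 0; j < NUM_SENSORS; j++) {
      long d = (long)gesture[j] - DICTIONARY[i].values[j];
      dist += (d < 0) ? -d : d;
    }
    if (bestDist < 0 || dist < bestDist) {
      bestDist = dist;
      bestWord = DICTIONARY[i].word;
    }
  }
  return bestWord;
}

void setup() {
  Serial.begin(9600);
  int sample[NUM_SENSORS] = {118, 132, 127, 138, 120};  // pretend measurement
  Serial.println(lookup(sample));                        // prints "hello"
}

void loop() {}
```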
Gesture recognition is not trivial. Maybe you will have more success if you transfer the sensor values directly to a more capable processor (smartphone, PC, Raspberry Pi, etc.) and let some type of pattern recognition/machine learning software do the recognition. Google will turn up several open source projects for you.
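If you go that route, the microcontroller's job shrinks to streaming raw readings. A sketch like the following (pins and baud rate are assumptions) sends one CSV line per sample frame over USB serial, which any serial-capable program on the other side can consume:

```cpp
// Serial streaming: instead of recognizing gestures on the microcontroller,
// send raw readings as CSV lines so a PC/Raspberry Pi/phone can do the
// pattern recognition. Pin assignments and baud rate are assumptions.
const int NUM_SENSORS = 5;
const int SENSOR_PINS[NUM_SENSORS] = {A0, A1, A2, A3, A4};

void setup() {
  Serial.begin(115200);
}

void loop() {
  for (int i = 0; i < NUM_SENSORS; i++) {
    Serial.print(analogRead(SENSOR_PINS[i]));
    if (i < NUM_SENSORS - 1) Serial.print(',');
  }
  Serial.println();   // one CSV line per sample frame
  delay(20);          // ~50 frames per second
}
```

On the receiving end, any serial terminal or a small script can log those lines into a training data set for whatever recognition software you pick.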
Judging from your previous posts, you haven't done any project of this complexity so far. Be prepared to work on this for several months, or even years, if you seriously want a working product in the end.
I'm not sure whether you have googled the topic at all. Spend a few hours reading about other people's strategies; you are not the first to attempt this.
Maybe join an existing project.
If you really want to take this on, I would recommend starting with the most difficult part, which is gesture recognition. First choose suitable software, then decide which processor can run it.
Once you have mastered gesture recognition, the rest is all downhill.
Keep us posted on your progress!