Guiding motion with Arduino

Hi everyone. I'm new to Arduino and to this forum and I need some help. I'm in my last year of electronics-ICT and I'm working on a project that should make it possible to guide physical movements with a smart textile suit, using an Arduino controller. I have been working on this with a team of product designers, and there have been some delays because our supervisor (also a designer) wanted us to focus more on the product design than on the technical part. However, I would like to make at least a simple prototype of the system, and there are only six weeks left for that.

The goal of the project is to develop a wearable electronic system built on the Arduino platform that will make it easier to learn physical activities that depend heavily on muscle memory, such as martial arts, dancing or playing musical instruments.

The prototype should consist of a number of actuators, such as LEDs or vibrating elements, on strategic places of the body, which will stimulate the user to move in a certain way. The stimulation should be as intuitive as possible, so that a user doesn't need to learn how to work with the system: people with no prior experience should be able to put it on and immediately make the intended movements. The actuators will be driven by an Arduino microcontroller board. The necessary movements will be written in a language-independent library and passed to the software running on the microcontroller, where they will be converted into signals that drive the right actuators.
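As a rough illustration of that last step, the conversion from a library-level movement description to actuator outputs could start as a simple lookup table. The movement names and pin numbers below are invented for the sake of the example, and the core is kept free of board-specific calls so it can be checked on a PC first:

```cpp
#include <cstring>

// Hypothetical mapping from a symbolic movement name (as the
// language-independent library might emit it) to the Arduino pin
// that drives the corresponding actuator.
struct Mapping {
    const char *movement;
    int pin;
};

const Mapping kMap[] = {
    {"raise_arm",  2},  // actuator on the biceps
    {"bend_wrist", 3},  // actuator on the inner forearm
};

// Resolve a movement name to a pin number; -1 means "unknown movement".
int pinFor(const char *movement) {
    for (const Mapping &m : kMap)
        if (std::strcmp(m.movement, movement) == 0) return m.pin;
    return -1;
}

// On the board itself, the sketch would then do something like:
//   digitalWrite(pinFor("raise_arm"), HIGH); delay(200); ...
```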

Does anyone have experience with something like this? Can you offer some advice on which actuators would be best to use and where I can find the right resources? Perhaps there are some existing applications that do more or less the same thing? Usually I would do most of the research myself before asking questions, but since I'm really limited in time I need to conclude the research as fast as possible and move on to the development of the system.

Interesting project!

Two main questions come to mind:

  1. What are your first thoughts about what the actuators will 'mean' to the wearer? You mention both visual and tactile actuators. It seems like only the hands and forearms are easily visible. How many actuators do you envision?

  2. Do you envision having feedback about the actual body position? That may be more complex. If so, how many sensors? What type/technology?

Thoughts:

  • The motion description 'language' could be precompiled into some small intermediate code (like P-code) to be interpreted by the Arduino.
  • TIMING may be an issue. Once a user starts a sequence of motions, how does he/she stay synchronized?
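A minimal sketch of what such a precompiled intermediate code could look like, with the interpreter core kept free of board-specific calls so it can be tested on a PC first. The opcodes and 2-byte encoding are made up for illustration:

```cpp
#include <cstdint>
#include <cstddef>

// Invented 2-byte instruction format: opcode followed by one operand.
enum Op : uint8_t {
    OP_END   = 0,  // end of program
    OP_PULSE = 1,  // operand = actuator index to pulse
    OP_WAIT  = 2,  // operand = pause in units of 100 ms
};

// Records which actuators were pulsed, so the interpreter can be
// verified off-device; on the Arduino this would call digitalWrite().
struct Trace {
    uint8_t pulses[16];
    size_t  count = 0;
};

void run(const uint8_t *code, Trace &t) {
    for (size_t pc = 0; code[pc] != OP_END; pc += 2) {
        switch (code[pc]) {
            case OP_PULSE:
                if (t.count < 16) t.pulses[t.count++] = code[pc + 1];
                break;
            case OP_WAIT:
                // delay(code[pc + 1] * 100) on the board
                break;
        }
    }
}
```

A program for "pulse actuator 0, wait half a second, pulse actuator 4" would then just be the byte sequence {OP_PULSE, 0, OP_WAIT, 5, OP_PULSE, 4, OP_END}.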

Keep us up to date...

Thanks for the swift reply! Let me answer your second question first: at this time I am not planning any kind of feedback, since I fear I will not have enough time for it. My team wants to include a heart rate sensor and Bluetooth communication with a mobile device on top of the project I defined, and I'm doubtful we will even get to that. Feedback about motion is something I'm interested in, though, and I'm thinking of perhaps doing a PhD on motion capture next year. I think this project would be a great starting point for that, and I'd love to do both this and my future work with Arduino, since it's open source, relatively powerful and affordable on a student budget.

As for your first question: tactile actuators seem the most intuitive to me, but my team members are more inclined towards visual actuators, despite the objection you have brought up (only certain parts of the body are visible). At the moment I am imagining a prototype that would guide the movement of the left arm from shoulder to wrist. I'm thinking of putting eight vibrating elements on the arm: four around the upper arm and four around the lower arm, one on each side. I imagine the starting position of the user would always be the same (because the system initially won't be aware of its position): the arm lowered next to the body, fingertips pointing downwards. The vibrating actuator on the biceps (which is now facing forward) could be made to give a pulse, indicating the user should move their arm forward and upward. Then a pulse could be given at the actuator on the inner side of the lower arm (which is now facing right), to indicate the user should bring their wrist towards the chest.

Timing is indeed an issue, and as I see it now there could perhaps be a series of three short pulses to indicate the end of a movement. However, this is no longer intuitive, since the user would have to learn that three pulses mean they should stop moving, so I will have to find a better way. If I were to use LEDs, they would simply be placed in the same positions as the vibrating elements and follow their patterns.

I've also found some research indicating that stretching the skin is more effective than vibration alone: Penn Engineering | Inventing the Future. It might be something to consider, but I think the motors needed to stretch the skin would be more difficult to implement, bigger than vibrating elements and more power-hungry, making them less suitable for use in a wearable system.

I'm considering speakers as actuators as well, but I think sound is a bit too obtrusive and would annoy both the user and their environment.

As for the motion "language" and synchronization: those are important problems, but I will deal with them after I get the first phase done. Right now I would be very happy to have a prototype with hard-coded motion which could guide a user through two or three moves in succession.
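For that first phase, the hard-coded motion could simply be a table of steps. The actuator numbering (0 to 7 around the arm), the pulse counts and the timings below are assumptions to be tuned on the real garment, not a fixed design:

```cpp
#include <cstdint>
#include <cstddef>

// One hard-coded guidance step: which actuator fires, how many pulses,
// and the gap between pulses. A triple pulse is the tentative "stop" cue.
struct Step {
    uint8_t  actuator;  // assumed numbering: 0-7 around the arm
    uint8_t  pulses;
    uint16_t gapMs;
};

const Step kSequence[] = {
    {0, 1, 0},    // biceps element: raise the arm forward and upward
    {4, 1, 0},    // inner-forearm element: bring the wrist to the chest
    {4, 3, 150},  // triple pulse on the same element: end of the movement
};

// Total pulse count, so the table can be sanity-checked off-device.
// On the board, each step would become digitalWrite()/delay() pairs.
size_t totalPulses(const Step *steps, size_t n) {
    size_t sum = 0;
    for (size_t i = 0; i < n; ++i) sum += steps[i].pulses;
    return sum;
}
```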

Hi,

Just a few thoughts:

  • I don't "see" how a person can see LEDs on 4 surfaces of "anything". I can't see my triceps (opposite my biceps) without a mirror.

  • If you want a user to be able to simply "put on" the garment, I don't see how the "skin stretching" actuator would work.

  • Smallish vibration actuators might be typical cellphone vibrators (available and cheap) or electromagnetic speakers (probably with mechanical contact, not just an acoustic output).

  • Telling the user when to stop moving implies feedback so you KNOW he should stop....

The only way I can see this working is if the suit just detects where the body is now, then is read by the system and projected as a 3D model on a display. The system would then animate the display and provide positive feedback to the user when the motions are mimicked properly.

Even if you COULD see the LEDs, you would have to turn your head to look at them, destroying your ability to perform the proper motions. If you feel a vibration, a shock, a pulse, skin tightening, whatever... you have no metaphor with which to associate it with movement. My bicep twitched! What does that mean?

But if you could see a model of 'you' on a screen, then watch the proper motion and try to mimic it, people would be able to intuitively use that immediately.

Xbox 360 with that 3D controller thing. Heck, there's probably already a game out like this.

What about using a TENS unit? Transcutaneous electrical nerve stimulation - Wikipedia

First of all, sorry for the late response. My team disagreed with my initial proposal and decided motion guidance is not the main objective of the project. The main goal is now to determine whether certain (Tai Chi) exercises have a calming effect on the user by measuring their heart rate while they practice the movements. After much debate, we have decided to incorporate both functions in the prototype: there should be heart rate detection which can be passed to a computer and analyzed after the exercise, AND there should be simple motion guidance, though not as rich as I envisioned it. It seems the work will be split in two parts. I'll research the heart rate detection first, create a new topic for that if necessary, and continue the motion capture discussion here.

Terry King:

  • I don't "see" how a person can see LEDs on 4 surfaces of "anything". I can't see my triceps (opposite my biceps) without a mirror.

True. If possible, I would perhaps like to use peripheral vision instead of direct vision (it doesn't make much sense that a user has to keep an eye on all body parts while doing any kind of martial art). In that case, most of the LEDs should probably be on the lower arms.

  • If you want a user to be able to simply "put on" the garment, I don't see how the "skin stretching" actuator would work.

I'm still researching if this can be made to feel intuitive.

  • Smallish vibration actuators might be typical cellphone vibrators (available and cheap) or electromagnetic speakers (probably with mechanical contact, not just an acoustic output).

Since the system should be incorporated into textile, I was considering the LilyPad Vibe Board: LilyPad Vibe Board - DEV-11008 - SparkFun Electronics. However, it is relatively expensive. This one, Vibration Motor - ROB-08449 - SparkFun Electronics, is cheaper, but operates at 3 V. Is this going to be a problem, or can I just use the 3.3 V output pin on the Arduino board? Could you give an example of the electromagnetic speakers, please?
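One caution on that 3 V question: small vibration motors typically draw more current than an AVR I/O pin can safely source (the ATmega's pins are rated around 40 mA absolute maximum), and the 3.3 V pin is a regulator output, not a switchable pin. The usual approach is to switch the motor through a transistor with a flyback diode, and approximate the lower motor voltage by PWM from the 5 V supply. A sketch of just the duty-cycle arithmetic, testable off-device:

```cpp
#include <cstdint>

// 8-bit PWM duty cycle whose average output approximates a target
// voltage from a given supply (e.g. a 3 V motor on a 5 V Arduino).
// The motor must still be switched through a transistor; this only
// computes the value that would be passed to analogWrite().
uint8_t pwmDutyForVoltage(float targetV, float supplyV) {
    if (targetV >= supplyV) return 255;  // clamp to full-on
    if (targetV <= 0.0f)    return 0;    // clamp to full-off
    return static_cast<uint8_t>(targetV / supplyV * 255.0f + 0.5f);
}

// On the board: analogWrite(motorPin, pwmDutyForVoltage(3.0f, 5.0f));
```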

  • Telling the user when to stop moving implies feedback so you KNOW he should stop....

What do you mean by this? That the user will not be able to simply "put on" the shirt and start working? After some consideration, I think that won't be possible anyway. There will be a learning curve, but it should be as small as possible.

Teslafan:
There are applications like that available, yes. However, additional infrastructure is needed, and that is exactly what I am trying to avoid. I agree that the ability to perform the proper motions is destroyed when you have to look directly at the actuators. I'll research the possibility of using peripheral vision, though I realize this will be difficult, if not impossible. I do believe vibrations could give a good indication of the required motions, though. Perhaps some explanation could be given with the textile, for example "vibration on a body part means you have to move that body part in the direction of the vibration", but I believe it can be made intuitive as well. Just as when you feel an itch, in most cases you automatically try to scratch it. I need to do some more research and testing to find out how this should be done.

whoaski:
The problem with TENS is that the system would act on the nerves of the user and "force" them to move in a certain way. The goal is to guide the user, not to force them into doing anything. Relating it to learning a martial art: I am trying to replace the need to watch a teacher and copy their movements, not to make the user's body move automatically.

Thanks for the feedback everyone! I did not expect such quick responses.

I mentioned the TENS unit idea not to shock them into a stance but as a marker. Going through the motions, the TENS unit could sense muscle contractions and relaxations, kind of like a finger on the pulse of a movement. Then say you do something wrong, like lifting your arm above your shoulders when you were not supposed to: you would receive a little zap under the out-of-place arm. Not much, just enough to feel something in your arm. I thought of it as a way to feel the position of the movement even if you can't see the LEDs. Ever put a 9-volt battery to your tongue? You can feel it, but it doesn't make your tongue move or twitch. And the idea that the sensors (the pads) can be the actuators means your input can be your output as well. I am by no means an expert or doctor or physicist; I'm only sharing where my mind wanders with this problem.

Me neither, I appreciate your feedback. It is an interesting idea to use the sensors as actuators as well.

However, it turns out this project is moving towards motion capture. I've had contact with another student at my school who is doing a project on motion capture with accelerometers and gyroscopes, and I could use his sensors to detect the movements, calculate how far they are from the ideal movement and then give feedback with vibrating elements or TENS. The other student uses just one sensor to control a game. If I put a couple of these sensors on an arm, I can gather information on the angles and accelerations in their internal frame of reference and combine this with information on the distance of the sensors to a base station on the body (as an external frame of reference) to determine the position of the arm. All of the acceleration, orientation and distance information will be sent to a computer, which will calculate the position and how far it is from the ideal position, and send information back to the base station about which actuators should be activated to stimulate the user to move back into the right position.

The two main questions to be answered now are how to determine the distance to the base station, and which actuators to use and how. If the motion capture works, I can use vibrating elements to provide negative feedback: vibrate a certain element when that body part is moving away from the ideal position and stop vibrating when it's back in the right place. This has been done a couple of times and has proved reasonably successful, but always with the need for external hardware (such as a camera) to do the motion detection.
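That negative-feedback rule can be reduced to a small, testable predicate per joint. The 10-degree tolerance below is a placeholder I would expect to tune on real users:

```cpp
#include <cmath>

// Tolerance around the ideal joint angle before feedback kicks in.
// The value is a guess and would need tuning in practice.
const float kToleranceDeg = 10.0f;

// True when the vibrating element for this joint should be active,
// i.e. the measured angle has drifted too far from the ideal one.
bool shouldVibrate(float measuredDeg, float idealDeg) {
    return std::fabs(measuredDeg - idealDeg) > kToleranceDeg;
}

// On the wearable, per joint and per update cycle:
//   digitalWrite(vibePin, shouldVibrate(measured, ideal) ? HIGH : LOW);
```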

I'm sorry I can't give you more concrete questions, it seems that the direction of the project is still not set. I won't post any more updates until I have a clear goal in mind.