Hi. I’m not sure I’m using the right terms (like “feature extraction” and “edge”), or whether I’m asking too much in one topic. What is feature extraction called when it produces a single edge whose length never changes, and the edge’s two endpoints come from two wearable devices worn by one person? Is it still the same edge while it moves? I want the length to stay constant, but without a library I’m worried it will change whenever one wearable’s position shifts (because I twist the body part it’s on, or because it slides when my clothing is too loose). I’d like code to keep the length the same.
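Not an answer from a specific library, just a minimal sketch of one way to enforce the constant length in code, assuming each wearable reports a 3-D position estimate. The fixed length value and the projection strategy (keep the measured midpoint and direction, rescale the edge to the target length) are my assumptions, not anything standard:

```python
import numpy as np

FIXED_LENGTH = 0.40  # hypothetical edge length in metres


def enforce_edge_length(p_a, p_b, length=FIXED_LENGTH):
    """Project two measured sensor positions onto a fixed-length edge.

    Keeps the midpoint and direction of the measured edge, but rescales
    it so the distance between the endpoints is exactly `length`.
    """
    p_a = np.asarray(p_a, dtype=float)
    p_b = np.asarray(p_b, dtype=float)
    mid = (p_a + p_b) / 2.0
    direction = p_b - p_a
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        raise ValueError("coincident sensor positions: direction undefined")
    unit = direction / norm
    return mid - unit * (length / 2.0), mid + unit * (length / 2.0)
```

So even if one wearable slides, the corrected endpoints always sit exactly `FIXED_LENGTH` apart along the last measured direction; whether that correction is acceptable depends on how noisy your position estimates are.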
I’d also like a second edge, running from one of those two wearables to a fixed geographical location that I identify by typing numbers (at least one distance and/or angle, or whatever representation would be best).
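As a sketch of what “typing numbers” for the fixed location could mean: one simple convention is a distance and a bearing relative to some known origin on the stage. The origin, the bearing convention (degrees clockwise from north), and the 2-D simplification are all assumptions for illustration:

```python
import math


def anchor_from_range_bearing(origin_xy, distance_m, bearing_deg):
    """Convert a typed distance and bearing into a fixed 2-D anchor point.

    `origin_xy` is a known reference point; the bearing is measured in
    degrees clockwise from north (a hypothetical convention).
    """
    theta = math.radians(bearing_deg)
    x = origin_xy[0] + distance_m * math.sin(theta)
    y = origin_xy[1] + distance_m * math.cos(theta)
    return (x, y)
```

Once converted, the anchor is just a constant point, and the second edge is the vector from the chosen wearable’s current position to that point.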
Those two edges would share a common vertex at one of the wearables, and I’d like to use them together (at different times, as they move) to form angles. For every angle, the shared vertex always comes from that same wearable. I only want the angle values; no screen and no virtual reality headset, so the vertices and edges would never be displayed. Then, among other things, the differences between successive angle sizes would control how frequently a light flashes.
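The angle at the shared vertex is the standard angle between two vectors (via the dot product), and one possible mapping from angle change to flash rate is a linear one. A minimal sketch, where the minimum/maximum flash rates and the linear mapping are my assumptions:

```python
import numpy as np


def angle_at_shared_vertex(vertex, end_a, end_b):
    """Angle in radians between edges vertex->end_a and vertex->end_b."""
    u = np.asarray(end_a, dtype=float) - np.asarray(vertex, dtype=float)
    v = np.asarray(end_b, dtype=float) - np.asarray(vertex, dtype=float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against rounding error pushing cos_t outside [-1, 1].
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))


def flash_hz(prev_angle, curr_angle, min_hz=0.5, max_hz=10.0):
    """Map the change in angle (0..pi radians) linearly onto a flash rate.

    The rate limits and the linear mapping are illustrative assumptions.
    """
    delta = abs(curr_angle - prev_angle)
    frac = min(delta / np.pi, 1.0)
    return min_hz + frac * (max_hz - min_hz)
```

Nothing here needs rendering: you would feed the latest positions in on each update and send the resulting frequency to whatever drives the light.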
Are there libraries for doing all of that? What would be the best technology to use? Is it descriptive enough to say the wearables stay on a large indoor stage that outside signals can’t reach (the building has no windows and is surrounded by taller buildings)? Instead of wearable electronics, would it be better to use something passive that reflects signals? What is all of the above called? Or is there a better approach? Thank you.