Multiple sensors with machine learning for interactive installations

Hello,

We're currently planning several interactive (art) installations. We'd like to use a variety of sensors connected to Arduinos to control live animation, sound generation, etc.

We had this idea that it would be great to have a system we could train to respond to combinations of sensor inputs, so we'd have the flexibility to experiment instead of having to figure out a separate algorithm for each case.
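To make that concrete, here's roughly the kind of thing I have in mind on the host side. This is only a sketch: scikit-learn is just one candidate, and the sensor layout and state labels are made up.

```
# Rough idea: train a classifier to map combined sensor readings
# to installation states, instead of hand-coding a rule per sensor.
# scikit-learn is an assumption; sensors and labels are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# each row is one snapshot of all sensors:
# [distance_cm, light_level, accel_x, accel_y]
X_train = np.array([
    [120.0, 800.0, 0.1, 0.0],  # recorded while "idle"
    [ 30.0, 750.0, 0.2, 0.1],  # recorded while "visitor_close"
    [ 25.0, 200.0, 1.5, 0.9],  # recorded while "visitor_moving"
])
y_train = ["idle", "visitor_close", "visitor_moving"]

clf = RandomForestClassifier(n_estimators=50)
clf.fit(X_train, y_train)

# later, a live snapshot gets mapped to a state that drives
# the animation/sound engine
print(clf.predict([[28.0, 210.0, 1.4, 0.8]]))  # -> ['visitor_moving']
```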

What would be a good combination of tools (like specific Python modules or specialized IDEs) to start developing such a system? Is this approach even feasible? I'd like to think so, because it would probably be the best way to build this kind of installation.

Thanks!

These are some vague requirements. You mentioned machine learning, sound generation and Python, which are all areas where an Arduino really isn't the optimal hardware platform.

Regards, couka

couka:
These are some vague requirements. You mentioned machine learning, sound generation and Python, which are all areas where an Arduino really isn't the optimal hardware platform.

Regards, couka

I know it’s vague, sorry about that. The whole thing is just in the early planning phase.

To clarify, the Arduinos would only be responsible for filtering the sensor input and transmitting it to the host PC, which would handle everything computationally intensive, like machine learning and real-time multimedia generation.
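For the link between the two, something like this is what I'm picturing on the PC side. Again, just a sketch: pyserial is an assumption, and the port name, baud rate, and comma-separated packet format are placeholders.

```
# Host-PC side: read the filtered sensor values the Arduino prints
# over serial, one comma-separated snapshot per line.
# pyserial is assumed; port name, baud rate and the four-value
# packet format are placeholders.
import serial

PORT = "/dev/ttyUSB0"   # hypothetical; e.g. "COM3" on Windows
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        raw = ser.readline().decode("ascii", errors="ignore").strip()
        if not raw:
            continue
        try:
            # the Arduino would send e.g. "28,210,1.4,0.8\n" per snapshot
            features = [float(v) for v in raw.split(",")]
        except ValueError:
            continue  # skip malformed packets
        # here the snapshot would go to the trained model and on to
        # the animation/sound engine
        print(features)
```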