Hi to all you wonderful peeps out there! I was wondering if I could get some advice or solutions for my idea. Here's the breakdown:
Initially, I want to see if it's possible to have people send audio files from their phones (via Wi-Fi or Bluetooth) to an Arduino + Ethernet shield or an ESP32 with SD card access. Capacitive touch sensors would then trigger the stored audio files, and as the SD card updates/syncs with new sounds uploaded by people, those sounds would be automatically reallocated/redistributed to the touchpoints!
So far the parts I'm working with/have access to are:
-Arduino Mega 2560, UNO, NANO
-2x Adafruit 12-key capacitive touch sensor breakout
As I've been researching, I've found info that brings me close to my goal, but I'm still missing pieces of the puzzle. For example:
With the ESP32, I know you can do this: Tech Note 086 - Uploading Files to an ESP32/ESP8266 - YouTube. For me, the web interface for uploading is ace and exactly what I want! The remaining problems are integrating the capacitive touch element (I know the ESP32 has built-in touch pins, but there aren't enough for the number I need, hence the Adafruit touch sensors, if they can work here) and automatically assigning/reassigning the audio files to the touchpoints as the SD card syncs. The VS1053 could maybe handle the sound output?
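To make the "reassignment as the SD card syncs" part concrete, here's a minimal sketch of the mapping logic in plain C++ (hardware and SD calls left out so it's easy to reason about). It assumes uploads all land as files in one folder, that you rescan that folder after each sync, and that pads are simply assigned in sorted filename order — all of that is my assumption, not something from the tutorial:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Rebuild the pad -> file mapping whenever the SD contents change.
// `files` would come from listing the upload folder (e.g. via SdFat);
// pads beyond the number of available files are left unassigned ("").
std::vector<std::string> assignPads(std::vector<std::string> files,
                                    std::size_t numPads = 12) {
    std::sort(files.begin(), files.end());       // deterministic ordering
    std::vector<std::string> pads(numPads, "");  // one slot per touchpoint
    for (std::size_t i = 0; i < numPads && i < files.size(); ++i)
        pads[i] = files[i];
    return pads;
}
```

On touch, pad `n` would then just play `pads[n]` through the VS1053; calling `assignPads` again after every upload is what gives you the automatic redistribution.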
I know it's possible to datalog using the Ethernet shield, so rather than sending actual audio files over Ethernet, maybe the MP3s could be stored as URL links that are logged/synced in a Google Doc or Excel spreadsheet people upload to. That list would then be stored/logged on the SD card, and each capacitive touch sensor would be assigned to one entry (e.g. sensor 1 is assigned to row 1 of the spreadsheet, containing an MP3 URL) so the tracks can be launched/played, ideally polyphonically, via the touchpoints and maybe the VS1053? I know it's possible to launch MP3 URLs from the ESP32 and listen through the VS1053, similar to how internet radios work on the ESP32, so I was thinking maybe the Ethernet logging could work in a similar way?
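The sensor-to-row lookup in that scheme is simple enough to sketch. Assuming (my assumption, not a given) that the synced sheet ends up on the SD card as a plain text file with one MP3 URL per row, the mapping from sensor number to URL is just "return line N":

```cpp
#include <sstream>
#include <string>

// Given the text logged to the SD card (one MP3 URL per row) and a
// sensor number (0-based, matching the spreadsheet row), return the
// URL that sensor should trigger, or "" if that row doesn't exist yet.
std::string urlForSensor(const std::string& logText, int sensor) {
    std::istringstream in(logText);
    std::string line;
    for (int row = 0; std::getline(in, line); ++row)
        if (row == sensor) return line;
    return "";  // sensor has no track assigned yet
}
```

The returned URL is what you'd hand to the streaming/VS1053 side, the same way the ESP32 internet-radio examples feed a station URL to the decoder.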
I'm also aware of the SdFat library for controlling SD cards, but I haven't worked with it in detail; I'm just trying to figure out this workflow first. Could I at some point assign touch sensors to SdFat commands/functions?
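Yes, in principle: the glue is just edge detection on the touch readings, with each newly pressed pad calling whatever SdFat/playback routine you've mapped to it. The Adafruit 12-key breakouts report all pads at once as a 12-bit mask (in the Adafruit_MPR121 driver that would be `touched()`, which is my assumption about the library you'd use), so the trigger logic can be sketched in portable C++ like this:

```cpp
#include <cstdint>
#include <vector>

// Compare the current and previous 12-bit touch masks and report which
// pads were newly pressed this scan (rising edges only), so a held
// finger doesn't retrigger its sound every loop.
std::vector<int> newlyTouched(uint16_t now, uint16_t before) {
    std::vector<int> pressed;
    uint16_t rising = now & static_cast<uint16_t>(~before);
    for (int pad = 0; pad < 12; ++pad)
        if (rising & (1u << pad)) pressed.push_back(pad);
    return pressed;
}
```

In the main loop, each pad number returned here would index into your pad-to-file (or pad-to-URL) table and kick off playback, which is effectively "assigning a touch sensor to an SD function".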
Anyhow, am I overthinking this or is this even possible?
Is there a way to maybe link these two approaches together at all?
Definitely a big brainstorm at the moment, so any help or constructive advice would be much appreciated!
Thanks in advance Xx