Thank you for the advice @zwieblum. I had a quick look on Google at UDP and, to my understanding (which could be completely wrong!), it looks like UDP would only be used over a normal WiFi connection and not with an ESP-NOW network?
The current ESP-NOW code I am running (previous post) does contain delivery/reception confirmation, which I am assuming would detect if there were an issue with broken packets?
I am also making another huge assumption that sending data packets continuously would increase the power usage?? As both the remote control & parrot will be running on batteries, I would like to keep power drain to a minimum.
Update 1(A): Release the magic blue smoke!
So for the second time on this project I have blown up a BT201 MP3 module! Oopsie! I know the component that was at fault is a 1k linear potentiometer. The first time this happened was just bad planning by me: I assumed that I could just output from the BT201 module, through the pot & into the speaker..... just into one pin of the pot & out of the other (I can't remember which pins they were, but it did work....... for a while anyway!)
The second time I wired it as shown in a YouTube video, following this picture:
Just assume that the headphone jack shown in the picture was the speaker connection of the BT201 module. It had been working fine for a few days, but just an hour ago it stopped again. There had been a little crackling when changing volume, so I'm not sure if it was a duff pot or if my soldering was off.
Either way, twice now I've managed to break these boards!
Why this way? Why the change?
The BT201 module outputs sound simultaneously through both the mono speaker & stereo headphone jack. This was a perfect combination, allowing me to play both audio channels through the speaker for the sound output I needed, while also connecting the left channel from the headphone jack to the analogue input of the ESP8266, letting me read the vocal signal to sync the mouth movement to.

Everything worked fine apart from one aspect..... the volume needed to be at maximum to read enough variation on the analogue signal to determine what was sound vs noise. The BT201 has onboard volume control, but that affects both the headphone & speaker at the same time, making the speaker very loud!!! Not so good when you are doing most of your experimenting at night while other people in the house are trying to sleep. Hence adding the pot inline with the speaker for independent control.
So the lesson learned from that setup is that reading the analogue signal this way picks up a lot of noise. I followed the circuit diagram below to build a filter, which did help with the issue, however the volume still needed to be at max. Maybe it was the wrong design to use? I don't know, just trying my best to make sense of things as I go along!

So with all that being said & done, where do we stand now?
Playful Technology on GitHub has also done a lot of work converting one of these toys to Arduino. I have been working over their code & spliced it together with some ESP-NOW code I had been playing with, allowing me to call certain movement functions from the buttons of the remote. It had been working relatively fine most of the afternoon, however I think the head motor is sticking a little bit, so I need to take the parrot apart to check for anything catching inside......

But the more I think of it, the more I am tempted to see if I can cut it apart and replace the single DC motor for eyelid/mouth movement with a pair of micro servos, which would make control so much easier without needing to worry about the way they have implemented their limit switches... well, to be honest, it's more the eye side of it. The mouth is easy: apply voltage to open the mouth, and when the voltage stops a spring snaps it closed. If you run that same motor in reverse it cycles through eyelid movements, which you have to stop at certain limit points..... that would be so much easier to do with a servo!!!
Anyways, I need to be sensible & walk away from that side of it for a moment while I still have my calm! I guess tonight / tomorrow will probably be a little bit of a play with getting I2S sound working, rather than getting myself too worked up over the movement control.
Anyone who has I2S sound knowledge:
I have put together this flow chart to try & better explain the way I am thinking about syncing movement to sound. I am not after anyone to do the coding for me, but if you have done something similar, could you confirm/correct whether I am planning this the right way?
Again, sorry for being all a bit wordy, but I figure that if I can't show in plain code what I am trying to do, then giving my thoughts / explanations is the next best thing.
Thank you

