Project: "Visual Flight Radar" (VFR) - you are invited

By now I am familiar enough with the Portenta H7, so I want to use the board for a "fancy" project.
Please join me - at least to share thoughts and hints, or to brainstorm with me... Thank you.

Project
"Visual Flight Radar" (VFR)

Intention
I want to track airplanes in flight (flying over my home). I want to detect and track an airplane via microphone(s), adjust the camera mount (tilt and pan, not (yet) zoom), and find and follow the airplane above my home (point the camera toward it and start video recording).

Forget the mechanics for now (stepper motors and all that) - let's focus on the main items to implement as SW/FW:

Project Items

  1. get the digital MICs on the Portenta Vision Shield to work (capture the sound)
    Question: is one MIC enough to track a noisy object, or is spatial sound, e.g. with 4 MICs, better?
  2. install an AI/ML library for sound processing:
    find the direction of the sound source; at best, distinguish sounds and "recognize the type of airplane" (learn "sound signatures" to differentiate between models: single vs. twin engine, helicopter, type of engine, even the sound pattern of a specific airframe, identified via tail number)
    Question: is the Doppler effect already part of the library, or do I need to "teach it the physics of sound"?
  3. bring up camera vision:
    have a library for taking pictures or video; later also add video processing, e.g. keep the seen airplane centered in the frame, use the camera for tracking too, follow it also with vision... (for now mainly based on sound).

So it means I have to start with:

  • sound aggregation
  • video aggregation
  • use of AI and ML libraries (and learning based on sound)
    on Portenta H7 (or Portenta X8)

Actually, I guess at least a stereo MIC and stereo video would be nice. I am tending towards 4 MICs for spatial sound, mono vision first, but with the option to add stereo vision later.
Since even the Portenta H7 does not have spatial sound, I am tending towards using 2..4 modules and combining them (aggregate sound and vision data from all boards and post-process, connected via an ETH network, or all modules sending via SPI to a main Portenta module).

Project Achievements
In a first version: the Portenta H7 should recognize airplane sound (and not cars and mowers on the ground) and direct the camera module to it (pan and tilt info for the camera mast movement).
The camera should be pointed at the aircraft, record a video (or take photos), and track the flight path of the aircraft.

Later goals can be:
Also use the camera to track the airplane; use sound (and picture) to distinguish different aircraft types; learn, based on sound pattern and visual shape, which type of aircraft it is;
combine with ADS-B data (e.g. from FlightRadar24, FlightAware) when an aircraft is approaching (on the map); augment the video with ADS-B data (tail number, altitude); or use it to "teach" the system which type of aircraft it is.

Very fancy: with stereoscopic vision (dual camera) and the learned aircraft type, shape and size/dimensions known: estimate the altitude above ground (AGL) based on the apparent size in the video. Speed is not important, but AGL would be cool (e.g. if a pilot violates the minimum safe altitude rules over a congested area - but I will not use it to file complaints).

The main idea is:
I am a pilot. I live right under the practice area, and all the airplanes and students of "my" flight school fly over my head, over my home. I want to "see" them (visually, not just on FlightRadar ADS-B) and send them a picture of their flight. LOL
And: I want to take a picture of the Osprey that sometimes flies around here, or of a biplane coming by occasionally.

Who wants to join me?
Who can provide suggestions, tricks, thoughts about this project?
Thank you.

Did your research find that all on-line flight mapping and reporting is 5 minutes behind real time, by law?

Not a big deal:
a) I have my own ADS-B receiver (via Raspberry Pi and PiAware), and it looks pretty real-time: if I see an aircraft on my map (as a local receiver) and step outside, the aircraft is almost exactly where I am looking.
The Internet says: 15 sec. processing delay (which makes sense).
b) even if there is a delay in web-based FlightRadar or FlightAware: I just need info about an approaching aircraft. When it is still 5 min. away from my home and I can predict the flight path, it should pass over my home. So it should still be possible to assume which airplane it is when the camera sees it (there are not so many very close to each other).

And: I could also filter based on reported MSL (altitude): high-flying aircraft I do not need to track (e.g. all the airliners above my home). Due to the sound propagation delay, the sound tracking would fail for them anyway (too far off based on the arriving sound).

For AI and ML, to recognize which plane it "was", any delay does not really matter: I only need to know, after taking the photo or video, which plane it was. I can augment with this info even hours later (or even manually). I just need to know the type, to label the sound pattern with it (learning the sound signature).

Then I am sure you understand the security aspect of aircraft tracking. Good luck.

That is an extremely difficult task, as you will discover, for any source of sound.

Problems include but are not limited to reliably identifying the desired sound signal, accurate timing of signals (requires DSP cross correlation techniques) and trilateration to locate the source in 3D. A source moving at high but unknown speed, at large but unknown distance away considerably complicates the latter.

Fun project, though! My approach would be to develop all the algorithms on a decent desktop computer, rather than assume it can be crammed into something as limited as the Portenta H7.

Sure, it is difficult (that is exactly why I want to do it).
But why should it not be possible to use two (or even 4) MICs and do signal processing, e.g., at the simplest, just measure the delay with which the same signal arrives? That would give me the direction the signal is coming from. Do a cross-correlation between the two MICs and find the offset, which is an indication of direction (OK: I have to bear in mind the internal delay, e.g. with an I2S signal one channel comes before the other, but this timing adjustment I can do later).
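The delay-measurement idea can be sketched as a brute-force cross-correlation over a small lag range. This is only the core idea (function name and types are illustrative): a real implementation would window, band-limit and interpolate for sub-sample resolution, and the direction angle then follows from asin(c·Δt / mic_distance).

```c
#include <stddef.h>

/* Find the lag (in samples) where the right channel best matches the
 * left channel, by brute-force cross-correlation over +/- max_lag.
 * A positive result means the sound reached the left MIC first
 * (i.e. the right signal is a delayed copy of the left one). */
static int tdoa_lag(const short *left, const short *right,
                    size_t n, int max_lag)
{
    int best_lag = 0;
    long long best = 0;
    for (int lag = -max_lag; lag <= max_lag; lag++) {
        long long sum = 0;
        for (size_t i = 0; i < n; i++) {
            long long j = (long long)i + lag; /* right index, shifted */
            if (j < 0 || j >= (long long)n)
                continue;
            sum += (long long)left[i] * right[j];
        }
        if (sum > best) { best = sum; best_lag = lag; }
    }
    return best_lag;
}
```

With e.g. a 100 mm MIC spacing and 343 m/s speed of sound, the maximum possible delay is only ~0.3 ms, so a high sample rate (or interpolation of the correlation peak) is needed for fine angular resolution.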

I agree: the intention is anyway to use the MCU (Portenta H7) to aggregate the sound from the MICs and the video for a host PC, and send it via network to the host computer. I intend to use Matlab on the host to do the signal processing and the algorithm.
The beauty: Matlab can generate code for some STM32 MCUs. I think I have seen that Matlab recognizes my Portenta H7 board and tells me "it is supported by Matlab".

The problem is not the data and signal processing (e.g. the spatial audio from the MIC array) - the "trouble" starts with the FW code to set up the MIC, the camera etc.,
and to send the data from the MCU to the host PC (OK, the network is solved - I use it already).
Once I get the data (sound and video), any platform is fine for post-processing, and Matlab might help me a lot there.
I just need to bring up the MICs and the camera interface, to have a data stream going to the host.

But in addition to using the MCU just as a "sensor" to aggregate the data, I also want to use it for AI/ML.
We will see (anyway later, after all the "drivers" are running and data is coming in).

The first part is to get both MICs on the Vision Shield and the camera to work. Everything above the HW layer is "a piece of cake" (and can be done on a host PC first, before I convert it into embedded FW code, e.g. let Matlab generate code for the MCU).

Project Update - 1
I have launched a new project (in STM32CubeIDE), updated the STM HAL drivers, middleware etc.
It compiles clean (no warnings) and it runs.
I reused a previous project as a starting point, which has all the features I might need later (SD card, ETH network, SDRAM, "Pico-C", USB-C UART, QSPI flash, configuration stored in RTC...).

So, my starting project is up and running. It is an STM32CubeIDE project (not Arduino GUI and not sketch based; a full native C-code project). The project ZIP is here (to grab some source code):
Portenta H7 VFR project (STM32CubeIDE)

So, next steps:

  1. bring-up digital microphones (PDM), starting point:
    STM32H7 PDM digital microphones
  2. bring-up the camera interface (DCMI), starting point:
    DCMI camera interface
  3. integrate OpenMV, starting points:
    STM32 AI for vision
    OpenMV STM32H7 - separate HW

So, STM already supports vision AI via FP-AI-VISION1:
STM Vision1 AI

Cool - I think it will not be so difficult in the end...
Stay tuned.

Project Update - 2
I have the PDM MICs working: I have a simple peak meter giving me an indication via UART. If I clap my hands, I see it. So the MIC is working (still to check the gain and the audio quality - see the USB issue below).

For testing the MICs, and to bring the audio to a host PC in order to listen to it, I tried to bring up USB-A on the main board (as a second USB, keeping the HS USB on the USB-C connector working for my control UART).

I am able to see the enumeration; I see the USB audio device as a MIC input on the PC.
So the main board USB-A as an FS device seems to work.

But: it always ends up on the MCU in an ISO OUT error (in the USB interrupt handler). No audio is streamed because of it.
"Grrr": I saw on the Internet that this seems to be a known issue with the STM HAL drivers, not handling ISO transfers and endpoints (EP) properly.
But when I configure the USB-A as a VCP device (USB UART) - it works.
So the USB-A itself is OK, the enumeration is OK; just the ISO audio transfer fails.

What I have realized so far:

  1. The STM HAL drivers do not really support using the HS and FS devices in parallel (just one or the other). OK, I could fix that by adding code (to distinguish whether I configure the HS or the FS device in the MCU).
    YES, the MCU can handle two USB interfaces in parallel! Cool.
  2. The main board (for the USB-A) has a power control chip to provide VBUS power (or not).
    The default (via pull-down) is: enable VBUS power. So the PC would see it as a host device (when providing the power on VBUS).
    Solution: close jumper J12 on the main board, initialize and drive PG3 high (in order to disable the VBUS power). OK.
  3. In the USB enumeration (the device descriptor), it is better to specify "self-powered" instead of "bus powered".
  4. It looks like: when using USB HS (with the external PHY), the clock config for the USB device does not matter. I had a bug in the USB clock config (using the wrong PLL). But for USB FS it is important to get the clock config correct, e.g. to use PLL3 and make sure it really generates 48 MHz.

All fine, except that my ISO transfer via USB-A, as an FS device, to send the MIC audio frames, does not work due to this ISO EP error.
Anybody with a clue why, and how to fix it?

Otherwise, I will use the network (UDP) and send my MIC audio to a Python script (or Matlab) where I can analyze or listen to it.
Or I use the second USB VCP to send the audio; it is for sure fast enough (1.536 Mbit/s needed; I already use the VCP at 1.8432 Mbit/s).
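The 1.536 Mbit/s figure follows directly from the stream format, assuming 48 kHz, 16-bit, stereo (the sample rate is my assumption here; the bit depth and channel count appear later in the log). A trivial helper makes such sanity checks explicit:

```c
#include <stdint.h>

/* Raw PCM bitrate in bit/s: sample rate * bits per sample * channels.
 * Useful to check whether a given link (VCP, UDP...) can carry the
 * audio stream without compression. */
static uint32_t pcm_bitrate_bps(uint32_t fs_hz, uint32_t bits,
                                uint32_t channels)
{
    return fs_hz * bits * channels;
}
```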

It would just be nice to see the Portenta H7 as a MIC sound card (to get the MIC as a USB audio device). It would allow me to use audio tools on a (Windows) PC, e.g. Audacity, without writing scripts or using "tricks" to convert my audio.

So far, a big step forward.

Project Update - 3
The USB-A audio streaming to host PC is working!

I found that the USB device FIFO size was configured too small. But fixing it did not really solve the issue.

The only working solution:

  • install a USB sniffer; I use "Device Monitoring Studio"
  • when this tool is running, even when not sniffing data:
    • I see my USB enumeration is correct
    • I see the USB traffic from the MCU coming in on the PC audio recording (I fill the USB buffer with a sine and the sine comes in on the PC via USB - cool)
  • when I close the tool:
    • it keeps working and streaming audio
    • BUT: after a new start, or unplugging and replugging the USB-A cable while the tool is not running - no audio is received

Very strange: it looks more like an issue with my Windows 10 computer, the timing, esp. on USB device start. With the sniffer running (never mind if and what it sniffs) - all fine; audio comes in via USB from the Portenta H7 main board via the USB-A (FS) connection.

Remarks:
it is very tricky to debug USB issues: when you use breakpoints on timing-sensitive code, e.g. during the enumeration, the USB times out and the PC does not see the device. So some parts of USB, e.g. enumeration and device connection, must run in real time, and there is no way to debug them with breakpoints.

Anyway, a step forward: now I get my audio from the MCU via USB, like a microphone sound card.

Here is how my sine, generated inside the MCU and sent via USB to the PC, looks (viewed with Audacity).

Next steps:

  • connect the PDM MIC buffers to USB audio (get the real MIC audio)
  • analyze the signal (noise, level, distortion, SNR...)
  • but be aware: there can be a sync problem: both entities run with independent clocks, which can result in glitches and drop-outs due to buffer overrun/underrun (a known issue for me).
    Let's see if I can use the same PLL for the PDM MICs as for USB (PLL3), so that their clocks are "coupled" and both remain in sync. If not (clocks drifting apart), I could live with it; I do not really need bit-error-free audio for this application.
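On the buffer overrun/underrun point: the click rate follows directly from the relative clock error, independent of any HAL details. A small helper shows the relation (a sketch; e.g. a 100 ppm mismatch at 48 kHz slips one full sample roughly every 0.2 s, so a click about five times per second if each slip is audible):

```c
/* How long until two nominally equal audio clocks drift apart by one
 * full sample, given their relative frequency offset in parts per
 * million (ppm). fs_hz is the nominal sample rate. Returns seconds. */
static double seconds_per_sample_slip(double fs_hz, double ppm)
{
    /* relative rate error = ppm * 1e-6; one sample of drift after
       1 / (rate error) sample periods, i.e. 1e6 / (ppm * fs) seconds */
    return 1.0e6 / (ppm * fs_hz);
}
```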

What aspects of airplane sounds are you keying on? Exhaust note from internal combustion engines? Sound from a propeller? Sound of one, two or four jet engines?
What do you do when someone changes from a two-blade prop to a three-blade prop on the same airplane?

A "cool" question (to make it very sophisticated):
how to differentiate between exhaust noise, prop noise, wing noise, knocking valves...? Any idea?
But I am sure that based on the "noise signature" you can distinguish between prop and jet engine (my ear can, for sure).
And single engine vs. twin (multi) piston as well (my ear can do that too).

My feeling is that it is not necessary to be so "precise" in the audio analysis:

  • I need audio first just to direct the camera to any aircraft (and not cars on the street) - that should be a different "sound pattern"
  • later, I can train the system on: which aircraft picture/silhouette has which sound pattern (in general)?
    But this is not really the intention:
    as long as the MICs find and track any aircraft and adjust the camera - all fine.

After a while, with AI and ML, the system could learn how a specific aircraft sounds.
This is additional data in the end, but not a starting criterion needed to bring the system to work.

BTW (as a pilot and a bit familiar with airplanes): changing from a two-blade to a four-blade prop is not possible without changing the engine (gear ratio). And in the end you might not hear the difference: a four-blade prop rotates at half the RPM and might sound the same as a two-blade prop. A prop generating thrust "cannot" rotate faster, as the prop tips would otherwise exceed the speed of sound. I do not care about the prop. You can hear the "prop load", but not the type of prop (e.g. wooden, composite, metal, or its angle of attack when a fixed-pitch prop is used...).

You could also ask: is it possible to "hear" whether a plane has its flaps extended or not? Yes, you could. But that is not my interest.
I am pretty convinced I could figure out what type of engine it is (Lycoming, Rotax... as piston engines). On jet engines: no way! (maybe jet vs. turboprop).
Differentiating between airplane and helicopter might be the easiest part.

The challenge is on the MCU: bring up the MICs and the camera. Everything else - what to do with the data - comes later. Tracking with the camera via MIC sound is challenging enough, I think.
I am making progress...

Project Update - 4

Even though I could not fix the INCOMPISOOUT issue, I have now connected the PDM MIC to the USB transfer: all is noise, at almost 80% level all the time. Even though I see my hand clapping in the simple VU meter, I do not hear anything (just see a bit of it in the Audacity waveform).

So, based on what I have seen in code (I use the original Arduino/mbed audio.c file):

  1. this code (also) configures PLL3 for the PDM MIC. But this config kills my USB FS (on USB-A, the 48 MHz clock is not there anymore). OK, I could fix it (move the config to system_clock.c and correct the PLL3 config and usage).
    The good part: USB-A (as USB FS) runs on the same PLL3 as the SAI4 (for the PDM MIC) does: this should give me a fixed clock relation (always in sync, avoiding buffer slips).
  2. My USB MIC transfer is stereo (2 channels), 16-bit PCM.
    But when I look at the PDM MIC code, it looks like:
  • it uses 8 bits per channel; two channels for stereo are there, but the format does not match! Hence the noise? (for sure)
  • not sure, but I saw LSB first on SAI4 but MSB first in the PDM filter - not sure if that is an issue.
    For sure, the entire config is for 8 bits from the MIC as stereo (2 channels), not the 16 bits I expected and would need.

Next steps:

  • change the USB side also to 8-bit stereo, or better: fix "audio.c" to use 16 bits per channel
  • strange: why does Arduino/mbed use 8-bit data from the MIC? (16-bit should be reasonable and possible, based on the MIC datasheet); it results in bad resolution/sensitivity and low DNR (even though the MIC seems to be good for 16-bit samples, Arduino does not use 16 bits)

So, the PDM MIC config does not match my USB side (8-bit vs. 16-bit). And the Arduino/mbed code looks "strange" (why not 16-bit?), and it kills my clock config (OK)...

HAPPY NEW YEAR
and keep going with your project.

Project Update - 5

When you reuse a piece of code from other projects or a repository - check carefully whether the code is really complete.
Even if it compiles clean, that does not mean it is complete (or correct). An initialization - in my case an allocation of a buffer - can be missing.

Details:
I have reused the "audio.c" from the Arduino/mbed library (repository). It compiles fine.
I used the PDM buffer "PDM_BUFFER" in order to forward it to USB and to feed my VU peak meter.
But that was so WRONG!

A PDM MIC works this way:

  • it enables the PDM MIC, here via SAI4, and writes the PDM values (!!) into a buffer PDM_BUFFER
  • but PDM is Pulse Density Modulation (the density of the 1-pulses encodes the value); it is not yet PCM (Pulse Code Modulation, with values as amplitude levels)
  • it needs a PDMFilter: it takes the PDM signal, filters it and converts it to PCM values
  • so, two steps: everything goes first into the PDM buffer, then the PDMFilter generates a new output, here via the reference pointer "g_pcmbuf", a new buffer now holding PCM values
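To illustrate the two-stage idea, here is a deliberately crude sketch in plain C - NOT the ST PDMFilter (which uses proper CIC and FIR decimation stages); the function name and the 64:1 decimation ratio are illustrative only. It shows why the raw PDM buffer is not usable audio: one PCM sample only emerges after integrating many PDM bits.

```c
#include <stdint.h>
#include <stddef.h>

#define PDM_BITS_PER_PCM 64 /* illustrative decimation ratio */

/* Crude PDM-to-PCM conversion: count the 1-bits in each group of
 * 64 PDM bits (the pulse density) and map that density to a signed
 * 16-bit sample centered around zero. A real filter would also
 * low-pass and shape the noise. */
static void pdm_to_pcm_crude(const uint8_t *pdm, size_t pdm_bytes,
                             int16_t *pcm /* pdm_bytes*8/64 samples */)
{
    size_t samples = pdm_bytes * 8u / PDM_BITS_PER_PCM;
    for (size_t s = 0; s < samples; s++) {
        int ones = 0;
        for (size_t b = 0; b < PDM_BITS_PER_PCM; b++) {
            size_t bit = s * PDM_BITS_PER_PCM + b;
            ones += (pdm[bit / 8u] >> (bit % 8u)) & 1u;
        }
        /* density 0..64 -> signed sample; 50% density (silence) -> 0 */
        pcm[s] = (int16_t)((ones - PDM_BITS_PER_PCM / 2) * 512);
    }
}
```

All-ones PDM input gives a large positive sample, alternating 1010... (50% density) gives zero, i.e. silence.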

But this "g_pcmbuf" is a reference (pointer) to a buffer, and the buffer is never allocated: it points to NULL. So it writes into my DTCM RAM (address 0x00000000, i.e. NULL). No errors, all looks fine, but it overwrites my data (I wonder why it did not crash with some other strange misbehavior).
It does not trigger the MPU, because address 0 is DTCM and NULL is a valid address to write to via this pointer.

This "g_pcmbuf" is supposed to be allocated (created) in PDM.cpp - which I do not use. So the PDMFilter output buffer is never actually created (allocated). The pointer is initialized with NULL; no compile error, but WRONG!

And I was listening to the wrong buffer: PDM_BUFFER is just PDM. No surprise that it produces only noise and that the values never show a reasonable VU peak.
Know the data processing steps in your system (in my case: two stages, with two different buffers) and grab the result in the correct format from the correct buffer.

Next steps:

  • allocate a PDMFilter output buffer: this "g_pcmbuf" - allocate the memory (originally done in the unused PDM.cpp)
  • use the PCM values from there, not the PDM values! Use the PDMFilter output buffer.
  • check whether the PDMFilter provides the PCM values as 16-bit, signed, two channels (they probably do - just to make sure it does not end up in another "data format" issue)
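A minimal sketch of the missing allocation and the NULL guard. The name g_pcmbuf follows the mbed audio code I reused, but treat this as illustrative, not the actual library fix; the key point is that the filter must never run with a NULL destination, because on this MCU address 0 is valid DTCM RAM and the bug stays silent.

```c
#include <stdint.h>
#include <stdlib.h>

/* Filter output buffer; preset to NULL exactly like in the reused
 * code - which is why writing through it silently corrupted DTCM. */
static int16_t *g_pcmbuf = NULL;

/* Allocate the PCM output buffer before the PDM filter runs.
 * Returns 1 on success, 0 on failure; the caller must check and must
 * not start the filter (or DMA) on a NULL destination. */
static int pcm_buffer_init(size_t samples)
{
    g_pcmbuf = (int16_t *)calloc(samples, sizeof *g_pcmbuf);
    return g_pcmbuf != NULL;
}
```

On an MCU one would typically use a static array instead of calloc(), but the check-before-use principle is the same.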

Lesson learned: reusing a piece of code - even if it compiles clean - is no guarantee that it is complete and correct (in my case a buffer was used via a pointer preset to NULL, pointing to existing memory used for other purposes (RTOS stack, local data)).
A NULL-initialized pointer looks compile-clean but is not correct! And at run time it did not create a "strange crash", even though data was overwritten at address 0x00000000 - just luck.

Such issues can only be fixed by understanding the source code properly, not by assuming "compile clean" is enough. The logic in the code still matters, and compilers cannot find such "bugs" (e.g. missing code).

Project Update - 6
! I have the PDM MICs working now via USB Audio to a PC !
Cool.

Details:
First, I had audible artefacts, audible clicks in the sound on the PC (clearly visible in Audacity as "jumps" in the audio samples).
Knowing this kind of issue, I could solve it.
See details also here:
USB Audio issue

The PC USB audio clock is a separate, independent and "asynchronous" clock in relation to my MCU SAI4 clock used for the PDM MICs. They differ in frequency, they drift - and there is no way to bring both independent clocks into "sync" at the same speed, or to use one (the PC audio clock) as master clock to generate the other (fixed coupling).

But using the "fractional multiplier" on PLL3, used for the SAI4 (PDM MIC) clock, and with some debug means (watching waveforms on GPIOs for the timing relation), I could trim my system, and all is fine now.

Now I see the MICs and can analyze the MIC audio (with Audacity, using it as a USB recording device) etc.
Here is how it looks when playing a 1 kHz tone on a smartphone in front of the MICs.

I can now also analyze the spectrum, the sensitivity, the noise level, see the difference between left and right...

But:
I am not sure whether the Vision Shield is good enough for my use case.
I have seen:

  • one channel has a different sensitivity than the other
  • the reason is a bit obvious: one MIC sits directly in front of the ETH connector:
    so it gets a sound reflection and is blocked a bit by this connector. The reflection
    can partly cancel the sound, so the sound level is not the same between both channels!
  • not good for my use case (application): I want both MICs to be equally sensitive:
    I want to track an object via sound, and both MICs should give me the same sound level when the source is centered in front of them (they do not; OK, I could try to "correct" this)

Never mind - great progress!
Even these (USB related) sound artefacts are not really an issue: I just need "artefact free" audio on the PC (via USB) to evaluate the MICs and their parameters.
Later, when I do the MIC sound processing to track my airplanes, it does not matter anymore,
because these artefacts are generated by the PDM-MIC-to-USB path (the clock issue), which I will not need anymore later. So the MIC sound seems to be clean internally.

BTW:
Debugging the timing was important: how is the SAI4 clock (for the PDM MICs) related to the USB timing?
I did it via dedicated GPIO output pins: toggle one GPIO pin when a SAI4 frame (half-buffer) is ready, and another one when a USB frame is ready (also half-buffer, due to the "double buffer" used).
Then check the timing relation and the drift, and when a "wrong" buffer content would be sent (when the GPIO clocks "pass" each other).
This looked like this:

You can see that both clocks are drifting: they are not identical; the SAI4 clock was a bit slower than the USB audio clock. So this "must" generate an issue, a buffer overrun/underrun with audible artefacts (clicks).
Solved by adjusting the SAI4 PLL clock via the "fractional multiplier":
fine-tuning the SAI4 clock frequency.
(Better: add a "clock recovery" and trim this value continuously.)

Next steps:
Now:

  • start with the audio analysis, track the sound source (direction), generate signals to control the "mast motor" (yaw and tilt movement values, even without stepper motors to execute them; mechanics come at the end...)
  • add camera vision: take pictures or shoot a movie of the object (airplane) in view

The biggest part (the audio) is done and working. Cool!

Project Update - 7
Now TODO:

  • implement and test whether the MICs can track an object (e.g. where my smartphone as audio source is located; generate vectors to move the board...)
  • implement audio processing:
    do it in an RTOS thread, measure the volume level (RMS) and compare - a hint for the angle of the sound source;
    do a cross-correlation of the left and right signals and find the timing offset - a hint for the angle of the sound source

For video:
Instead of bringing up the DCMI camera, I intend to use a "Huskylens" connected via I2C to the Portenta H7:
Arduino Huskylens

Other next TODOs:

  • bring-up two PWM outputs, so that I can use RC model servo motors for the rotation of the camera
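Standard RC servos expect a ~50 Hz PWM frame with a pulse of 1.0 to 2.0 ms (1.5 ms = center). The firmware part then reduces to mapping an angle to the pulse width; with e.g. a 1 MHz timer clock, the microsecond value is directly the timer compare value. A sketch (the -90..+90 degree range and the function name are my assumptions; check the actual servo's datasheet for its real travel):

```c
#include <stdint.h>

/* Map a servo angle of -90..+90 degrees to the RC pulse width in
 * microseconds: 1000 us = -90 deg, 1500 us = center, 2000 us = +90.
 * Out-of-range angles are clamped. */
static uint32_t servo_pulse_us(int angle_deg)
{
    if (angle_deg < -90) angle_deg = -90;
    if (angle_deg >  90) angle_deg =  90;
    return (uint32_t)(1500 + (angle_deg * 500) / 90);
}
```

With the timer prescaled to 1 tick = 1 us and a 20000-tick period (50 Hz), this value can be written to the compare register as-is.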

I think I am pretty close...

Project - Aborting...

I will stop my project - it all exists already!
See this - exactly what I want to accomplish:
SkyScan with Raspberry Pi

I am familiar with the Raspberry Pi (I was thinking of using an RPi for the camera anyway), but this guy has everything I want to achieve (including the ADS-B tracking for my final stage).

Why should I continue when it is all already there?
There are also professional solutions, such as:
L3HARRIS Symphony Vantage
or even this one:
FlightLine Films - Optical Tracking

I could try to bring up the camera - so much effort, and in the end...?
I have even found cameras with PTZ (Pan, Tilt and Zoom) - I was thinking of having a zoom camera as well... any outdoor camera can do the job (maybe just trim the SW, teach it to look into the sky).

Pan Tilt Zoom (PTZ) camera for Raspberry Pi

Even such a camera can do the job:
PTZ POE camera with audio

It costs just USD 200. But the Portenta H7 plus mechanics, plus housing... would cost me even more (doing it myself), plus scratching my head for the whole project duration...

It is not reasonable to continue. It is fun, but it would be a waste of time and money over the long run (with a result still worse than other open source solutions).

I am sorry: I am stopping the project (and might divert to the Raspberry Pi).
Have fun with your projects, make progress on them, and maybe we will "see" each other again.
Best regards to the followers here.

Interesting that the "SkyScan" project uses radio location data, rather than sound, to identify and track the aircraft. Fairly trivial, with that approach.

To enable better tracking, most planes broadcast a signal known as Automatic Dependent Surveillance–Broadcast or ADS-B. This signal is at 1090MHz and can be easily received using a low cost Software Defined Radio (SDR), like the RTL-SDR which repurposes a digital TV chip.

From the ADS-B transmissions, you can get a plane's location and altitude. If you know where a plane is and where you are, you can do some math, point a camera at the plane and take a picture. If you have a Pan/Tilt camera lying around, you can have it automatically track a plane as it flies by and snap photos

Sure, exactly what I had in mind: find aircraft on a map, prepare, and track them when they are over your location.

I know ADS-B pretty well (I have my own Raspberry Pi ADS-B local receiver, independent of the Internet). I use ADS-B when I fly: I use an iPad with an ADS-B receiver during the flight (even my aircraft, equipped with a G1000 glass cockpit, has it as well).
ADS-B is pretty cool and very helpful during a real flight in an aircraft.

BTW: they are trying to get rid of radar facilities and use ADS-B instead. This will also be necessary when it comes to drones used as city air taxis. Nothing else is fast and precise enough to track airplanes. ADS-B is: "let the aircraft send its GPS position in 3D, with augmented info such as tail number and speed". A nice approach (apart from the fact that it relies heavily on the still not really public GPS). And ADS-B can be processed by ground stations, but also received directly plane-to-plane (or plane-to-my-ground in my case).

Playing with ADS-B is real fun (a Raspberry Pi with a radio receiver). You will be surprised how many planes you can see in the air, and how far away (for instance: I live south of LAX and I can see planes up to Bakersfield, potentially 100 miles, because they are unobstructed at high altitude).

A funny flight experience: when you fly your airplane at low altitude, maybe over hilly terrain, you receive your own ADS-B Out signal: it appears on screen like another plane. How often was I looking for an airplane close to mine...?

Project Update - 8

I cannot stop...
I thought about: "how to bring the MIC audio via network to a host PC?"
I know Audinate DANTE (I have used it in the past, but there is an issue getting a license for private use):
Audinate DANTE - professional network audio

So I searched and found VB-Audio VoiceMeeter:
VB-Audio, VoiceMeeter

Pretty cool: it provides a "virtual audio cable": via the network you can have audio input and output.
I have implemented a UDP network streamer on the Portenta H7 MCU which sends my MIC audio via the "VBAN protocol" - it works!
And:
YEAAAHHH - I see my MIC via the network: I can listen, I can record, I can forward to other audio applications. Pretty easy to implement (it is UDP with a 28-byte header plus the PCM MIC sound; just collect somewhat larger audio frames for efficiency).
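For reference, the 28-byte header mentioned above can be modeled as a packed C struct. The field names follow the public VB-Audio VBAN specification as I read it - double-check against the spec before relying on the exact field semantics (the SR index table and sub-protocol bits in particular):

```c
#include <stdint.h>

/* The 28-byte VBAN header that precedes the PCM samples in each UDP
 * packet (layout per the public VBAN spec; treat as a sketch). */
#pragma pack(push, 1)
typedef struct {
    char     vban[4];        /* magic: 'V','B','A','N'              */
    uint8_t  format_SR;      /* sample-rate index + sub-protocol    */
    uint8_t  format_nbs;     /* samples per frame - 1               */
    uint8_t  format_nbc;     /* number of channels - 1              */
    uint8_t  format_bit;     /* sample format, e.g. 16-bit PCM      */
    char     streamname[16]; /* zero-padded stream name             */
    uint32_t nuFrame;        /* incrementing frame counter          */
} vban_header_t;
#pragma pack(pop)
```

Filling this header in front of each PCM block and sending it via UDP (default port 6980) is all VoiceMeeter needs to pick the stream up as an input.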

So, if I used WiFi (instead of ETH right now), I would have a real wireless microphone. Cool.

Next steps:

  1. bring-up two TIMs with PWM, for RC model servo motors (for tilt and pan of the camera),
    but see my "concerns" in a separate post
  2. I have started to record the sound of aircraft above my home with my smartphone: a bit disappointing, see also my "concerns".
    I was thinking of recording some audio of flying airplanes and training the system by replaying it.
  3. Still the camera to do: not sure if I will do it with the Portenta H7 Vision Shield: potentially on another platform, e.g. Raspberry Pi, a bit more powerful and easier (e.g. via network or USB; at best use an IP camera via network, see PTZ cameras)

Anyway, meanwhile I am using the Portenta H7 in a pretty feature-rich way, with almost all the features on the board. Bringing up WiFi for the network audio, instead of the ETH cable right now, would be cool as well.