I was trying to explore the Image Classification example on the Arduino Q. The description says it uses a pre-trained selection of images, but it does not say what images it is pre-trained to recognise. Or if it does say, I missed it.
At first it recognised a pair of scissors as a wall clock with 80% probability. I did try some other images and, with the probability threshold set down to 1%, it came up with a can opener.
So I photographed a can opener and that was successful.
The "Developer Plan" of the Edge Impulse service is free. For those whose usage exceeds the allowances provided by the free plan, paid plans are available.
I had an idea: if I created a copy of this application, then maybe I could look at the code and even edit it. However, I can't find a way to do this. Even my copied version, when I run it, doesn't present me with the same user interface that the original one did.
You should definitely be able to select any of the App's files from the left-hand panel and see their content, even with the example App. And after making a copy, you can edit the Arduino sketch and Python code files.
What was the difficulty you encountered when you tried to do it?
Are you maybe hoping to look at the model file (in this specific case, mobilenet-v2-224px.eim)? If so, you should note that it is a binary, so not something you can just open up in a text editor:
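For instance, here is one quick, generic way to confirm that a file is binary rather than text (a sketch; pass it whatever path your copied App shows for the .eim file):

```python
def looks_binary(path: str, probe: int = 256) -> bool:
    """Heuristic check: NUL bytes almost never occur in plain text files,
    but are common in compiled binaries such as an .eim model."""
    with open(path, "rb") as f:
        return b"\x00" in f.read(probe)

# e.g. looks_binary("mobilenet-v2-224px.eim")  # expect True for an .eim
```

If this returns True, a text editor will only show you gibberish; the model has to be inspected through its runtime interface instead.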
Starting app "Classify images expand"
Sketch profile configured: FQBN="arduino:zephyr:unoq", Port=""
The library ArxContainer has been automatically added from sketch project.
The library ArxTypeTraits has been automatically added from sketch project.
The library DebugLog has been automatically added from sketch project.
The library MsgPack has been automatically added from sketch project.
Sketch uses 277 bytes (0%) of program storage space. Maximum is 1966080 bytes.
Global variables use 0 bytes (0%) of dynamic memory, leaving 523624 bytes for local variables. Maximum is 523624 bytes.
Open On-Chip Debugger 0.12.0+dev-ge6a2c12f4 (2025-05-22-15:51)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
debug_level: 2
clock_config
/tmp/remoteocd/sketch.elf-zsk.bin
Info : Linux GPIOD JTAG/SWD bitbang driver (libgpiod v2)
Info : Note: The adapter "linuxgpiod" doesn't support configurable speed
Info : SWD DPIDR 0x0be12477
Info : [stm32u5.ap0] Examination succeed
Info : [stm32u5.cpu] Cortex-M33 r0p4 processor detected
Info : [stm32u5.cpu] target has 8 breakpoints, 4 watchpoints
Info : [stm32u5.cpu] Examination succeed
Info : [stm32u5.ap0] gdb port disabled
Info : [stm32u5.cpu] starting gdb server on 3333
Info : Listening on port 3333 for gdb connections
CPU in Non-Secure state
[stm32u5.cpu] halted due to debug-request, current mode: Thread
xPSR: 0x41000000 pc: 0x0801748c psp: 0x2002ccf8
Info : device idcode = 0x30016482 (STM32U57/U58xx - Rev W : 0x3001)
Info : TZEN = 0 : TrustZone disabled by option bytes
Info : RDP level 0 (0xAA)
Info : flash size = 2048 KiB
Info : flash mode : dual-bank
Info : Padding image section 0 at 0x080f07ec with 4 bytes (bank write end alignment)
Warn : Adding extra erase range, 0x080f07f0 .. 0x080f1fff
shutdown command invoked
python provisioning
python downloading
Network classify-images-expand_default Creating
Network classify-images-expand_default Created
Container classify-images-expand-main-1 Creating
Container classify-images-expand-main-1 Created
Container classify-images-expand-main-1 Starting
Container classify-images-expand-main-1 Started
Not the onscreen window inviting me to drag in an image to run the image classifier on, like I showed in my first post.
No, I was looking for a list of words that the code must spit out when it recognises, or thinks it recognises, an object.
That is an interface that is displayed in the web browser. You need to open the web browser and navigate to the appropriate URL once the App is running.
You should also be able to obtain it via the /api/info endpoint of the HTTP server produced by running the model with the --run-http-server flag (as the Docker container of the Brick does):
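As a sketch of what that might look like, here is how the class labels could be pulled out of an /api/info-style JSON response. The exact schema is an assumption on my part (Edge Impulse runners typically report labels under model_parameters), and the sample response is made up for illustration:

```python
import json

def extract_labels(info_json: str) -> list[str]:
    """Pull the class labels out of an /api/info-style response.

    The field names here are an assumption about the runner's schema:
    labels are expected under model_parameters.labels.
    """
    info = json.loads(info_json)
    return info["model_parameters"]["labels"]

# A made-up response shaped like the runner's reply:
sample = '{"model_parameters": {"labels": ["can opener", "wall clock", "scissors"]}}'
print(extract_labels(sample))  # → ['can opener', 'wall clock', 'scissors']
```

In practice you would fetch the JSON from the running container, e.g. with curl or Python's urllib, and feed it to a function like this.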
Attach the external power supply to the USB-C hub to power everything.
Run the App.
So this means that between steps 1 and 2.1 there is no external supply to the hub, despite it being shown connected in the diagram, and in effect the hub will be overloaded.
To an old-fashioned hardware guy this sounds wrong, so I have not tried this sequence. Seeing how easily the Q is damaged, I am reluctant to try it, and I keep the external power connected to the hub at all times. Maybe this is why I am having difficulty running anything that suggests this sequence.
I wonder if anyone can confirm that they have tried this sequence without figurative smoke being generated?
This is the only proper way to connect and power an Uno Q system involving external USB-C devices. A PD-capable USB-C hub is mandatory: you power it with a +5 VDC/3 A USB-C PD power supply, and it will power the Uno Q and the connected USB devices too.
Exactly what I said. An unpowered hub will not work and could possibly damage the Arduino Q, or the port it is connected to. I know the Q is a very unforgiving board, having partially bricked my original board (from the Arduino store), while a replacement from an official UK distributor worked first time.
While the posted diagram shows the +5 V supply connected, the words say not to connect it yet.
Yes I know that you need an external power supply. But the sequence of attaching the external power supply only after everything is connected seems to defy all the rules of electronics.
At best, that diagram in the example should not show the connection to the +5 VDC/3 A supply before you tell the user to connect it up.
You can connect devices and power in any sequence you prefer, but devices required by the App must be connected before launching it.
This is because the Apps run inside containers, and there is no way to dynamically attach a system device to a container at runtime.
So, if you run an App that requires a USB camera and the camera is not connected to the USB hub at the moment of launch, the App cannot recognize it if you connect it while running.
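The kind of pre-launch check involved can be imagined as something like this. This is a sketch, not the actual App Lab code; the /dev/video* glob is an assumption about how V4L2 cameras appear on Linux:

```python
import glob

def camera_present() -> bool:
    """True if at least one V4L2 device node (e.g. /dev/video0) exists.

    A container only sees devices that existed when it was launched,
    so this condition must hold *before* the App starts, not after.
    """
    return len(glob.glob("/dev/video*")) > 0

if not camera_present():
    # Mirrors the error reported later in this thread.
    print("missing required device: no camera found")
```

Plugging the camera in after the container has started does not help; the App has to be stopped and relaunched with the device already attached.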
And the response I get from trying to run this app is
Starting app "Detect Objects on Camera"
python provisioning
missing required device: no camera found
I have tried opening Photo Booth and enabling my webcam to turn it on. But I still get the same result when trying to run this App.
Also, as my webcam has a built-in microphone, I have tried it for detecting sounds and, unsurprisingly, I get a "no sound input detected" error. I get the same thing when using my Griffin iMic, which worked on the Raspberry Pi the last time I tested it.
I am expecting the delivery of a USB microphone module over the weekend. But I am not hopeful it will rectify the situation.
The App Lab and Uno Q are unable to access a camera connected to your Mac. In fact, even though the App Lab is running on your Mac, it executes Apps on the Uno Q, so every device needed by an App must be connected to the Uno Q itself, typically via a USB hub.
To run the Object Detection example (and any other example requiring external hardware) correctly, you need to set up the board in SBC/Network mode and connect the App Lab on your Mac using network communication.
The following steps should allow you to set up your Uno Q to run the Object Detection example correctly.
Connect the Uno Q to a powered USB-C hub using the hub's host port or cable.
Connect the USB camera to the USB-C hub.
Power the USB-C hub via its PD power input port.
Wait a few minutes for the Uno Q to finish booting.
Launch the App Lab on your Mac.
Wait until the Uno Q shows up as a “Network” device.
Connect to it after inserting the Linux Credentials.
Open the Object Detection example and run it.
After half a minute or so, the app will be running.
A new tab in your default macOS web browser will open, directing you to a web interface hosted on the Uno Q and displaying the Object Detection user interface.
I am familiar with step 10 from running the Pin Toggle app, but this did not happen when I tried it just now. Nothing opened up, nor did the light on my webcam come on to indicate it was running.
This was displayed underneath the Python heading:
Activating python virtual environment
2025-11-07 08:53:27.555 WARNING - [MainThread] arduino.app_internal.core.ei: [ObjectDetection] Host: ei-obj-detection-runner - Ports: 1337 - URL: http://ei-obj-detection-runner:1337
======== App is starting ============================
2025-11-07 08:53:27.575 INFO - [WebUI.execute] WebUI: The application interface is available here:
Try clicking on the Network URL (the one with the explicit IP address of the board): it occasionally happens that the browser tab won’t open automatically.
The camera will start capturing as soon as you open the web page.
Again thanks, but this did not work. However, something happened: I copied the Network URL http://192.168.178.71:7000 and pasted it into the window, where it appeared without the :7000 part of the URL.
At which point the user interface for "Detect objects on images" loaded up, inviting me to drag and drop images to be recognised. I tried this and it worked as before.
But again nothing showed up regarding a camera, and the camera did not show its green light.