Arduino Q Image Classification Example

I was trying to explore the image classification on the Arduino Q. The description says it uses a pre-trained selection of images, but it does not say what images it is pre-trained to recognise. Or if it does say, I missed it.

At first it recognised a pair of scissors as a wall clock with 80% probability. I did try some other images and, with the probability threshold set down to 1%, it came up with a can opener.

So I photographed a can opener and that was successful.

Then I turned the winding handle through 90° and got this:

It thought it was a ballpoint pen!

So is there a list somewhere that defines exactly what things it is looking for?

Thanks for reading.

Hi @Grumpy_Mike. I don't know if it will exactly answer your question, but this is the "MobileNetV2 Image classifier - ImageNet base pretrained" model used by the "Image Classification" brick of the "Classify images" example App:

From the description of the model:

Based on: https://www.tensorflow.org/api_docs/python/tf/keras/applications/MobileNetV2

Thanks but it did not help much.

Is the edge application a paid-for app?

The model is trained on the "ImageNet" dataset:

The "Developer Plan" of the Edge Impulse service is free. For those whose usage exceeds the allowances provided by the free plan, paid plans are available.

You can see the details here:

Thanks for that.

I had an idea: I thought if I created a copy of this application, then maybe I could look at the code and even edit it. However, I can't find a way to do this. Even my copied version, when I run it, doesn't present me with the same user interface that the original one did.

You should definitely be able to select any of the files of the App from the left hand panel and see the content, even with the example App. And after making a copy, you can edit the Arduino sketch and Python code files.

What was the difficulty you encountered when you tried to do it?

Are you maybe hoping to look at the model file (in this specific case, mobilenet-v2-224px.eim)? If so, you should note that it is a binary, so not something you can just open up in a text editor:

What specific differences did you observe?

Just a screen full of writing.

Starting app "Classify images expand"
Sketch profile configured: FQBN="arduino:zephyr:unoq", Port=""
The library ArxContainer has been automatically added from sketch project.
The library ArxTypeTraits has been automatically added from sketch project.
The library DebugLog has been automatically added from sketch project.
The library MsgPack has been automatically added from sketch project.
Sketch uses 277 bytes (0%) of program storage space. Maximum is 1966080 bytes.
Global variables use 0 bytes (0%) of dynamic memory, leaving 523624 bytes for local variables. Maximum is 523624 bytes.
Open On-Chip Debugger 0.12.0+dev-ge6a2c12f4 (2025-05-22-15:51)
Licensed under GNU GPL v2
For bug reports, read
	http://openocd.org/doc/doxygen/bugs.html
debug_level: 2
clock_config
/tmp/remoteocd/sketch.elf-zsk.bin
Info : Linux GPIOD JTAG/SWD bitbang driver (libgpiod v2)
Info : Note: The adapter "linuxgpiod" doesn't support configurable speed
Info : SWD DPIDR 0x0be12477
Info : [stm32u5.ap0] Examination succeed
Info : [stm32u5.cpu] Cortex-M33 r0p4 processor detected
Info : [stm32u5.cpu] target has 8 breakpoints, 4 watchpoints
Info : [stm32u5.cpu] Examination succeed
Info : [stm32u5.ap0] gdb port disabled
Info : [stm32u5.cpu] starting gdb server on 3333
Info : Listening on port 3333 for gdb connections
CPU in Non-Secure state
[stm32u5.cpu] halted due to debug-request, current mode: Thread 
xPSR: 0x41000000 pc: 0x0801748c psp: 0x2002ccf8
Info : device idcode = 0x30016482 (STM32U57/U58xx - Rev W : 0x3001)
Info : TZEN = 0 : TrustZone disabled by option bytes
Info : RDP level 0 (0xAA)
Info : flash size = 2048 KiB
Info : flash mode : dual-bank
Info : Padding image section 0 at 0x080f07ec with 4 bytes (bank write end alignment)
Warn : Adding extra erase range, 0x080f07f0 .. 0x080f1fff
shutdown command invoked
python provisioning
python downloading
 Network classify-images-expand_default  Creating
 Network classify-images-expand_default  Created
 Container classify-images-expand-main-1  Creating
 Container classify-images-expand-main-1  Created
 Container classify-images-expand-main-1  Starting
 Container classify-images-expand-main-1  Started

Not the on-screen window inviting me to drag in an image to run the image classifier on, like I showed in my first post.

No, I was looking for a list of words that the code must spit out when it recognises, or thinks it recognises, an object.

That is the App Lab console.

That is an interface that is displayed in the web browser. You need to open the web browser and navigate to the appropriate URL once the App is running.

I believe you can see it here:

https://huggingface.co/datasets/ILSVRC/imagenet-1k#:~:text=Click%20here%20to%20see%20the%20full%20list%20of%20ImageNet%20class%20labels%20mapping

You should also be able to obtain it via the /api/info endpoint of the HTTP server produced by running the model with the --run-http-server flag (as the Docker container of the Brick does):
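For illustration only, a minimal Python sketch of reading the label list from such an endpoint might look like this. Note the port number (1337, as seen elsewhere in this thread) and the JSON key names "model_parameters"/"labels" are assumptions on my part, not confirmed from the Edge Impulse docs:

```python
# Hedged sketch: reading the class-label list from a model runner's
# /api/info endpoint. The endpoint path comes from the discussion above;
# the response key names are assumed for illustration.
import json
import urllib.request

def info_url(host, port=1337):
    """Build the /api/info URL for a runner listening on host:port."""
    return f"http://{host}:{port}/api/info"

def extract_labels(info):
    """Pull the class-label list out of a parsed /api/info response.
    Returns an empty list if the assumed keys are absent."""
    return info.get("model_parameters", {}).get("labels", [])

def fetch_labels(host, port=1337, timeout=5):
    """Fetch and parse /api/info, returning the label list."""
    with urllib.request.urlopen(info_url(host, port), timeout=timeout) as resp:
        return extract_labels(json.load(resp))

# Example usage (requires a running model server, so not executed here):
# print(fetch_labels("192.168.178.71"))
```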

What a list, this is huge!
I am assuming that this list of objects and animals is the full set of training classes and is not what is normally available.

In the introduction before the app runs you're told to use just the following words:-
Cat, Cell phone, Clock, Cup, Dog, and Potted plant.

So I am assuming that this is the cut-down set of classes used by the pre-trained model to detect objects on a live video feed from a camera.
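In essence, the classifier's final step is just ranking one score per label and reporting the top entries above a threshold. A minimal sketch of that mapping, with a tiny label set and made-up scores purely for illustration:

```python
# Minimal sketch: a classifier emits one probability per class, and the
# app reports the highest-scoring labels above a probability threshold.
def top_predictions(scores, labels, threshold=0.01, k=3):
    """Return up to k (label, score) pairs with score >= threshold,
    highest score first."""
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return [(label, s) for label, s in ranked[:k] if s >= threshold]

# Made-up scores over a tiny label set for illustration:
labels = ["wall clock", "can opener", "ballpoint", "scissors"]
scores = [0.80, 0.12, 0.05, 0.03]
print(top_predictions(scores, labels))
# -> [('wall clock', 0.8), ('can opener', 0.12), ('ballpoint', 0.05)]
```

Lowering the threshold to 1%, as described earlier in the thread, just lets more marginal guesses (like the can opener) through.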

There is another point that worries me. In the hardware setup it says:

  1. Connect the USB-C hub to the UNO Q and the USB camera.
     1. Attach the external power supply to the USB-C hub to power everything.
  2. Run the App.

So this means that between steps 1 and 2.1 there is no external supply to the hub, despite it being shown connected in the diagram, and in effect it will be overloading the hub.

To an old-fashioned hardware guy, this sounds wrong, so I have not tried this sequence. Seeing how easily the Q is damaged, I am reluctant to try it, and I have the external power connected to the hub at all times. Maybe this is why I am having difficulty running anything that suggests this sequence.

I wonder if anyone can confirm that they have tried this sequence without figurative smoke being generated?

What do you mean?

This is the only proper way to connect and power an Uno Q system involving external USB-C devices. A PD-capable USB-C hub is mandatory: you power it with a +5 VDC/3A USB-C PD power supply, and it will power the Uno Q and the connected USB devices too.

Please see https://docs.arduino.cc/tutorials/uno-q/single-board-computer/

Exactly what I said. An unpowered hub will not work and could possibly damage the Arduino Q, or the port it is connected to. I know the Q is a very unforgiving board, having partially bricked my original (from the Arduino store) board, while a replacement from an official UK distributor worked first time.

While the diagram posted shows the +5 V supply connected, the words say not to connect it yet.

Yes I know that you need an external power supply. But the sequence of attaching the external power supply only after everything is connected seems to defy all the rules of electronics.

At best, the diagram in the example should not show the connection to +5 VDC/3A before you tell the user to connect it up.

Please see https://docs.arduino.cc/tutorials/uno-q/single-board-computer/
So when you do, it says:

  • RAM: 2 GB or 4 GB LPDDR4 (we recommend the 4 GB variant for a smooth SBC experience)

So nice of the document to recommend something that so far does not exist.

@Grumpy_Mike

You can connect devices and power in any sequence you prefer, but devices required by the App must be connected before launching it.

This is because the Apps run inside containers, and there is no way to dynamically attach a system device to a container at runtime.

So, if you run an App that requires a USB camera and the camera is not connected to the USB hub at the moment of launch, the App cannot recognize it if you connect it while running.

Is this your case?
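Incidentally, on Linux a USB camera normally appears as a /dev/video* device node, and that node must exist before the container is created. A hedged sketch of checking for it at launch time (assuming V4L2-style naming; this is not the App Lab's actual check) would be:

```python
# Hedged sketch: check whether any V4L2-style video device nodes exist on
# the board. This mirrors (but is not) the App's "no camera found" check.
from pathlib import Path

def find_video_devices(dev_dir="/dev"):
    """Return sorted /dev/video* paths; an empty list means no camera
    is visible to Linux, so a container started now cannot see one."""
    return sorted(str(p) for p in Path(dev_dir).glob("video*"))

if __name__ == "__main__":
    devices = find_video_devices()
    print(devices if devices else "missing required device: no camera found")
```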

OK I know that.

No. Everything is connected up to the hub before I launch.

And the response I get from trying to run this app is

Starting app "Detect Objects on Camera"
python provisioning
missing required device: no camera found

I have tried opening Photo Booth and enabling my Web cam to turn it on. But I still get the same result when trying to run this App.

Also, as my webcam has a built-in microphone, I have tried it for detecting sounds and, unsurprisingly, I get a "no sound input detected". I also get the same thing when using my Griffin iMic, which worked on the Raspberry Pi last time I tested it.

I am expecting the delivery of a USB microphone module over the weekend. But I am not hopeful it will rectify the situation.

Hi @Grumpy_Mike

The App Lab and Uno Q are unable to access a camera connected to your Mac. Even though the App Lab is running on your Mac, it executes Apps on the Uno Q, so every device needed by an App must be connected to the Uno Q itself, typically via a USB hub.

To run the Object Detection example correctly (and any other example requiring external hardware), you need to set up the board in SBC/Network mode and connect the App Lab on your Mac using network communication.

The following steps should allow you to set up your Uno Q to run the Object Detection example correctly.

Double-check that you have finished the onboarding workflow and that the Uno Q has WiFi configured. If WiFi is appropriately configured, you should see the Uno Q showing up (also) as a Network device in the App Lab (see https://docs.arduino.cc/tutorials/uno-q/single-board-computer/#network-mode).

  1. Connect the Uno Q to a powered USB-C hub using the hub's host port or cable.
  2. Connect the USB camera to the USB-C hub.
  3. Power the USB-C hub via its PD power input port.
  4. Wait a few minutes for the Uno Q to finish booting.
  5. Launch the App Lab on your Mac.
  6. Wait until the Uno Q shows up as a “Network” device.
  7. Connect to it after inserting the Linux Credentials.
  8. Open the Object Detection example and run it.
  9. After a few dozen seconds, the app will be running.
  10. A new tab in your default macOS web browser will open, directing you to a web interface hosted on the Uno Q and displaying the Object Detection user interface.

Please let me know if this works for you.

Thanks for that.

I am familiar with step 10 from running the Pin Toggle app, but it did not happen when I tried just now. Nothing opened up, nor did the light on my webcam come on to indicate it was running.

This was displayed underneath the Python heading:

Activating python virtual environment
2025-11-07 08:53:27.555 WARNING - [MainThread] arduino.app_internal.core.ei: [ObjectDetection] Host: ei-obj-detection-runner - Ports: 1337 - URL: http://ei-obj-detection-runner:1337
======== App is starting ============================
2025-11-07 08:53:27.575 INFO - [WebUI.execute] WebUI: The application interface is available here:

I am not sure what this tells you or what action to take here.

Also, my Mac screen dump does not work, something that happens sometimes when running Apps. The cure is to restart my Mac.

@Grumpy_Mike,

Try clicking on the Network URL (the one with the explicit IP address of the board): it occasionally happens that the browser tab won’t open automatically.

The camera will start capturing as soon as you open the web page.

Again thanks, but this did not work; however, something happened. I copied the Network URL http://192.168.178.71:7000 and pasted it into the window, where it appeared without the

:7000

part of the URL.

At which point the user interface for "Detect objects on images" loaded up, inviting me to drag and drop images to be recognised. I tried this and it worked as before.

But again nothing showed up regarding a camera, with the camera not showing its green light.

@Grumpy_Mike

To detect objects with the USB camera, the correct example is “Detect Objects on Camera”.

Doh! My bad sorry.

So I reset my Mac and I am back to:-

Starting app "Detect Objects on Camera"
python provisioning
missing required device: no camera found