Hi,
I've started a course from HarvardX on EDX that introduces me to TinyML. One of the lessons includes using the Arduino TinyML kit which consists of the Arduino Nano 33 BLE Sense board and the OV7675 camera among other components. I've followed two tutorials on how to get started and managed to take a photo, the only issue is that the hexadecimal printout when converted looks saturated. I don't know if this is due to the code or something with the hardware itself and I can't find much else about the camera other than the EDX course and handout.
Here is an example image of what I am seeing.
The code compiled onto the board can be found in the Arduino IDE : File → Examples → Harvard_TinyMLx → test_camera
Here is the code as well
#include <Arduino_OV767X.h>
#include <mbed.h>
static mbed::DigitalIn button(p28);
#define PRESSED 0
int bytes_per_frame;
int bytes_per_pixel;
uint8_t data[160 * 120 * 2]; // QQVGA: 160x120 X 2 bytes per pixel (YUV422)
template <typename T>
inline T clamp_0_255(T x) {
return std::max(std::min(x, static_cast<T>(255)), static_cast<T>(0));
}
inline void ycbcr422_rgb888(int32_t Y, int32_t Cb, int32_t Cr, uint8_t* out) {
// Cb and Cr offset by 128
Cr -= 128;
Cb -= 128;
// Convert YUV to RGB using standard conversion formulas
out[0] = clamp_0_255(Y + (1.402 * Cr)); // Red
out[1] = clamp_0_255(Y - (0.344 * Cb) - (0.714 * Cr)); // Green
out[2] = clamp_0_255(Y + (1.772 * Cb)); // Blue
}
void setup() {
Serial.begin(115200); // Correct baud rate
while (!Serial);
if (!Camera.begin(QQVGA, YUV422, 1)) {
Serial.println("Failed to initialize camera!");
while (1);
}
bytes_per_pixel = Camera.bytesPerPixel();
bytes_per_frame = Camera.width() * Camera.height() * bytes_per_pixel;
// Optionally enable a test pattern for debugging camera output
// Camera.testPattern();
}
void loop() {
if (button == PRESSED) {
Camera.readFrame(data);
uint8_t rgb888[3]; // Buffer to hold RGB values
Serial.println("");
}
}
The code offered to convert hexadecimal to an image: colabs/4-2-12-OV7675ImageViewer.ipynb at master · tinyMLx/colabs · GitHub
The course and this handout show the similar instructions I followed: https://tinyml.seas.harvard.edu/assets/other/4D/22.03.11_Marcelo_Rovai_Handout.pdf
The course on EDX is titled: Deploying TinyML by HarvardX.
The conversion seems to work, so I believe it has to do with the compiled code for the Arduino, but I'm unsure what exactly would be the problem.