Problem using TensorFlow Lite and ESP32-CAM

I have this error in my code, but I have never used the RoundingDivideByPOT() function anywhere in my code. I don't know the reason or a solution, so can you help me?
assert failed: IntegerType gemmlowp::RoundingDivideByPOT(IntegerType, int) [with IntegerType = long int] fixedpoint.h:359 (exponent <= 31)

What code?

How do you expect us to spot this error if we can't see it?

Do you have a genuine Arduino Nano ESP32 or some other type of ESP32?

For informed help, please read and follow the instructions in the "How to get the best out of this forum" post.

ESP32-CAM

The code is the TensorFlow Lite person detection example for the ESP32-CAM, but with the model replaced by my own model and the output data type changed to float instead of uint8_t.

So whatever you think that word salad meant, it is not a substitute for posting the actual code you are using.

That is not an answer to the question that was asked.

Are you serious about wanting help?
From your ultra-evasive answers, it looks to me like you are not.

The link that jremington suggested is this one.
how-to-get-the-best-out-of-this-forum

Please read it and try to understand it before posting again.
We only know what you tell us, and without knowing what you have, we don't stand a chance. At the moment you are telling us nothing useful.

I moved your topic to a more appropriate forum category @abdulazizhany .

The Nano ESP32 category you chose is only used for discussions directly related to the Arduino Nano ESP32 board.

In the future, please take the time to pick the forum category that best suits the subject of your question. There is an "About the _____ category" topic at the top of each category that explains its purpose.

Thanks in advance for your cooperation.

#include <TensorFlowLite_ESP32.h>
#include "main_functions.h"
#include "detection_responder.h"
#include "image_provider.h"
#include "model_settings.h"
#include "model_data.h"
#include "tensorflow/lite/experimental/micro/kernels/micro_ops.h"
#include "tensorflow/lite/experimental/micro/micro_error_reporter.h"
#include "tensorflow/lite/experimental/micro/micro_interpreter.h"
#include "tensorflow/lite/experimental/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;

// Allocate memory for input, output, and intermediate arrays.
constexpr int kTensorArenaSize = 1024 * 1024;
uint8_t* tensor_arena = nullptr;
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging.
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;
  // Allocate memory for the tensor arena.
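  // Note: plain `new` draws from the default heap. On a stock ESP32 with
  // ~520 KB of internal SRAM, a 1 MiB request can only succeed if PSRAM
  // is enabled and mapped into the heap (see the PSRAM discussion below).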
  tensor_arena = new uint8_t[kTensorArenaSize];
  if (tensor_arena == nullptr) {
    error_reporter->Report("Failed to allocate memory for tensor arena.");
    return;
  }

  // Map the model into a usable data structure.
  model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    error_reporter->Report(
      "Model provided is schema version %d not equal "
      "to supported version %d.",
      model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // Pull in only the operation implementations we need.
  static tflite::MicroMutableOpResolver micro_mutable_op_resolver;
  micro_mutable_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
    tflite::ops::micro::Register_DEPTHWISE_CONV_2D());
  micro_mutable_op_resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                                       tflite::ops::micro::Register_CONV_2D(), 1, 5);
  micro_mutable_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_AVERAGE_POOL_2D,
    tflite::ops::micro::Register_AVERAGE_POOL_2D());
  micro_mutable_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_MAX_POOL_2D,
    tflite::ops::micro::Register_MAX_POOL_2D());
  micro_mutable_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_RESHAPE,
    tflite::ops::micro::Register_RESHAPE());
  micro_mutable_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_FULLY_CONNECTED,
    tflite::ops::micro::Register_FULLY_CONNECTED(), 1, 9);
  micro_mutable_op_resolver.AddBuiltin(
    tflite::BuiltinOperator_SOFTMAX,
    tflite::ops::micro::Register_SOFTMAX());

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, micro_mutable_op_resolver, tensor_arena, kTensorArenaSize,
      error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    error_reporter->Report("AllocateTensors() failed");
    return;
  }

  // Get information about the memory area to use for the model's input.
  input = interpreter->input(0);
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Get image from provider.
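  // Note: this assumes the model's input tensor is still uint8. If the
  // replacement model takes float32 input, writing into input->data.uint8
  // here will feed it garbage; check input->type (kTfLiteUInt8 vs
  // kTfLiteFloat32) before copying the image in.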
  if (kTfLiteOk != GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
                            input->data.uint8)) {
    error_reporter->Report("Image capture failed.");
  }

  // Run the model on this input and make sure it succeeds.
  if (kTfLiteOk != interpreter->Invoke()) {
    error_reporter->Report("Invoke failed.");
  }

  TfLiteTensor* output = interpreter->output(0);
  float* output_data = output->data.f;  // float output tensor; no cast needed

  // Process the inference results.
  float red_score = output_data[kRedIndex];
  float green_score = output_data[kGreenIndex];
  float yellow_score = output_data[kYellowIndex];
  RespondToDetection(error_reporter, red_score, green_score, yellow_score);
}

// Cleanup function to be called when no longer needed
void cleanup() {
  // Delete the dynamically allocated tensor arena memory
  delete[] tensor_arena;
  tensor_arena = nullptr;
}

I don't think you have 1024 * 1024 = 1048576 bytes of memory. I don't think you have anything like that amount of free memory to play with.

I use heap allocation because of that, and I don't think that is the error.

Correct, it will compile, but it will not work.

Although the ESP32 has only 520 KB of built-in RAM, ESP32-CAM modules have an extra 4 MB of PSRAM attached via the QSPI bus, so there might be enough...

But I have no idea how that extra memory is managed, or whether it is usable like this as part of the heap.

What is the solution? I need around 770 kilobytes.

I read about something like that, but I didn't find out how to use it.
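For reference: on the Arduino-ESP32 core, PSRAM is exposed through helpers such as psramFound() and ps_malloc(). A minimal, untested sketch of allocating the tensor arena from PSRAM (assuming PSRAM is enabled in the board options for the ESP32-CAM) might look like this:

#include <Arduino.h>

// Same size as in the sketch above; ~1 MiB will not fit in the
// ESP32's ~520 KB of internal SRAM.
constexpr int kTensorArenaSize = 1024 * 1024;
uint8_t* tensor_arena = nullptr;

void setup() {
  Serial.begin(115200);

  // psramFound() reports whether the core detected and initialized PSRAM.
  if (!psramFound()) {
    Serial.println("No PSRAM detected - cannot allocate a 1 MiB arena.");
    return;
  }

  // ps_malloc() allocates from external PSRAM rather than internal SRAM.
  tensor_arena = static_cast<uint8_t*>(ps_malloc(kTensorArenaSize));
  if (tensor_arena == nullptr) {
    Serial.println("PSRAM allocation failed.");
    return;
  }
  Serial.println("Tensor arena allocated in PSRAM.");
}

void loop() {}

If that allocation succeeds, the rest of the sketch can pass tensor_arena to the MicroInterpreter exactly as before; the ESP-IDF equivalent call is heap_caps_malloc(kTensorArenaSize, MALLOC_CAP_SPIRAM).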

Please use English. It's the only human language I know enough of to do anything more than ask for coffee or beer!

I edited it


Hopefully a forum member who knows about use of PSRAM on ESP32 can help you.

I hope so.