Code problems while trying to run a TensorFlow Lite CNN model

Hello all,

I tried to use the "hello world" example from the TensorFlow Lite examples

as a starting point for running CNN TensorFlow Lite inference.

I am using the Arduino Nano 33 BLE board.

I have converted the .tflite model to a .cc source file.

The network takes as input an array of 6 IMU signals over 150 time steps (a 150x6 array, uint8).

The output of the CNN should be true/false (detect whether a human is active or not).

Finally, I need to run this code on an M4 processor with only 256 KB of memory, so I want
to understand how to compute the actual memory footprint of the sketch.

I tried to define a 150x6 array of zeros just to try to compile the sketch with the Arduino IDE, but I got the attached error.

Can someone help me? I am new to Arduino and don't have a lot of experience with C coding.

Summary:

  1. Help with the compilation errors.
  2. Understanding the actual footprint of the sketch on the M4 processor, without overheads.

Here is the code from main sketch:

#include <TensorFlowLite.h>

#include "constants.h"
#include "main_functions.h"
#include "model.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;


// Sharon: enlarged the arena; the 150x6 uint8 input alone is 900 bytes,
// plus working memory for the intermediate tensors.
// constexpr int kTensorArenaSize = 2000;
constexpr int kTensorArenaSize = 8192;
// Keep aligned to 16 bytes for CMSIS
alignas(16) uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  tflite::InitializeTarget();

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf(
        "Model provided is schema version %d not equal "
        "to supported version %d.",
        model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    MicroPrintf("AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);

  // Define an input array of 3x gyro + 3x accel signals, all zeros - sharon.
  // Note: "uint8_array_t" is not a type, and a raw array cannot be assigned
  // to a TfLiteTensor*; copy the window into the tensor's data buffer instead.
  static uint8_t in_array[150][6] = {0};
  memcpy(input->data.uint8, in_array, sizeof(in_array));
  output = interpreter->output(0);

  // Keep track of how many inferences we have performed.
  inference_count = 0;

}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x = position * kXrange;

  // Quantize the input from floating-point to integer
  int8_t x_quantized = x / input->params.scale + input->params.zero_point;
  // Place the quantized input in the model's input tensor
  input->data.int8[0] = x_quantized;

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    MicroPrintf("Invoke failed on x: %f\n", static_cast<double>(x));
    return;
  }

  // Obtain the quantized output from model's output tensor
  int8_t y_quantized = output->data.int8[0];
  // Dequantize the output from integer to floating-point
  float y = (y_quantized - output->params.zero_point) * output->params.scale;

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(x, y);

  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}

Sharon

A screenshot of text is rarely useful. Copy and paste your code and error message; it helps people help you. How to get the best out of this forum - Using Arduino / Installation & Troubleshooting - Arduino Forum

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.