Error when compiling hello_world

Hi, I have an error compiling hello_world: arm_nn_mat_mult_nt_t_s8.c:136: undefined reference to `__SXTB16_RORn'

Board: Arduino Nano 33 BLE Sense
IDE version: 1.8.12
Mbed OS

I keep getting this error and I really don't know what to do. Please give me a hand! I would appreciate it a lot!

The full error messages are as follows:
C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\kissfft\kiss_fft.c:378:9: note: in a call to built-in function 'memcpy'

libraries\Arduino_TensorFlowLite\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions\arm_nn_mat_mult_nt_t_s8.c.o: In function `arm_nn_mat_mult_nt_t_s8':

C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:111: undefined reference to `__SXTB16_RORn'

C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:112: undefined reference to `__SXTB16_RORn'

C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:118: undefined reference to `__SXTB16_RORn'

C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:125: undefined reference to `__SXTB16_RORn'

C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:136: undefined reference to `__SXTB16_RORn'

libraries\Arduino_TensorFlowLite\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions\arm_nn_mat_mult_nt_t_s8.c.o:C:\Users\Administrator\Documents\Arduino\libraries\Arduino_TensorFlowLite\src\tensorflow\lite\micro\tools\make\downloads\cmsis\CMSIS\NN\Source\NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:137: more undefined references to `__SXTB16_RORn'

Please help, Thanks!

Please show us your sketch.

Use Ctrl+T to format your code.

Attach your ‘complete’ sketch.

Hi,
Welcome to the forum.

What Arduino board are you using?
What version IDE?
What OS?

Thanks... Tom... :grinning: :+1: :coffee: :australia:

I'm sorry, I forgot to post detailed information about it.
I'm using an Arduino Nano 33 BLE Sense.
IDE version is 1.8.2.
Mbed OS
Thanks a lot!

Hi,
We need to see your code please.

Thanks.. Tom... :grinning: :+1: :coffee: :australia:

Thanks! I have posted my code at the beginning of the topic; please have a look.

Hi,
Please do not go back and edit old posts by inserting code; put your code in a new post so it will not cause confusion when someone else reads this thread.

Tom.. :grinning: :+1: :coffee: :australia:

Thanks, I'll put my code in a new post.

This is an example from the Arduino library called Arduino_TensorFlowLite.
This is my hello_world code:

#include <TensorFlowLite.h>

#include "main_functions.h"

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "constants.h"
#include "model.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 2000;
uint8_t tensor_arena[kTensorArenaSize];
} // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x = position * kXrange;

  // Quantize the input from floating-point to integer
  int8_t x_quantized = x / input->params.scale + input->params.zero_point;
  // Place the quantized input in the model's input tensor
  input->data.int8[0] = x_quantized;

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed on x: %f\n",
                         static_cast<double>(x));
    return;
  }

  // Obtain the quantized output from model's output tensor
  int8_t y_quantized = output->data.int8[0];
  // Dequantize the output from integer to floating-point
  float y = (y_quantized - output->params.zero_point) * output->params.scale;

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(error_reporter, x, y);

  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}

This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.