How to start SPI again after calling SPI.end()?

Hi guys,

I am playing around with my camera and MOSFETs. I can control the camera's power with the MOSFET, but now I have an issue: once I turn off SPI, I can't start it again. The reason I want to call SPI.end() is to save energy and reduce the current draw. How can this be solved? I tried the simple approach of calling SPI.end() once the camera task is finished and SPI.begin() before the camera is used again, but it seems impossible to bring SPI back to life :confused:

Oi! I did not know that turning off SPI saved power. Could you post the reference you found documenting that SPI.end() saves power?

Based on my experiments, even with the MOSFET I am not able to put the device into sleep mode between two camera tasks (capturing images). Without the camera attached, the sleep-state current is around ~350 µA, but with the camera the lowest I can reach is ~11 mA. Does that mean some part (SPI, CS, or Wire) is causing this higher draw?

I use RFM69 radio modules and communicate with them over SPI. I don't turn SPI off, and I get several months from 3x AA batteries. Perhaps there is something active in your hardware that is drawing power.

#include "image_provider.h"

#if defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)
#define ARDUINO_EXCLUDE_CODE
#endif  // defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)

#ifndef ARDUINO_EXCLUDE_CODE

// Required by the Arducam library
#include <SPI.h>
#include <Wire.h>
#include <memorysaver.h>
// Arducam library
#include <ArduCAM.h>
// JPEGDecoder library
#include <JPEGDecoder.h>

// Checks that the Arducam library has been correctly configured
#if !(defined OV2640_MINI_2MP_PLUS)
#error Please select the hardware platform and camera module in the Arduino/libraries/ArduCAM/memorysaver.h
#endif

// The size of our temporary buffer for holding
// JPEG data received from the Arducam module
#define MAX_JPEG_BYTES 5120
// The pin connected to the Arducam Chip Select
#define CS 7

// Optional secondary destination for the grayscale image; must point to
// caller-provided storage before DecodeAndProcessImage() writes to it.
int8_t* image_send = nullptr;
int index_test;

// Camera library instance
ArduCAM myCAM(OV2640, CS);
// Temporary buffer for holding JPEG data from camera
uint8_t jpeg_buffer[MAX_JPEG_BYTES] = {0};
// Length of the JPEG data currently in the buffer
uint32_t jpeg_length = 0;

// Get the camera module ready
TfLiteStatus InitCamera(tflite::ErrorReporter* error_reporter) {
  TF_LITE_REPORT_ERROR(error_reporter, "Attempting to start Arducam");
  // Enable the Wire library
  Wire.begin();
  TF_LITE_REPORT_ERROR(error_reporter, "After Wire begin!");
  // Configure the CS pin
  pinMode(CS, OUTPUT);
  digitalWrite(CS, HIGH);
  TF_LITE_REPORT_ERROR(error_reporter, "After CS set to HIGH!");
  // initialize SPI
  SPI.begin();
  TF_LITE_REPORT_ERROR(error_reporter, "After SPI is set to begin!");
  SPI.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
  // Reset the CPLD
  myCAM.write_reg(0x07, 0x80);
  delay(100);
  myCAM.write_reg(0x07, 0x00);
  delay(100);
  // Test whether we can communicate with Arducam via SPI
  myCAM.write_reg(ARDUCHIP_TEST1, 0x55);
  uint8_t test;
  test = myCAM.read_reg(ARDUCHIP_TEST1);
  if (test != 0x55) {
    TF_LITE_REPORT_ERROR(error_reporter, "Can't communicate with Arducam");
    delay(1000);
    return kTfLiteError;
  }
  // Use JPEG capture mode, since it allows us to specify
  // a resolution smaller than the full sensor frame
  myCAM.set_format(JPEG);
  myCAM.InitCAM();
  // Specify the smallest possible resolution
  myCAM.OV2640_set_JPEG_size(OV2640_160x120);
  delay(100);
  return kTfLiteOk;
}

// Begin the capture and wait for it to finish
TfLiteStatus PerformCapture(tflite::ErrorReporter* error_reporter) {
  TF_LITE_REPORT_ERROR(error_reporter, "Starting capture");
  // Make sure the buffer is emptied before each capture
  myCAM.flush_fifo();
  myCAM.clear_fifo_flag();
  // Start capture
  myCAM.start_capture();
  // Wait for indication that it is done
  while (!myCAM.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK)) {
  }
  TF_LITE_REPORT_ERROR(error_reporter, "Image captured");
  delay(50);
  // Clear the capture done flag
  myCAM.clear_fifo_flag();
  return kTfLiteOk;
}

// Read data from the camera module into a local buffer
TfLiteStatus ReadData(tflite::ErrorReporter* error_reporter) {
  // This represents the total length of the JPEG data
  jpeg_length = myCAM.read_fifo_length();
  TF_LITE_REPORT_ERROR(error_reporter, "Reading %d bytes from Arducam",
                       jpeg_length);
  // Ensure there's not too much data for our buffer
  if (jpeg_length > MAX_JPEG_BYTES) {
    TF_LITE_REPORT_ERROR(error_reporter, "Too many bytes in FIFO buffer (%d)",
                         MAX_JPEG_BYTES);
    return kTfLiteError;
  }
  if (jpeg_length == 0) {
    TF_LITE_REPORT_ERROR(error_reporter, "No data in Arducam FIFO buffer");
    return kTfLiteError;
  }
  myCAM.CS_LOW();
  myCAM.set_fifo_burst();
  for (uint32_t index = 0; index < jpeg_length; index++) {
    jpeg_buffer[index] = SPI.transfer(0x00);
  }
  delayMicroseconds(15);
  TF_LITE_REPORT_ERROR(error_reporter, "Finished reading");
  myCAM.CS_HIGH();
  return kTfLiteOk;
}

// Decode the JPEG image, crop it, and convert it to greyscale
TfLiteStatus DecodeAndProcessImage(tflite::ErrorReporter* error_reporter,
                                   int image_width, int image_height,
                                   int8_t* image_data) {
  TF_LITE_REPORT_ERROR(error_reporter,
                       "Decoding JPEG and converting to greyscale");
  // Parse the JPEG headers. The image will be decoded as a sequence of Minimum
  // Coded Units (MCUs), which are 16x8 blocks of pixels.
  JpegDec.decodeArray(jpeg_buffer, jpeg_length);

  // Crop the image by keeping a certain number of MCUs in each dimension
  const int keep_x_mcus = image_width / JpegDec.MCUWidth;
  const int keep_y_mcus = image_height / JpegDec.MCUHeight;

  // Calculate how many MCUs we will throw away on the x axis
  const int skip_x_mcus = JpegDec.MCUSPerRow - keep_x_mcus;
  // Roughly center the crop by skipping half the throwaway MCUs at the
  // beginning of each row
  const int skip_start_x_mcus = skip_x_mcus / 2;
  // Index where we will start throwing away MCUs after the data
  const int skip_end_x_mcu_index = skip_start_x_mcus + keep_x_mcus;
  // Same approach for the columns
  const int skip_y_mcus = JpegDec.MCUSPerCol - keep_y_mcus;
  const int skip_start_y_mcus = skip_y_mcus / 2;
  const int skip_end_y_mcu_index = skip_start_y_mcus + keep_y_mcus;

  // Pointer to the current pixel
  uint16_t* pImg;
  // Color of the current pixel
  uint16_t color;

  // Loop over the MCUs
  while (JpegDec.read()) {
    // Skip over the initial set of rows
    if (JpegDec.MCUy < skip_start_y_mcus) {
      continue;
    }
    // Skip if we're on a column that we don't want
    if (JpegDec.MCUx < skip_start_x_mcus ||
        JpegDec.MCUx >= skip_end_x_mcu_index) {
      continue;
    }
    // Skip if we've got all the rows we want
    if (JpegDec.MCUy >= skip_end_y_mcu_index) {
      continue;
    }
    // Pointer to the current pixel
    pImg = JpegDec.pImage;

    // The x and y indexes of the current MCU, ignoring the MCUs we skip
    int relative_mcu_x = JpegDec.MCUx - skip_start_x_mcus;
    int relative_mcu_y = JpegDec.MCUy - skip_start_y_mcus;

    // The coordinates of the top left of this MCU when applied to the output
    // image
    int x_origin = relative_mcu_x * JpegDec.MCUWidth;
    int y_origin = relative_mcu_y * JpegDec.MCUHeight;

    // Loop through the MCU's rows and columns
    for (int mcu_row = 0; mcu_row < JpegDec.MCUHeight; mcu_row++) {
      // The y coordinate of this pixel in the output index
      int current_y = y_origin + mcu_row;
      for (int mcu_col = 0; mcu_col < JpegDec.MCUWidth; mcu_col++) {
        // Read the color of the pixel as 16-bit integer
        color = *pImg++;
        // Extract the color values (5 red bits, 6 green, 5 blue)
        uint8_t r, g, b;
        r = ((color & 0xF800) >> 11) * 8;
        g = ((color & 0x07E0) >> 5) * 4;
        b = ((color & 0x001F) >> 0) * 8;
        // Convert to grayscale by calculating luminance
        // See https://en.wikipedia.org/wiki/Grayscale for magic numbers
        float gray_value = (0.2126 * r) + (0.7152 * g) + (0.0722 * b);

        // Convert to signed 8-bit integer by subtracting 128.
        gray_value -= 128;

        // The x coordinate of this pixel in the output image
        int current_x = x_origin + mcu_col;
        // The index of this pixel in our flat output buffer
        int index = (current_y * image_width) + current_x;
        index_test = index;
        image_data[index] = static_cast<int8_t>(gray_value);
        // Mirror the pixel only if a secondary buffer was provided.
        if (image_send != nullptr) {
          image_send[index] = static_cast<int8_t>(gray_value);
        }
      }
    }
  }
  TF_LITE_REPORT_ERROR(error_reporter, "Image decoded and processed");
  return kTfLiteOk;
}

// Get an image from the camera module
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data) {
  static bool g_is_camera_initialized = false;
  if (!g_is_camera_initialized) {
    TfLiteStatus init_status = InitCamera(error_reporter);
    if (init_status != kTfLiteOk) {
      TF_LITE_REPORT_ERROR(error_reporter, "InitCamera failed");
      return init_status;
    }
    g_is_camera_initialized = true;
  }

  TfLiteStatus capture_status = PerformCapture(error_reporter);
  if (capture_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "PerformCapture failed");
    return capture_status;
  }

  TfLiteStatus read_data_status = ReadData(error_reporter);
  if (read_data_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "ReadData failed");
    return read_data_status;
  }

  TfLiteStatus decode_status = DecodeAndProcessImage(
      error_reporter, image_width, image_height, image_data);
  if (decode_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "DecodeAndProcessImage failed");
    return decode_status;
  }
  Wire.end();
  digitalWrite(CS, LOW);
  
  return kTfLiteOk;
}

void initialize_camera(tflite::ErrorReporter* error_reporter, int image_width,
                       int image_height, int channels, int8_t* image_data) {
  TfLiteStatus init_status = InitCamera(error_reporter);
  if (init_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "InitCamera failed");
    return;  // void function, so just bail out
  }
  
  TfLiteStatus capture_status = PerformCapture(error_reporter);
  if (capture_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "PerformCapture failed");
    return;
  }

  TfLiteStatus read_data_status = ReadData(error_reporter);
  if (read_data_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "ReadData failed");
    return;
  }

  TfLiteStatus decode_status = DecodeAndProcessImage(
      error_reporter, image_width, image_height, image_data);
  if (decode_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "DecodeAndProcessImage failed");
    return;
  }
  Wire.end();
  digitalWrite(CS, LOW);
}

#endif  // ARDUINO_EXCLUDE_CODE

This is the code I am using for my camera. When the camera is not connected, the sleep-state consumption is normal, so something on the camera side must be causing the increase, but I cannot figure out what. I thought it was SPI, but now I am not sure. Maybe you can spot something here that is an issue? :confused:

https://forum.arduino.cc/t/irf520n-mosfet-with-arduino-nano-33-ble-sense/

I know that's my post, and as I mentioned there, I am able to switch it off, but there is still something the MOSFET cannot control.

What about the camera drawing extra current?

Please provide a link to the documentation stating that SPI.end() reduces power consumption.

Could be that you need a 'high-side' switch.
Try powering the camera without any transistor. If you can open/close its VCC (or +V, whatever you want to call it) and thereby avoid the trouble, that would be telling.

In my case, the transistor opens/closes GND, which lets us decrease the current from 120 mA to 16 mA. But I want to turn the camera off completely and stay in the sleep state (~320 µA) until I need it again. Do you think switching VCC instead would achieve that?

I tested the setup in 3 ways:

  1. I disconnected all pins except VCC and GND and consumption was: 102mA
  2. Setup with the transistor: 11mA in sleep state between camera tasks
  3. Setup without VCC and GND (only SPI and other wires): 4.09mA

I do not know where the problem is, and the camera cannot simply go into a deep sleep state the way the board can (by turning the camera off completely and turning it on only when needed) :confused:

What would be the best option for a P-channel MOSFET in this case?

Everything / Anything available will be SMD/SMT.

Could one of these load switches be a possible solution:

  1. https://www.ti.com/tool/TPS22919EVM
  2. TPS22917EVM Evaluation board | TI.com

Sure.
Lots of real estate (est. 2 x 3 in)

Perfect, I have already ordered these two. I hope they work better than the N-channel MOSFET I tried before. Thanks for the advice!

Hi @runaway_pancake, me again after some time :slight_smile: In the meantime I received both load switches and tested them with my setup. Both boards decrease the current consumption from 12 mA (the case with the N-channel logic-level MOSFET) to 1.2 mA, but I still cannot reach the sleep-state consumption (around 300 µA). Do you know of anything else that could cause this behavior, or is this simply the lowest achievable while the camera is connected? Thanks in advance!

That's a tenfold reduction.
So it's working.
Is that the total current of the system, or just the current between the camera Vin and the load switch output?

Yes indeed, it works much better compared to the MOSFETs. This is the current of the board in sleep mode, waiting to call the camera task again. The classic sleep state I am getting right now is around 300 µA (which is still far more than the verified 11 µA), and that is with the camera turned off. And just as a reminder: when the camera is connected, the current is 1.2 mA. What could cause this increase? Do you have any idea?

This "sleep current" figure, 1.2 mA — what is the current at the node between the load switch and the camera?
Measure between Vsource and LS_in, and between LS_out and camera Vin.
That will require a quality milliammeter (microammeter).