How do you set the integration time for a camera?

I'm using the TSL1401R-LF Linescan Camera Module, which reads in a 1 × 128 line of pixels, and I'm having trouble understanding how the integration time is set for this device.

I understand from page 9 of the datasheet that the minimum integration time is fixed by the clock speed, but I don't fully understand when this integration period actually takes place.

According to the timing waveform (Figure 1 on page 5), the integration period appears to begin after the first 18 clock cycles. Does this mean that the integration period runs while the camera is outputting pixels?

This is how I am currently programming the camera. I'm using delayMicroseconds() after the HIGH and LOW clock pulses as well as at the end of the read-out. The delays add up to more than the minimum 33.75 µs integration time, but I'm still having sensitivity problems.

    int delayTime = 20;   // microseconds per clock phase
    
    void readPixels()  
    {
      // Start a new read-out: SI must be high before the rising edge of CLK.
      digitalWriteFast(SI, HIGH);
      delayMicroseconds(delayTime / 2);
      digitalWriteFast(CLK, HIGH);
      delayMicroseconds(delayTime / 2);
      digitalWriteFast(SI, LOW);
      delayMicroseconds(delayTime / 2);
      digitalWriteFast(CLK, LOW);
      delayMicroseconds(delayTime);
    
      // Clock out the 128 pixels, sampling all three cameras on each clock.
      for (int i = 0; i < 128; i++)
      { 
        digitalWriteFast(CLK, HIGH);
        pixelsArray1[i] = analogRead(Cam1Aout);
        pixelsArray2[i] = analogRead(Cam2Aout);
        pixelsArray3[i] = analogRead(Cam3Aout);
        delayMicroseconds(delayTime);
        digitalWriteFast(CLK, LOW);
        delayMicroseconds(delayTime);
      }
    
      // 129th clock pulse to terminate the read-out cycle.
      digitalWriteFast(CLK, HIGH);
      delayMicroseconds(delayTime);
      digitalWriteFast(CLK, LOW);
      delayMicroseconds(delayTime);
    
      delayMicroseconds(20);   // final pause before returning
    }
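
For reference, here is roughly how I would restructure things if the integration time really is controlled by the gap between SI pulses rather than by the per-clock delays. This is only my interpretation of the datasheet, not something I have verified. readPixels() and the pin names are from my code above, and exposureMicros is just a name I made up for this sketch:

    // Sketch of my interpretation: the exposure for the next frame accumulates
    // while (and after) the current frame is clocked out, so the knob would be
    // the pause between the end of one readPixels() call and the next SI pulse.
    // exposureMicros is a hypothetical name, not from the datasheet.
    unsigned long exposureMicros = 5000;   // extra integration time per frame (guess)
    
    void loop()
    {
      readPixels();                        // clocks out the previous exposure
      delayMicroseconds(exposureMicros);   // let the next exposure accumulate longer
    }

Is that the right way to think about it, or does the integration time have to be built into the read-out loop itself?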