```cpp
void HTTPClient::setConnectTimeout(int32_t connectTimeout)
{
    _connectTimeout = connectTimeout;
}
```

That timeout is only used once, when trying to connect:

```cpp
if(!_client->connect(_host.c_str(), _port, _connectTimeout)) {
    log_d("failed connect to %s:%u", _host.c_str(), _port);
    return false;
}
```
`HTTPClient` tries to follow HTTP/1.1 and reuse the connection if you make multiple calls (unless you `setReuse(false)`). So that will help, but the connection is likely not the problem. You might call `esp_task_wdt_reset` after `begin()` and again after `GET()` returns `200`, to mark progress.
```cpp
void HTTPClient::setTimeout(uint16_t timeout)
{
    _tcpTimeout = timeout;
    if(connected()) {
        _client->setTimeout((timeout + 500) / 1000);
    }
}
```
(The underlying `WiFiClient` has its own timeout, in whole seconds and stored as a `uint32_t` -- a uselessly long time.) The other timeout is used initially, to read the response line and headers:

```cpp
if((millis() - lastDataTime) > _tcpTimeout) {
    return HTTPC_ERROR_READ_TIMEOUT;
}
```

which returns `-11` instead of `200` (or whatever) if it's been too long since the last bunch of bytes -- not total time:

```cpp
while(connected()) {
    size_t len = _client->available();
    if(len > 0) {
        String headerLine = _client->readStringUntil('\n');
        headerLine.trim(); // remove \r
        lastDataTime = millis();
        // ...
```
The only other usage offers a clue, though (with the same `_client->setTimeout` as before):

```cpp
// set Timeout for WiFiClient and for Stream::readBytesUntil() and Stream::readStringUntil()
_client->setTimeout((_tcpTimeout + 500) / 1000);
```
`readStringUntil` is basically the same as `readString`:

```cpp
String Stream::readString()
{
    String ret;
    int c = timedRead();
    while(c >= 0) {
        ret += (char) c;
        c = timedRead();
    }
    return ret;
}

String Stream::readStringUntil(char terminator)
{
    String ret;
    int c = timedRead();
    while(c >= 0 && c != terminator) {
        ret += (char) c;
        c = timedRead();
    }
    return ret;
}
```
They both use `timedRead`:

```cpp
// private method to read stream with timeout
int Stream::timedRead()
{
    int c;
    _startMillis = millis();
    do {
        c = read();
        if(c >= 0) {
            return c;
        }
    } while(millis() - _startMillis < _timeout);
    return -1; // -1 indicates timeout
}
```
And this is another use of the per-byte timeout. When it expires, `Stream::readString` just stops and returns what it has so far, with no indication of failure. That's what happens with `httpClient.getStream().readString()`. In comparison, `httpClient.getString()`

```cpp
String HTTPClient::getString(void)
{
    // _size can be -1 when Server sends no Content-Length header
    if(_size > 0 || _size == -1) {
        StreamString sstring;
        // try to reserve needed memory (noop if _size == -1)
        if(sstring.reserve((_size + 1))) {
            writeToStream(&sstring);
            return sstring;
        } else {
            log_d("not enough memory to reserve a string! need: %d", (_size + 1));
        }
    }
    return "";
}
```

also returns a `String` with no indication of error. It calls `writeToStream`, ignoring the return value:
```cpp
/**
 * write all message body / payload to Stream
 * @param stream Stream *
 * @return bytes written ( negative values are error codes )
 */
int HTTPClient::writeToStream(Stream * stream)
```

One of those errors is

```cpp
if(chunkHeader.length() <= 0) {
    return returnError(HTTPC_ERROR_READ_TIMEOUT);
}
// ...
// read trailing \r\n at the end of the chunk
char buf[2];
auto trailing_seq_len = _client->readBytes((uint8_t*)buf, 2);
if (trailing_seq_len != 2 || buf[0] != '\r' || buf[1] != '\n') {
    return returnError(HTTPC_ERROR_READ_TIMEOUT);
}
```

which occur only with `chunked` encoding.
So no, it doesn't look like you're missing anything. Reviewing all this code, though, there is a simple-enough workaround; it just needs the right place to override.
```cpp
#include <StreamString.h>

class WatchedStreamString : public StreamString {
    size_t write(const uint8_t *buffer, size_t size) override {
        feedLoopWDT();
        Serial.print("writing "); // or log_d
        Serial.println(size);
        return StreamString::write(buffer, size);
    }
};
```

Add that subclass to the sketch; then the usage is instead:
```cpp
http.setTimeout(2500); // lower than WDT, so end-of-stream with no Content-Length is handled
// server's local IP -- here --
if (!http.begin("http://10.0.0.231:8080/?init=1&part=9&wait=2")) {
    Serial.println("! begin");
    return;
}
feedLoopWDT(); // making progress!

auto start = millis();
int status = http.GET();
feedLoopWDT(); // more progress!
Serial.println();
Serial.println(millis() - start);
Serial.println(status);

WatchedStreamString wss;
auto size = http.getSize();
if (size > 0) {
    if (wss.reserve(size)) {
        Serial.print("reserved ");
        Serial.println(size);
    } else {
        Serial.println("uh oh");
    }
}
// http.getString();
http.writeToStream(&wss); // progress with each block... not enough if it's a trickle
Serial.println(millis() - start);
Serial.println(wss);
Serial.println("done");
```
Note that `setTimeout` must be less than the WDT period. They both default to five seconds. If the response carries a `Content-Length` header, uses `Transfer-Encoding: chunked`, or honors `Connection: close`, then `HTTPClient` can accurately detect the last byte. Otherwise it will wait for more data before giving up, and you don't want to trigger the WDT right at the very end, when you're already done.