Floating point to scaled int conversion

I have a function below that uses floating-point numbers D1 and D2. They are coefficients in an equation.

I don't want to use floats because I'm constrained for memory on the embedded platform I'm developing for (the floating-point library is large). I would like to use only ints, so the function would return an int and use ints in the calculation.

For example, instead of 22.95 degrees the function would return 229500 (the temperature scaled by 10,000).

Would anybody know how I would calculate what values D1 and D2 should become?

float SHT1x::readTemperatureC()
{
  int _val;                // Raw value returned from sensor
  float _temperature;      // Temperature derived from raw value

  // Conversion coefficients from SHT15 datasheet
  const float D1 = -40.0;  // for 14 Bit @ 5V
  const float D2 =   0.01; // for 14 Bit DEGC

  // Fetch raw value
  _val = readTemperatureRaw();

  // Convert raw value to degrees Celsius
  _temperature = (_val * D2) + D1;

  return (_temperature);
}
int SHT1x::readTemperatureC()
{
  int _val;                // Raw value returned from sensor
  int _temperature;        // Temperature derived from raw value

  // Conversion coefficients from SHT15 datasheet
  const int D1 = ?;  // for 14 Bit @ 5V
  const int D2 = ?; // for 14 Bit DEGC

  // Fetch raw value
  _val = readTemperatureRaw();

  // Convert raw value to degrees Celsius
  _temperature = (_val * D2) + D1;

  return (_temperature);
}

Well, if you want to use an int you can't have it go up to 229,500 like you give as an example. On an 8-bit platform like this, an int is 16 bits: -32768 to 32767. You'd need to multiply by a smaller scale factor (which seems reasonable anyway, since those sensors aren't accurate enough to justify four decimal places) or use a long.
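If you do want the x10,000 scale from your example, here is a minimal sketch using long (assuming the raw value is passed in as a parameter, as a stand-in for readTemperatureRaw()). Note the cast to long before the multiply, since raw * 100 can overflow a 16-bit int:

```cpp
// D1 = -40.0 and D2 = 0.01 from the SHT15 datasheet, pre-multiplied by 10000.
const long D1_SCALED = -400000L; // -40.0 * 10000
const long D2_SCALED = 100L;     //  0.01 * 10000

// raw: 14-bit value from the sensor (stand-in for readTemperatureRaw())
// Returns degrees Celsius scaled by 10000, e.g. 229500 for 22.95 degC.
long readTemperatureC_x10000(int raw)
{
  // Cast before multiplying so the intermediate doesn't overflow a 16-bit int
  return ((long)raw * D2_SCALED) + D1_SCALED;
}
```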

Beyond that, just multiply D1 and D2 by the scaling factor; the result then comes out scaled by that same factor.

If the sensor is accurate to 0.01 degrees, why not just multiply by 100? The constants would then be -4000 and 1. That way you capture all the potential accuracy and still fit the result in an int.
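Filling those values into your second function gives something like this (a sketch, assuming the raw value is passed in as a parameter in place of readTemperatureRaw()):

```cpp
// D1 = -40.0 and D2 = 0.01 from the SHT15 datasheet, pre-multiplied by 100.
const int D1_SCALED = -4000; // -40.0 * 100, for 14 bit @ 5V
const int D2_SCALED = 1;     //  0.01 * 100, for 14 bit degC

// raw: 14-bit value from the sensor (stand-in for readTemperatureRaw())
// Returns degrees Celsius in hundredths, e.g. 2295 for 22.95 degC.
int readTemperatureC_x100(int raw)
{
  return (raw * D2_SCALED) + D1_SCALED;
}
```

The maximum result is 16383 * 1 - 4000 = 12383, so the whole range fits comfortably in a 16-bit int.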