I don't really understand how and why people use prescalers for the ADC. I've read that the prescaler changes the ADC conversion time, but does it also reduce the value you read? I see people connecting Vcc (about 3-5 V) directly to an analog pin and using the internal 1.1 V bandgap as the analog reference, and it seems as though the prescaler is what lets the ADC read values greater than the reference?
I don't fully understand how they're doing this. It seems like if you have a prescaler set to 3x, you'd divide the analog reading down to a third... so your 3.3 V nominal Vcc would be read as 1.1 V, which is right at your 1.1 V bandgap reference, so you'd read full scale (1023)?
The prescaler has nothing to do with the maximum measurable value, only with conversion speed. You should read your MCU's datasheet. If we're talking about an ATmega, for example, the recommended ADC input clock frequency is between 50 kHz and 200 kHz to get maximum resolution.
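For a concrete example, here is a minimal register-level sketch (avr-gcc, assuming an ATmega328P clocked at 16 MHz): a /128 prescaler gives 16 MHz / 128 = 125 kHz, which sits inside that recommended band.

```
#include <avr/io.h>

void adc_init(void)
{
    ADMUX  = _BV(REFS0);                            // AVcc as reference, channel ADC0
    ADCSRA = _BV(ADEN)                              // enable the ADC
           | _BV(ADPS2) | _BV(ADPS1) | _BV(ADPS0);  // ADC clock prescaler = 128
}

uint16_t adc_read(void)
{
    ADCSRA |= _BV(ADSC);                    // start a conversion
    loop_until_bit_is_clear(ADCSRA, ADSC);  // ADSC clears when the conversion completes
    return ADC;                             // 10-bit result, 0..1023
}
```

Note that none of this touches the reference or the result scaling; it only sets how fast the ADC is clocked.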
Second thing: a 1.1 V reference doesn't let you measure anything higher than 1.1 V. However, there is a "trick" on the ATmega: measure the internal band-gap reference voltage (as the input) against AVcc (as the reference). That's what the well-known Secret Arduino Voltmeter does.
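A rough sketch of that trick, assuming an ATmega328P (e.g. an Arduino Uno) and the Arduino core for delay(); the MUX bit pattern for the bandgap channel differs on other AVR parts:

```
long readVcc(void)
{
    // AVcc as reference, internal 1.1 V bandgap as the input (MUX[3:0] = 1110)
    ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1);
    delay(2);                     // give the reference time to settle
    ADCSRA |= _BV(ADSC);          // start a conversion
    while (ADCSRA & _BV(ADSC));   // wait until it finishes
    long reading = ADC;           // reading = 1.1 V * 1023 / Vcc

    // Solve for Vcc in millivolts: 1.1 * 1023 * 1000 is about 1125300
    return 1125300L / reading;
}
```

Since the bandgap is (nominally) fixed at 1.1 V, a lower reading means a higher Vcc, so you can back out the supply voltage without any external components.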
Prescalers are for dividing clock frequencies. They're necessary so you can configure the ADC for a consistent conversion time over a very wide range of system clock frequencies: an ATmega328P can be clocked at anything from 0 Hz to 20 MHz. The successive-approximation ADCs used in most microcontrollers are much less flexible and require a clock within a fairly narrow band for best performance; for the common AVRs used here, the recommendation is between 50 kHz and 200 kHz. If the ADC clock is too fast, the analog circuitry won't have time to settle properly, and if it's too slow, charge drains off the sample-and-hold capacitor during the conversion.
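As a back-of-the-envelope check (assuming a 16 MHz part; the numbers shift for other system clocks), this prints the ADC clock each ATmega prescaler division factor would give and flags the ones that land in the recommended band:

```
#include <stdio.h>

int main(void)
{
    const unsigned long f_cpu = 16000000UL;                   // assumed system clock
    const unsigned int  div[] = { 2, 4, 8, 16, 32, 64, 128 }; // ATmega ADC prescaler options

    for (size_t i = 0; i < sizeof div / sizeof div[0]; i++) {
        unsigned long f_adc = f_cpu / div[i];
        printf("/%3u -> %7lu Hz %s\n", div[i], f_adc,
               (f_adc >= 50000UL && f_adc <= 200000UL) ? "<- in the 50-200 kHz band" : "");
    }
    return 0;
}
```

At 16 MHz only /128 falls in range (125 kHz), which is why the stock Arduino core configures a /128 prescaler on a 16 MHz board.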