Right, but I think production cameras (at least the cheaper ones) pass the data straight through to the computer, where there is plenty of storage. I might be wrong, though.
The reason I assume this is that I'm familiar with most high-end processing software for astrophotography, MaxIm DL in particular. The image from the camera is in a raw format: the data is nothing but the readout level of each pixel. In a 6-megapixel image I can zero in on pixel #4,530,285 and get the exact value registered there. If the camera digitizes to 256 levels, I'd get 0-20 or so for black sky, maybe 80-90 for a nebula sample, etc. You then apply color to the entire image artificially in the software (the CCD is monochrome, so you put a very high quality optical filter in front of it that corresponds to the R, G, or B frame you are capturing). Or you might shoot a luminance image with no filtration and apply that luminance layer to a finished RGB composite to give it more detail.
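To make the RGB/luminance idea concrete, here's a minimal sketch of that kind of compositing in Python with numpy. Everything here is illustrative: the frame size, the random pixel values, and the simple brightness-matching blend are my own assumptions, not how MaxIm DL actually implements it.

```python
import numpy as np

# Hypothetical 8-bit monochrome frames, one per optical filter (R, G, B)
# plus an unfiltered luminance frame. Sizes and values are made up.
h, w = 4, 6
rng = np.random.default_rng(0)
r = rng.integers(0, 256, (h, w)).astype(np.float64)
g = rng.integers(0, 256, (h, w)).astype(np.float64)
b = rng.integers(0, 256, (h, w)).astype(np.float64)
lum = rng.integers(0, 256, (h, w)).astype(np.float64)

# Stack the three filtered exposures into an RGB composite.
rgb = np.stack([r, g, b], axis=-1)

# Apply the luminance layer: rescale each pixel's color so its brightness
# matches the unfiltered exposure -- a crude stand-in for an LRGB blend.
brightness = rgb.mean(axis=-1, keepdims=True)
lrgb = rgb * (lum[..., None] / np.maximum(brightness, 1.0))
lrgb = np.clip(lrgb, 0, 255)
```

The point is just that color never comes from the chip itself; it's reconstructed in software from separate monochrome exposures.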
So all the camera needs to do is measure the value of each pixel and send it to the software, where it is assembled into a rectangular raw image.
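That assembly step is trivial to sketch. Assuming (my assumption, for illustration) a 3000x2000 sensor that streams one value per pixel in scan order, the software just reshapes the stream into a rectangle:

```python
import numpy as np

# Hypothetical flat readout stream: one 8-bit value per pixel, in scan order.
width, height = 3000, 2000  # a 6-megapixel sensor, for illustration
readout = (np.arange(width * height, dtype=np.uint32) % 256).astype(np.uint16)

# "Assemble" the stream into a rectangular raw image.
raw = readout.reshape(height, width)

# Zero in on a single pixel by its flat index, as described above.
idx = 4_530_285
row, col = divmod(idx, width)
assert raw[row, col] == readout[idx]
```

No color, no compression; the raw image is just the readout values laid out in rows.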