djoxy:
Is this what is called compression, or just an encoding?
This is called the "UDL" algorithm (Unrecoverable Data Loss).
You can easily achieve a 100% compression rate with this: compress anything into 0 bytes.
Ok, just a Friday afternoon programming joke. Sorry, couldn't resist.
The key is to be able to decompress or decode it back to what it was before.
If you can do that, you are good.
I think what you are hinting at is called "run-length encoding" (RLE).
If you see a number of repeating bytes, you can compress them into two bytes: the first is the byte value, the second is the count of repeating bytes.
TIFF image compression (PackBits) is based on this. Look it up. It is the fastest, but not the most efficient, compression algorithm. Good for images with solid backgrounds, where pixels repeat often.
Don't use it for data where repeating bytes are not frequent - your compressed files will be bigger than the original (ha-ha).
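Just to illustrate the idea, here is a minimal sketch of that (value, count) scheme in C. This is not the exact TIFF PackBits format (PackBits uses a signed header byte so it can also pass literal runs through), just the basic RLE described above:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal run-length encoder: each run of identical bytes becomes two
   bytes, the byte value followed by the run length (1-255). Returns the
   number of bytes written; 'out' must be able to hold up to 2*len bytes
   (the worst case when nothing repeats). */
size_t rle_encode(const uint8_t *in, size_t len, uint8_t *out) {
    size_t w = 0;
    for (size_t i = 0; i < len; ) {
        uint8_t value = in[i];
        size_t run = 1;
        while (i + run < len && in[i + run] == value && run < 255) {
            run++;
        }
        out[w++] = value;
        out[w++] = (uint8_t)run;
        i += run;
    }
    return w;
}

/* Matching decoder: expands (value, count) pairs back into the original
   byte stream. */
size_t rle_decode(const uint8_t *in, size_t len, uint8_t *out) {
    size_t w = 0;
    for (size_t i = 0; i + 1 < len; i += 2) {
        uint8_t value = in[i];
        uint8_t run   = in[i + 1];
        for (uint8_t k = 0; k < run; k++) {
            out[w++] = value;
        }
    }
    return w;
}
```

You can see the worst case right in the encoder: if no byte ever repeats, every input byte turns into two output bytes, which is exactly the "bigger than the original" situation mentioned above.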
arduino_new:
Impressive. You should definitely patent it before releasing it.
Ah, but to patent something means you have to disclose what it is you are patenting. And patent law prohibits patenting a bare algorithm, which is what the OP is proposing. I wonder why the OP started this thread?
The Min-Max-99 algorithm was able to compress up to 8 GB of data of any type to only 10 KB
Basically the algorithmic equivalent of perpetual motion, or the infamous "200 mile per gallon carburetor". Simply not remotely possible. No doubt the data can be compressed to any arbitrary level; it's the recovery that is the hard part, and it is simply not possible to compress arbitrary data to that level without loss. All data compression algorithms depend on finding patterns and redundancy in the data. Even ASCII text does not have enough redundancy to compress by roughly 800,000:1 (which is what 8 GB down to 10 KB works out to), except for VERY special cases.
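To put rough numbers on it, here is a back-of-the-envelope check in C; the 8 GB and 10 KB figures are taken from the quoted claim and treated as round decimal values:

```c
#include <stdio.h>

int main(void) {
    /* Approximate sizes from the claimed "8 GB to 10 KB" compression. */
    double in_bytes  = 8.0e9;   /* ~8 GB input  */
    double out_bytes = 10.0e3;  /* ~10 KB output */

    /* Claimed compression ratio: about 800,000 to 1. */
    printf("claimed ratio: about %.0f:1\n", in_bytes / out_bytes);

    /* Pigeonhole argument: a 10 KB output can carry at most 80,000 bits
       of information, while an arbitrary 8 GB input may need 64 billion
       bits. Many different inputs would have to map to the same output,
       so they could never all be recovered. */
    printf("output holds %.0f bits, input may need %.0f bits\n",
           out_bytes * 8.0, in_bytes * 8.0);
    return 0;
}
```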