# Can I use this algorithm for data compression?

My algorithm converts every 50 bytes or more into only 2 bytes of data.
In other words:

with only 2 bytes I can recover the whole 50 bytes of data.

Is this what's called compression, or just an encoding?

ex:
01010101 01010101 01010101011001101101011010010101010 … up to 50 bytes

to

10101010 10101010

You forget to show the algorithm...

Cheers,
Kari

with only 2 bytes I can recover the whole 50 bytes of data

In general, no.

djoxy:
My algorithm converts every 50 bytes or more into only 2 bytes of data.
In other words:

with only 2 bytes I can recover the whole 50 bytes of data.

Is this what's called compression, or just an encoding?

ex:
01010101 01010101 01010101011001101101011010010101010 … up to 50 bytes

to

10101010 10101010

That's a 96% compression rate. Very impressive. You could be the next billionaire.

djoxy:
Is this what's called compression, or just an encoding?

This is called "UDL" algorithm (Unrecoverable Data Loss).
You can easily achieve a 100% compression rate with this: compress anything into 0 bytes.

Ok, just a Friday afternoon programming joke. Sorry, couldn't resist.

The key is to be able to decompress or decode it back to what it was before.
If you can do that, you are good.

I think what you are hinting at is called "run-length encoding" (RLE).
If you see a number of repeating bytes, you can compress them into 2 bytes: the first is the byte value, the second is the count of repetitions.
TIFF image compression is based on this. Look it up. This is the fastest, but not most efficient, compression algorithm. Good for images with solid backgrounds, where pixels repeat often.

Don't use it for data where repeating bytes are not frequent - your compressed files will be bigger than the original (ha-ha).

It's been too long! But, I think a FAX line is also RLE encoded.

Paul
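Paul's scheme above can be sketched in a few lines. This is a minimal (count, value) pair format of my own, assuming each run length fits in one byte; real TIFF/PackBits encoding differs in detail:

```python
def rle_encode(data: bytes) -> bytes:
    """Encode as (count, value) pairs; each run is capped at 255 bytes."""
    out = bytearray()
    i = 0
    while i < len(data):
        value = data[i]
        run = 1
        while i + run < len(data) and data[i + run] == value and run < 255:
            run += 1
        out += bytes([run, value])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Expand each (count, value) pair back into a run of bytes."""
    out = bytearray()
    for i in range(0, len(data), 2):
        count, value = data[i], data[i + 1]
        out += bytes([value]) * count
    return bytes(out)

assert rle_encode(b"aaaa") == b"\x04a"
assert rle_decode(rle_encode(b"aaaabbbcx")) == b"aaaabbbcx"
# 16 distinct bytes become 32: the expansion Paul warns about.
assert len(rle_encode(bytes(range(16)))) == 32
```

Note the last assertion: data with no repeats doubles in size, which is exactly the failure mode mentioned above.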

FAX is pretty much TIFF. I have actually coded sending FAX images over a modem line. Good old days...
"Who would ever need more than 640K of RAM?"

djoxy:
My algorithm converts every 50 bytes or more into only 2 bytes of data.
In other words:

with only 2 bytes I can recover the whole 50 bytes of data.

Is this what's called compression, or just an encoding?

ex:
01010101 01010101 01010101011001101101011010010101010 … up to 50 bytes

to

10101010 10101010

I can convert a megabyte array of spaces into 0x0100000020, but that's not "compression".
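That kind of record is just a run length plus the repeated byte. A hypothetical version with a 4-byte big-endian count field (the exact layout is my assumption, not an established format):

```python
import struct

def encode_run(byte: int, count: int) -> bytes:
    # 4-byte big-endian count followed by the repeated byte value.
    return struct.pack(">IB", count, byte)

def decode_run(record: bytes) -> bytes:
    count, byte = struct.unpack(">IB", record)
    return bytes([byte]) * count

megabyte_of_spaces = b" " * (1024 * 1024)
record = encode_run(0x20, len(megabyte_of_spaces))
assert len(record) == 5                       # a megabyte in 5 bytes...
assert decode_run(record) == megabyte_of_spaces  # ...but only because it repeats
```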

Thank you so much, everyone.
The algorithm is a little complicated and will be made available to the public for development soon.

djoxy:
Thank you so much, everyone.
The algorithm is a little complicated and will be made available to the public for development soon.

MIN-MAX-99 Data compression algorithm

Impressive. You should definitely patent it before releasing it.

arduino_new:
Impressive. You should definitely patent it before releasing it.

Ah, but to patent something means you have to disclose what it is you are patenting. Patent law strictly prohibits patenting a bare algorithm, which is what the OP is proposing. I wonder why the OP started this thread?

Paul

The Min-Max-99 algorithm was able to compress up to 8 GB of data of any type to only 10 KB.

• The Min-Max-99 algorithm is still in testing because it is very complicated

Cool. Be sure to let us know when you have figured out the reverse procedure, and get every single bit of those gigabytes back!

Even better than these guys.

From the link the OP provided:

The Min-Max-99 algorithm was able to compress up to 8 GB of data of any type to only 10 KB

Basically the algorithmic equivalent of perpetual motion, or the infamous "200 mile per gallon carburetor". Simply not remotely possible. No doubt the data can be compressed to any arbitrary level. It's the recovery that is the hard part, and it is simply not possible to compress arbitrary data to that level without loss. All data compression algorithms depend on finding patterns and redundancy in the data. Even ASCII text data does not have enough redundancy to compress by 800:1, except for VERY special cases.

Regards,
Ray L.
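Ray's impossibility claim is the pigeonhole principle, and the arithmetic is easy to check: 2 bytes can distinguish only 2^16 values, while there are 2^400 distinct 50-byte inputs, so no lossless mapping from all 50-byte blocks down to 2 bytes can exist. A quick check:

```python
# Pigeonhole argument: count the distinct inputs vs. distinct codes.
inputs = 256 ** 50   # every possible 50-byte block
codes = 256 ** 2     # every possible 2-byte code (65536)

# Lossless compression needs an injective mapping, which is impossible
# when there are more inputs than codes.
assert codes < inputs
collisions = inputs // codes  # inputs forced to share each code on average
print(f"on average {collisions:.3e} inputs per 2-byte code")
```

The same counting applies at the 8 GB to 10 KB scale, only with astronomically larger numbers.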

RayLivingston:
Even ASCII text data does not have enough redundancy to compress by 800:1, except for VERY special cases.

Let alone the 800,000:1 being claimed.