Nicolas Pitre <nico@xxxxxxx> writes:

> 		if (inscnt) {
> +			while (moff && ref_data[moff-1] == data[-1]) {
> +				if (msize == 0x10000)
> +					break;
> +				/* we can match one byte back */
...
> +					break;
> +			}
> 			out[outpos - inscnt - 1] = inscnt;

Once you make it into a patch form, it is plainly obvious that
this is a good optimization.  Since our BLK_SIZE is 16 bytes, you
are grabbing up to 15 more bytes (on average 8 more bytes or so)
for every match after a partially modified block.  Very nice.

I wonder if a larger BLK_SIZE (say 32 bytes) would give us faster
packing without losing much compression if we use this idea.