On Tue, Jun 28, 2016 at 08:41:42AM +0100, Giovanni Cabiddu wrote:
>
> Are you suggesting a different cache of scratch buffers for every
> algorithm implementation or a shared cache shared across all legacy
> scomp algorithms?

One that's shared for every scomp algorithm.

> Would 128K be ok instead?
> We are proposing to use the acomp API from BTRFS. Limiting the size
> of the source and destination buffers to 64K would not work since
> BTRFS usually compresses 128KB.
> Here is the RFC sent by Weigang to the BTRFS list:
> http://www.spinics.net/lists/linux-btrfs/msg56648.html

While I don't see any big difference between 64K and 128K, I have
noticed that btrfs is already doing partial decompression on a
page-by-page basis, which is the optimal setup.  So whatever we do
for this conversion, we should make sure that btrfs does not regress
into using vmalloc.

Cheers,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
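
[Editor's note: the shared scratch-buffer cache discussed above could look roughly like the following. This is a minimal userspace sketch only, assuming hypothetical names (`scomp_scratch`, `scomp_alloc_scratch`, `SCOMP_SCRATCH_SIZE`) and plain `malloc`; actual kernel code would use per-CPU buffers and kernel allocators rather than a single global pair.]

```c
#include <stdlib.h>

/* 128K per direction, matching the btrfs requirement discussed above. */
#define SCOMP_SCRATCH_SIZE (128 * 1024)

/* One scratch pair shared by all scomp algorithms, not one per algorithm. */
struct scomp_scratch {
	void *src;
	void *dst;
};

static struct scomp_scratch scratch;

/*
 * Lazily allocate the shared scratch buffers on first use; later
 * callers reuse the same pair.  Returns 0 on success, -1 on failure.
 */
int scomp_alloc_scratch(void)
{
	if (scratch.src)
		return 0;	/* already allocated, reuse it */

	scratch.src = malloc(SCOMP_SCRATCH_SIZE);
	scratch.dst = malloc(SCOMP_SCRATCH_SIZE);
	if (!scratch.src || !scratch.dst) {
		free(scratch.src);
		free(scratch.dst);
		scratch.src = scratch.dst = NULL;
		return -1;
	}
	return 0;
}

const void *scomp_scratch_src(void) { return scratch.src; }
```

A caller would invoke `scomp_alloc_scratch()` before compressing; a second caller gets the same buffers back instead of triggering a fresh allocation.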