On Mon, Oct 22, 2012 at 1:43 PM, Greg KH <greg@xxxxxxxxx> wrote:
> On Wed, Oct 10, 2012 at 05:42:18PM -0700, Nitin Gupta wrote:
>> Change 130f315a (staging: zram: remove special handle of uncompressed page)
>> introduced a bug in the handling of incompressible pages which resulted in
>> memory allocation failure for such pages.
>>
>> When a page expands on compression, say from 4K to 4K+30, we were trying to
>> do zsmalloc(pool, 4K+30). However, the maximum size which zsmalloc can
>> allocate is PAGE_SIZE (for obvious reasons), so such allocation requests
>> always return failure (0).
>>
>> For a page whose compressed size is larger than the original size (this may
>> happen with already compressed or random data), there is no point storing
>> the compressed version, as that would take more space and would also require
>> time for decompression when needed again. So, the fix is to store any page
>> whose compressed size exceeds a threshold (max_zpage_size) as-is, i.e.
>> without compression. Memory required for storing this uncompressed page can
>> then be requested from zsmalloc, which supports PAGE_SIZE sized allocations.
>>
>> Lastly, the fix checks that we do not attempt to "decompress" a page which
>> was stored in the uncompressed form -- we just memcpy() out such pages.
>
> So this fix needs to go to the stable 3.6 release also, right?
>

Forgot to mention -- yes, this needs to be in 3.6 also.

Thanks,
Nitin
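
For anyone following along, here is a rough sketch (not the actual patch) of the store/load logic the description above implies. Only max_zpage_size, PAGE_SIZE, zsmalloc and the memcpy() fallback come from the description itself; helper names such as zram_store_page()/zram_load_page() and fields like zram->compress_buffer, zram->compress_workmem and zram->table[index].size are illustrative placeholders and may not match the real driver:

/*
 * Illustrative sketch of the fix described above -- not the actual patch.
 * Write path: if the compressed size exceeds max_zpage_size, fall back to
 * storing the raw PAGE_SIZE bytes so zs_malloc() is never asked for more
 * than PAGE_SIZE. Read path: a stored size of PAGE_SIZE means the page was
 * kept uncompressed, so it is memcpy()'d out instead of being decompressed.
 */
static int zram_store_page(struct zram *zram, u32 index, void *uncmem)
{
	size_t clen = PAGE_SIZE;
	unsigned char *cmem, *src = zram->compress_buffer;	/* hypothetical field */
	unsigned long handle;

	lzo1x_1_compress(uncmem, PAGE_SIZE, src, &clen,
			 zram->compress_workmem);		/* hypothetical field */

	if (clen > max_zpage_size) {
		/* Incompressible page: store it uncompressed, as-is. */
		clen = PAGE_SIZE;
		src = uncmem;
	}

	/* Never asks zsmalloc for more than PAGE_SIZE. */
	handle = zs_malloc(zram->mem_pool, clen);
	if (!handle)
		return -ENOMEM;

	cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
	memcpy(cmem, src, clen);
	zs_unmap_object(zram->mem_pool, handle);

	zram->table[index].handle = handle;
	zram->table[index].size = clen;
	return 0;
}

static int zram_load_page(struct zram *zram, u32 index, void *uncmem)
{
	size_t dlen = PAGE_SIZE;
	unsigned char *cmem = zs_map_object(zram->mem_pool,
					    zram->table[index].handle, ZS_MM_RO);
	int ret = 0;

	if (zram->table[index].size == PAGE_SIZE) {
		/* Page was stored uncompressed; do not "decompress" it. */
		memcpy(uncmem, cmem, PAGE_SIZE);
	} else {
		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
					    uncmem, &dlen);
	}

	zs_unmap_object(zram->mem_pool, zram->table[index].handle);
	return ret;
}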