On (24/02/07 09:25), Barry Song wrote:
> From: Barry Song <v-songbaohua@xxxxxxxx>
>
> Firstly, there is no need to keep zcomp_strm's buffers physically
> contiguous.
>
> Secondly, the recent mTHP project has made it possible to swap out
> and swap in large folios. Compressing/decompressing large blocks can
> greatly decrease CPU consumption and improve the compression ratio.
> This requires zRAM to support compression and decompression of large
> objects.
>
> With large-object support in zRAM in our out-of-tree code, we have
> observed many allocation failures during CPU hotplug, as large
> objects need larger buffers. So this change also makes the code more
> future-proof once we begin to support multiple sizes in zRAM.
>
> Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>

Reviewed-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>

Note: I'm taking this in NOT because of the out-of-tree code (we don't
really do that), but because this code is executed from the CPU
offline/online paths, which can happen on devices with fragmented
memory (a valid concern IMHO).

Minchan, if you have any objections, please chime in.

> @@ -37,7 +38,7 @@ static void zcomp_strm_free(struct zcomp_strm *zstrm)
>  {
>  	if (!IS_ERR_OR_NULL(zstrm->tfm))
>  		crypto_free_comp(zstrm->tfm);
> -	free_pages((unsigned long)zstrm->buffer, 1);
> +	vfree(zstrm->buffer);
>  	zstrm->tfm = NULL;
>  	zstrm->buffer = NULL;
>  }
> @@ -53,7 +54,7 @@ static int zcomp_strm_init(struct zcomp_strm *zstrm, struct zcomp *comp)
>  	 * allocate 2 pages. 1 for compressed data, plus 1 extra for the
>  	 * case when compressed size is larger than the original one
>  	 */
> -	zstrm->buffer = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 1);
> +	zstrm->buffer = vzalloc(2 * PAGE_SIZE);
>  	if (IS_ERR_OR_NULL(zstrm->tfm) || !zstrm->buffer) {
>  		zcomp_strm_free(zstrm);
>  		return -ENOMEM;
> --
> 2.34.1
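As an aside, the buffer-sizing rationale in the patch (one page for the
compressed data plus one spare page, since "compressed" output can be
larger than the source page) can be sketched in userspace C. This is
only an illustration under assumed names: `strm_buffer_alloc()` is a
hypothetical helper, `calloc()` stands in for the kernel's `vzalloc()`
(both return zeroed memory, and neither requires the order-1
physically-contiguous allocation that `__get_free_pages(..., 1)` does),
and a 4 KiB page size is assumed.

```c
#include <stdlib.h>

#define PAGE_SIZE 4096UL  /* assumed page size for this sketch */

/*
 * Userspace stand-in for zcomp_strm_init()'s buffer allocation:
 * 2 pages total -- 1 for compressed data, plus 1 extra for the
 * case when the compressed size exceeds the original page.
 * calloc() zeroes the buffer, mirroring vzalloc()/__GFP_ZERO.
 */
static void *strm_buffer_alloc(void)
{
	return calloc(2, PAGE_SIZE);
}

/* Stand-in for the vfree() in zcomp_strm_free(). */
static void strm_buffer_free(void *buf)
{
	free(buf);
}
```

The point of the kernel-side change is exactly this relaxation: the
buffer only needs to be virtually contiguous, so under fragmentation
(e.g. during CPU offline/online on a long-running device) vzalloc()
can still succeed where an order-1 page allocation might fail.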