2016-02-18 19:19 GMT+09:00 Sergey Senozhatsky <sergey.senozhatsky.work@xxxxxxxxx>:
> On (02/18/16 18:55), Sergey Senozhatsky wrote:
>> > There is a reason that it is order of 2. Increasing ZS_MAX_PAGES_PER_ZSPAGE
>> > is related to ZS_MIN_ALLOC_SIZE. If we don't have enough OBJ_INDEX_BITS,
>> > ZS_MIN_ALLOC_SIZE would be increased, and that causes a regression on some
>> > systems.
>>
>> Thanks!
>>
>> do you mean PHYSMEM_BITS != BITS_PER_LONG systems? PAE/LPAE? isn't it
>> the case that on those systems ZS_MIN_ALLOC_SIZE already bigger than 32?

Indeed.

> I mean, yes, there are ZS_ALIGN requirements that I completely ignored,
> thanks for pointing that out.
>
> just saying, not insisting on anything, theoretically, trading 32 byte
> objects in exchange for reducing a much bigger memory wastage is sort of
> interesting. zram stores objects bigger than 3072 as huge objects, leaving

I'm also just saying. :) On the example system above, which already uses a
128 byte min class, your change would push it to 160 or 192. It could cause
more trouble than you think.

> 4096-3072 bytes unused, and it'll take 4096-3072/32 = 4000 32 bit objects
> to beat that single 'bad' compression object in storing inefficiency...

Where does the 4096-3072/32 calculation come from? I'm not familiar with
the recent zsmalloc changes, such as the huge classes, so I can't follow
this calculation.

> well, patches 0001/0002 are trying to address this a bit, but the biggest
> problem is still there: we have too many ->huge classes and they are a bit
> far from good.

Agreed. And I agree with your patchset, too.

Anyway, could you answer my other questions in the original reply?

Thanks.
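
For readers following along: the coupling Minchan refers to — raising
ZS_MAX_PAGES_PER_ZSPAGE inflates ZS_MIN_ALLOC_SIZE when OBJ_INDEX_BITS
is small — can be sketched numerically. This is a rough model of the
zsmalloc macros of that era, not the real code; the constants here
(PAGE_SHIFT, a single OBJ_TAG_BITS bit, the MAX_PHYSMEM_BITS value for
x86 PAE) are illustrative assumptions.

```python
# Rough sketch of the zsmalloc ZS_MIN_ALLOC_SIZE derivation (assumed
# constants, modelled loosely on the v4.x macros -- not the real code).
PAGE_SHIFT = 12
BITS_PER_LONG = 32
MAX_PHYSMEM_BITS = 36                 # e.g. 32-bit x86 with PAE
OBJ_TAG_BITS = 1                      # one tag bit in the encoded handle

# Bits needed to store a PFN; what is left over indexes objects in a zspage.
PFN_BITS = MAX_PHYSMEM_BITS - PAGE_SHIFT          # 24
OBJ_INDEX_BITS = BITS_PER_LONG - PFN_BITS - OBJ_TAG_BITS   # 7

def zs_min_alloc_size(max_pages_per_zspage):
    """Smallest object size whose index still fits in OBJ_INDEX_BITS."""
    zspage_bytes = max_pages_per_zspage << PAGE_SHIFT
    return max(32, zspage_bytes >> OBJ_INDEX_BITS)

# Doubling the zspage order doubles the minimum class size on this config.
print(zs_min_alloc_size(4))   # current order
print(zs_min_alloc_size(8))   # after raising ZS_MAX_PAGES_PER_ZSPAGE
```

On a config like this, where the min class is already well above 32
bytes, growing the zspage order grows the smallest class further — which
is the regression being warned about.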