On Tue, Nov 12, 2013 at 4:11 AM, Bob Liu <bob.liu@xxxxxxxxxx> wrote:
>
> On 11/12/2013 03:12 AM, Dan Streetman wrote:
>> Seth, have you (or anyone else) considered making zswap a writethrough
>> cache instead of writeback?  I think that it would significantly help
>> the case where zswap fills up and starts writing back its oldest pages
>> to disk: all the decompression work would be avoided, since zswap
>> could just evict old pages and forget about them.  When zswap is full
>> is probably the worst time to add extra work/delay, while adding extra
>> disk IO (presumably using DMA) before zswap is full doesn't seem to me
>> like it would have much impact, except in the case where zswap isn't
>> full but there is so little free memory that new allocs are waiting on
>> swap-out.
>>
>> Besides the additional disk IO that obviously comes with making zswap
>> writethrough (additional only before zswap fills up), are there any
>> other disadvantages?  Is it a common situation for there to be no
>> memory left and get_free_page actively waiting on swap-out, but before
>> zswap fills up?
>>
>> Making it writethrough could also open up other possible improvements,
>> such as making the compression and storage of new swap-out pages
>> async, so the compression doesn't delay the write out to disk.
>>
>
> I like this idea and those benefits; the only question I'm not sure
> about is whether it would be too complicated to implement.  It sounds
> like we'd need to reimplement something like swapcache to handle zswap
> writethrough.

Simply converting to writethrough should be as easy as returning
non-zero from zswap_frontswap_store(), although zswap_writeback_entry()
would also need to be simplified to skip the writeback.  I don't think
it should be difficult; I'll start working on a first pass of a patch.

Thanks!
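
P.S. To make that concrete, here is a very rough sketch of the
store-side change I have in mind.  This is hypothetical only: it
assumes the current frontswap store hook, and zswap_compress_and_cache()
is a made-up helper standing in for the existing compress-and-insert
path.

/*
 * Rough sketch, not a real patch: zswap_compress_and_cache() is a
 * hypothetical helper doing the existing compress + pool-store +
 * rbtree-insert work.
 */
static int zswap_frontswap_store(unsigned type, pgoff_t offset,
				 struct page *page)
{
	int ret;

	/* Compress and cache the page in zswap as before. */
	ret = zswap_compress_and_cache(type, offset, page);
	if (ret)
		return ret;	/* couldn't cache; page just goes to disk */

	/*
	 * Writethrough: report "not stored" so the caller also writes
	 * the page to the backing swap device now.  Later, when the
	 * pool is full, old entries can simply be dropped with no
	 * decompression or writeback.  (This glosses over how frontswap
	 * tracks which offsets it holds.)
	 */
	return -1;
}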