Hi Mel & Seth,

On 05/21/2013 04:10 PM, Mel Gorman wrote:
> On Mon, May 20, 2013 at 10:42:25AM -0500, Seth Jennings wrote:
>> On Mon, May 20, 2013 at 02:54:39PM +0100, Mel Gorman wrote:
>>> On Sun, May 19, 2013 at 03:52:19PM -0500, Seth Jennings wrote:
>>>> My first guess is that the external fragmentation situation you are
>>>> referring to is a workload in which all pages compress to greater
>>>> than half a page. If so, then it doesn't matter what NCHUNKS_ORDER
>>>> is; there won't be any pages that compress enough to fit in the
>>>> < PAGE_SIZE/2 free space that remains in the unbuddied zbud pages.
>>>>
>>>
>>> There are numerous aspects to this, too many to write them all down.
>>> Modelling the external fragmentation one and how it affects swap IO
>>> would be a complete pain in the ass, so let's consider the following
>>> example instead as it's a bit clearer.
>>>
>>> Three processes. Process A's pages compress to 75%, Process B's pages
>>> compress to 15%, and Process C's pages compress to 15%. They are all
>>> adding to zswap in lockstep. Let's say that zswap can hold 100
>>> physical pages.
>>>
>>> NCHUNKS == 2
>>> All Process A pages get rejected.
>>
>> Ah, I think this is our disconnect. Process A pages will not be
>> rejected. They will be stored in a zbud page, and that zbud page will
>> be added to the 0th unbuddied list. This list holds zbud pages that
>> will never be buddied because there are no free chunks.
>>
>
> D'oh, good point. Unfortunately, the problem then still exists at the
> writeback end, which I didn't bring up in the previous mail.

What's your opinion on writing the whole compressed page back to the
swap disk?
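To make the unbuddied-list point above concrete, here is a small
user-space model of the bookkeeping. It is only a sketch, not the real
zbud code: the MODEL_* constants and the helper names are made up for
illustration, and the real allocator also reserves space for its header.
It just shows where pages from Mel's example would land when NCHUNKS == 2
(Process C behaves like Process B, so only B is shown):

/*
 * Simplified user-space model of zbud's unbuddied-list bookkeeping.
 * NOT the kernel code: MODEL_* constants and helper names are assumed
 * purely for illustration.
 */
#include <stdio.h>

#define MODEL_PAGE_SIZE   4096
#define MODEL_NCHUNKS     2                       /* assumed NCHUNKS_ORDER == 1 */
#define MODEL_CHUNK_SIZE  (MODEL_PAGE_SIZE / MODEL_NCHUNKS)

/* Round a compressed object up to whole chunks. */
static int size_to_chunks(int size)
{
	return (size + MODEL_CHUNK_SIZE - 1) / MODEL_CHUNK_SIZE;
}

/*
 * Index of the unbuddied list a zbud page lands on after its first
 * buddy is filled: the number of chunks still free.  Index 0 means
 * "no free chunks, will never be buddied" -- but the page is still
 * accepted, which is the point Seth made above.
 */
static int unbuddied_index(int first_obj_size)
{
	return MODEL_NCHUNKS - size_to_chunks(first_obj_size);
}

int main(void)
{
	/* Compressed sizes from Mel's example: A ~75%, B ~15% of a page. */
	int proc_a = MODEL_PAGE_SIZE * 75 / 100;
	int proc_b = MODEL_PAGE_SIZE * 15 / 100;

	printf("Process A page -> unbuddied[%d] (never buddied, still stored)\n",
	       unbuddied_index(proc_a));
	printf("Process B page -> unbuddied[%d] (can still take a buddy)\n",
	       unbuddied_index(proc_b));
	return 0;
}

With NCHUNKS == 2 this prints unbuddied[0] for Process A and
unbuddied[1] for Process B, i.e. A's pages are stored but can never be
buddied, which is where Mel's writeback concern comes in.

--
Regards,
-Bob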