> Hi, Dan.
> I reviewed quickly. So I may be wrong. :)

Hi Minchan --

Thanks for your thorough review! I don't think anyone else yet has
examined the semantics of the cleancache patch as deeply as you have.
Excellent!

> > +	/*
> > +	 * if we're uptodate, flush out into the cleancache, otherwise
> > +	 * invalidate any existing cleancache entries.  We can't leave
> > +	 * stale data around in the cleancache once our page is gone
> > +	 */
> > +	if (PageUptodate(page))
> > +		cleancache_put_page(page);
> > +	else
> > +		cleancache_flush_page(mapping, page);
>
> I doubt it's the right place related to the PFRA.

I agree it doesn't seem to be the right place, but it does work and
there doesn't seem to be a better place.

> 1)
> You mentioned the PFRA in your description, and I understood that
> cleancache holds a cold clean page which is evicted by the reclaimer.
> But __remove_from_page_cache can be called from other call sites.
>
> For example, shmem_writepage calls it to move the page from the page
> cache to the swap cache. Although the page is no longer in the page
> cache, it is in the swap cache, so subsequent reads/writes of the shmem
> page (until swapout happens) can be served from the swap cache.
>
> I didn't look into all of the call sites, but please review them again.

I think the "if (PageUptodate(page))" eliminates all the cases where
bad things can happen. Note that there may be cases where some
unnecessary puts/flushes occur. The focus of the patch is on
correctness first; it may be possible to increase performance
(marginally) in the future by reducing unnecessary cases.

> 3) Please consider system memory pressure.
> And I hope Nitin considers this, too.

This is definitely very important, but remember that cleancache
provides a great deal of flexibility: any page in cleancache can be
thrown away at any time, as every page is clean! It can even accept a
page and throw it away immediately. Clearly the backend needs to do
this intelligently, so this will take some policy work.
Since I saw you sent a separate response to Nitin, I'll let him answer
for his in-kernel page cache compression work. The solution to the
similar problem for Xen is described in the tmem internals document
that I think I pointed to earlier, here:

http://oss.oracle.com/projects/tmem/documentation/internals/

Thanks,
Dan