On Thu, Sep 26, 2013 at 1:58 PM, Minchan Kim <minchan@xxxxxxxxxx> wrote:
> Hello Weijie,
>
> On Wed, Sep 25, 2013 at 05:33:43PM +0800, Weijie Yang wrote:
>> On Wed, Sep 25, 2013 at 4:31 PM, Bob Liu <lliubbo@xxxxxxxxx> wrote:
>> > On Wed, Sep 25, 2013 at 4:09 PM, Weijie Yang <weijie.yang.kh@xxxxxxxxx> wrote:
>> >> I think I have found a new issue; to keep this mail thread complete,
>> >> I am replying to this mail.
>> >>
>> >> It is also a concurrency issue, when a duplicate store and a reclaim
>> >> run concurrently.
>> >>
>> >> zswap entry x with offset A is already stored in the zswap backend.
>> >> Consider the following scenario:
>> >>
>> >> thread 0: reclaims entry x (gets the refcount, but has not yet called
>> >> zswap_get_swap_cache_page)
>> >>
>> >> thread 1: stores a new page with the same offset A, allocating a new
>> >> zswap entry y. The store finishes, shrink_page_list() calls
>> >> __remove_mapping(), and now the page is not in the swap cache.
>> >>
>> >
>> > But I don't think the swap layer will call zswap with the same offset A.
>>
>> 1. Store a page at offset A in zswap.
>> 2. Some time later, a page fault occurs and the page data is loaded from
>> zswap. But note that zswap entry x is still in zswap because
>> frontswap_tmem_exclusive_gets_enabled is not set.
>
> frontswap_tmem_exclusive_gets_enabled is just an option to trade off CPU
> burned by frequent swapout against the memory footprint of a duplicate
> copy in the swap cache and the frontswap backend, so it shouldn't affect
> stability.

Thanks for explaining this.
I don't mean that this option affects stability, only that zswap implements
just one of the two options. Maybe it would be better to implement both, for
different workloads.

>> The page has PageSwapCache(page) set and page_private(page) == entry.val.
>> 3. The page data is changed and the page becomes dirty.
>
> If a non-shared swapin page is redirtied, it should be removed from the
> swap cache. If a shared swapin page is redirtied, it should be CoWed, so it
> is a new page that does not live in the swap cache. That means it gets a
> new offset, different from the old one, when it is swapped out.
>
> What's wrong with that?

You are right, this is not a valid scenario for a duplicate store, and I
cannot think of one. If a duplicate store is impossible, how about deleting
the code that handles it in zswap? If it can happen, I think there is a
potential issue as I described.

>> 4. Some time later again, the page is swapped out to the same offset A.
>>
>> So a duplicate store happens.
>>
>> What I can think of is to use flags and CAS to protect against a store and
>> a reclaim on the same offset happening concurrently.
>>
>> >> thread 0: zswap_get_swap_cache_page is called. The old page data is
>> >> added to the swap cache.
>> >>
>> >> Now the swap cache has the old data rather than the new data for
>> >> offset A. An error will happen if do_swap_page() gets the page from
>> >> the swap cache.
>> >>
>> >
>> > --
>> > Regards,
>> > --Bob
>
> --
> Kind regards,
> Minchan Kim
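
P.S. To make the "flags and CAS" idea above a bit more concrete, here is a
minimal sketch of how a per-entry state word guarded by compare-and-swap
could serialize a duplicate store against a concurrent reclaim. This is not
actual zswap code: the zswap_entry_sketch structure, the ZSWAP_* states and
both helpers are hypothetical names used only for illustration.

/*
 * Sketch only, not real zswap code. The per-entry "state" word and the
 * ZSWAP_* values below are hypothetical.
 */
#include <linux/atomic.h>

enum zswap_entry_state {
	ZSWAP_IDLE,		/* entry is only referenced by the rb-tree */
	ZSWAP_RECLAIMING,	/* writeback has claimed the entry */
	ZSWAP_INVALID,		/* a newer store has superseded the entry */
};

struct zswap_entry_sketch {
	atomic_t state;		/* hypothetical per-entry state word */
	/* ... offset, handle, refcount as in the real struct ... */
};

/* Writeback path: claim the entry before touching the swap cache. */
static bool zswap_try_claim_for_reclaim(struct zswap_entry_sketch *e)
{
	/* CAS IDLE -> RECLAIMING; fail if a store already invalidated it. */
	return atomic_cmpxchg(&e->state, ZSWAP_IDLE,
			      ZSWAP_RECLAIMING) == ZSWAP_IDLE;
}

/* Store path: a duplicate store on the same offset marks the old entry. */
static bool zswap_invalidate_old_entry(struct zswap_entry_sketch *e)
{
	/*
	 * CAS IDLE -> INVALID; if reclaim already won, the store has to
	 * wait (or retry) until writeback drops the entry, so stale data
	 * for offset A cannot land in the swap cache after the new store.
	 */
	return atomic_cmpxchg(&e->state, ZSWAP_IDLE,
			      ZSWAP_INVALID) == ZSWAP_IDLE;
}

The point is only that whichever path wins the CAS decides the entry's fate,
so the writeback path cannot add old data for offset A to the swap cache
once a newer store on the same offset has superseded it.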