On Thu, Oct 13, 2016 at 03:18:02PM +0200, Jan Kara wrote:
> On Thu 13-10-16 15:08:44, Kirill A. Shutemov wrote:
> > On Thu, Oct 13, 2016 at 11:44:41AM +0200, Jan Kara wrote:
> > > On Thu 15-09-16 14:54:59, Kirill A. Shutemov wrote:
> > > > We write back the whole huge page at a time.
> > >
> > > This is one of the things I don't understand. Firstly, I didn't see
> > > where writeback changes like this would happen (maybe they come
> > > later). Secondly, I'm not sure why e.g. writeback should behave
> > > atomically wrt huge pages. Is this because the radix-tree multiorder
> > > entry tracks dirtiness for us at that granularity?
> >
> > We track dirty/writeback per compound page: we have one
> > dirty/writeback flag for the whole compound page, not one for every
> > individual 4k subpage. The same goes for radix-tree tags.
> >
> > > BTW, can you also explain why we need multiorder entries? What do
> > > they solve for us?
> >
> > They give us a coherent view of tags in the radix-tree: no matter
> > which index we look up within the range the huge page covers, we get
> > the same answer about which tags are set.
>
> OK, I understand that. But why do we need a coherent view? For which
> purposes exactly do we care that it is not just a bunch of 4k pages that
> happen to be physically contiguous and thus can be mapped in one PMD?

My understanding is that things like PageDirty() should be handled at the
same granularity as PAGECACHE_TAG_DIRTY, otherwise things can go horribly
wrong...

-- 
 Kirill A. Shutemov
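
[Editor's note: to make the "coherent view" point above concrete, here is a
minimal user-space sketch. It is a toy model, not the kernel's actual
radix-tree API; struct entry, lookup(), and the field names are invented for
illustration. The idea it demonstrates: one multiorder entry covers all 512
subpage indices of a 2M huge page, so a tag lookup at any index in that
range reads the same flags.]

#include <stdio.h>
#include <stdbool.h>

#define HPAGE_ORDER 9
#define HPAGE_NR    (1UL << HPAGE_ORDER)   /* 512 subpages per 2M page */

/* Toy multiorder entry: one entry, one set of tags for the whole range. */
struct entry {
    unsigned long base;   /* first index covered */
    unsigned int  order;  /* entry covers 2^order indices */
    bool dirty;           /* single dirty tag for the whole range */
    bool writeback;       /* single writeback tag for the whole range */
};

/* Any index inside [base, base + 2^order) resolves to the same entry,
 * which is what makes the view of the tags coherent. */
static struct entry *lookup(struct entry *e, unsigned long index)
{
    unsigned long nr = 1UL << e->order;

    if (index >= e->base && index < e->base + nr)
        return e;
    return NULL;
}

int main(void)
{
    struct entry huge = { .base = 0, .order = HPAGE_ORDER, .dirty = true };

    /* Indices 0 and 511 are different 4k subpages, but they hit the
     * same entry, so they must agree on whether the page is dirty. */
    printf("index 0:   dirty=%d\n", lookup(&huge, 0)->dirty);
    printf("index 511: dirty=%d\n", lookup(&huge, HPAGE_NR - 1)->dirty);
    return 0;
}

With per-4k entries instead, writeback could mark some subpages clean while
others in the same compound page stayed dirty, and PageDirty() on the
compound page could then disagree with PAGECACHE_TAG_DIRTY for some of the
indices it covers.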