> > Are you testing for this failure scenario? If so, can you briefly
> > describe the testing?
>
> Not yet.. But one possible scheme is to record the dirty time of each
> page in a debug kernel and expose them to user space. Then we can run
> any kind of workload and meanwhile run a background scanner to collect
> and report the distribution of dirty page ages.
>
> Does it sound too heavyweight? Or we may start by reporting the dirty
> inode age first: maintain a mapping->writeback_index_wrapped_when and
> a mapping->pages_dirtied_when to follow it (or just reuse/reset
> mapping->dirtied_when?). The former will be reset to jiffies on each
> full scan of the pages; a range_whole=1 scan can maintain its start
> time in a local variable. Then we get an estimate of "what is the max
> possible dirty page age this inode has?". There will surely be
> redirtied pages though..

Hmm, the lighter scheme will fail the common "active sequential write
to a large file" case, because the full scan will never manage to come
to an end.. (I've appended a toy model of the bookkeeping below, after
my sig.)

Thanks,
Fengguang
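---
The toy model mentioned above: a small userspace program that mimics
the "lighter scheme" bookkeeping, just to make it concrete. It is not
kernel code; toy_mapping, full_scan() and max_dirty_age_seconds() are
only stand-ins, writeback_index_wrapped_when stands for the proposed
(not yet existing) field in struct address_space, and it models only
that field, not pages_dirtied_when.

#include <stdio.h>
#include <time.h>

/* Stand-in for struct address_space with the proposed field. */
struct toy_mapping {
	time_t writeback_index_wrapped_when;	/* start of last completed full scan */
};

/*
 * A range_whole=1 style scan: note the start time in a local variable
 * and commit it to the mapping only once the scan reaches the end.
 */
static void full_scan(struct toy_mapping *mapping)
{
	time_t scan_start = time(NULL);

	/* ... write out all dirty pages of the inode here ... */

	mapping->writeback_index_wrapped_when = scan_start;
}

/*
 * Upper bound on the age of any dirty page in this inode: everything
 * dirtied before the last completed full scan started has been written
 * out, ignoring redirtied pages.
 */
static double max_dirty_age_seconds(const struct toy_mapping *mapping)
{
	return difftime(time(NULL), mapping->writeback_index_wrapped_when);
}

int main(void)
{
	struct toy_mapping m = { .writeback_index_wrapped_when = time(NULL) };

	full_scan(&m);
	printf("max possible dirty page age: %.0f seconds\n",
	       max_dirty_age_seconds(&m));
	return 0;
}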