> One of the many requirements for writeback is that if userspace is
> continually dirtying pages in a particular file, that shouldn't cause
> the kupdate function to concentrate on that file's newly-dirtied pages,
> neglecting pages from other files which were less-recently dirtied.
> (and dirty nodes, etc).

Sadly, I do find old pages that the flusher never gets a chance to catch
and write out.  In the case below, if the task keeps dirtying pages fast
enough at the end of the file, writeback_index never gets a chance to
wrap back.  There may be various variations of this case.

    file head
    [ *** ==>***************]==>
      |   |        |
      |   |        fresh dirties
      |   writeback_index
      old pages

Ironically, the current kernel relies on pageout() to catch these old
pages, which is not only inefficient but also unreliable.  If a full LRU
walk takes an hour, the old pages may stay dirty for an hour.

We may have to do (conditional) tagged ->writepages to safeguard users
from losing data they'd expect to have been written hours ago (a rough
sketch is appended below).

Thanks,
Fengguang
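
For illustration only, a rough and untested sketch of the kind of
conditional tagged ->writepages pass meant above.  inode_dirty_too_long()
and DIRTY_DATA_MAX_AGE are invented names for this example;
tagged_writepages, PAGECACHE_TAG_TOWRITE and do_writepages() are the
existing sync livelock avoidance machinery used by write_cache_pages():

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/writeback.h>

/*
 * Sketch only: when the inode has dirty data older than some threshold,
 * turn the kupdate pass into a tagged, whole-file writepages pass so
 * that pages stranded behind ->writeback_index cannot be starved.
 */
static int kupdate_writepages(struct address_space *mapping,
			      struct writeback_control *wbc)
{
	if (wbc->for_kupdate &&
	    inode_dirty_too_long(mapping->host, DIRTY_DATA_MAX_AGE)) {
		/*
		 * write_cache_pages() will tag every page that is dirty
		 * right now (including the old ones behind
		 * ->writeback_index) with PAGECACHE_TAG_TOWRITE and
		 * write only those.  Pages dirtied after the tagging are
		 * not included, so a task busily dirtying the file tail
		 * cannot add new work to this pass.
		 */
		wbc->tagged_writepages = 1;
		wbc->range_cyclic = 0;
		wbc->range_start = 0;
		wbc->range_end = LLONG_MAX;
	}
	return do_writepages(mapping, wbc);
}

When exactly to trigger such a pass, and how it interacts with the
nr_to_write limits of the normal kupdate work, is of course the real
question.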