> One of the many requirements for writeback is that if userspace is
> continually dirtying pages in a particular file, that shouldn't cause
> the kupdate function to concentrate on that file's newly-dirtied pages,
> neglecting pages from other files which were less-recently dirtied.
> (and dirty nodes, etc).

Sadly, I do find old pages that the flusher never gets a chance to catch
and write out. In the case below, if the task dirties pages fast enough
at the end of the file, writeback_index never gets a chance to wrap
back. There are various possible variations of this case.

	file head
	[     ***                        ==>***************]==>
	      old pages      writeback_index   fresh dirties

Ironically, the current kernel relies on pageout() to catch these old
pages, which is not only inefficient but also unreliable. If a full LRU
walk takes an hour, the old pages may stay dirty for an hour.

We may have to do (conditional) tagged ->writepages to safeguard users
from losing data they'd expect to have been written out hours ago. A
rough sketch of what I mean is appended below.

Thanks,
Fengguang
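
To make the idea a bit more concrete, here is a minimal sketch of a
tagged kupdate pass. It is not a real patch: the helper
walk_towrite_pages() and the function it lives in are made up for
illustration; only tag_pages_for_writeback() and the TOWRITE/DIRTY
radix tree tags are existing kernel interfaces.

/*
 * Sketch only, not a real patch.  walk_towrite_pages() is a made-up
 * stand-in for a write_cache_pages()-style walk over the TOWRITE tag.
 */
static void kupdate_writepages_tagged(struct address_space *mapping,
				      struct writeback_control *wbc)
{
	pgoff_t start = mapping->writeback_index;
	pgoff_t end = -1;	/* through the end of file */

	/*
	 * Snapshot the pages that are dirty right now: move them from
	 * PAGECACHE_TAG_DIRTY to PAGECACHE_TAG_TOWRITE.  Pages dirtied
	 * after this point remain only DIRTY and are ignored below.
	 */
	tag_pages_for_writeback(mapping, start, end);

	/*
	 * Walk the TOWRITE tag instead of the DIRTY tag, so the fresh
	 * dirties at the end of the file cannot keep the walk busy
	 * forever and writeback_index is free to wrap back to the old
	 * pages near the file head.
	 */
	walk_towrite_pages(mapping, wbc);
}

The interesting part is the condition for taking this path, i.e. only
when plain cyclic ->writepages is demonstrably starving old pages; that
is left open here.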