On Wed, Feb 25, 2015 at 05:25:06PM +0100, Jan Kara wrote:
> Yeah, that sounds reasonable. I've been thinking how to fix those time
> ordering issues and sadly it isn't trivial. We'll likely need a
> timestamp in the inode (although a coarse one is enough) remembering when
> inode was last written (i_dirtied_when needs to remember when dirty *data*
> were created so we cannot use it as I originally thought). And we'll need
> to sort inodes back into the list of inodes with dirty timestamps. It
> could be done in O(length of list + number of inodes added) if we are
> careful but it will be non-trivial.

Well, the bottom line is that the two major problems you've listed
(ignoring the cosmetic issues) both come down to the fact that some of
the inodes with dirty timestamps might not get written out until the
inodes get ejected from memory or the file system is unmounted.  This
isn't exactly a disaster; it's not going to cause data loss, or cause
the system to become unstable, no?

1) An inode that gets periodically dirtied with I_DIRTY_PAGES, cleaned,
and dirtied again will never have its updated timestamps written out
due to age, since inode->dirtied_when gets reset on each redirtying
with I_DIRTY_PAGES.

If we maintain both an i_dirtied_when and an i_dirtied_time_when field,
all we need to do is check whether i_dirtied_time_when is older than 24
hours while we are processing writeback for inodes on b_io (not just
when we are processing inodes on b_dirty_time), and update the
timestamps if necessary.

2) The code won't maintain the time ordering of the b_dirty_time list
by inode->dirtied_when.  This happens because requeue_inode() moves the
inode to the head of the b_dirty_time list, but the inodes on the b_io
list from which we move are no longer ordered by dirtied_when (that
list is combined from several lists, and we also sort it by
superblock).  As a result, the terminating logic in
move_expired_inodes() may terminate the scan too early for the
b_dirty_time list.

To solve this problem, we need to make sure the inode is inserted into
the list sorted by i_dirtied_time_when (and then move_expired_inodes()
can simply terminate its check on i_dirtied_time_when instead of
i_dirtied_when when we are scanning the b_dirty_time list).

If we don't care for this overhead, we can do the following instead, at
the cost of a bit less precision about when we write out timestamps:

a) When checking to see whether we need to write back timestamps while
processing inodes on b_io, we check not only i_dirtied_time_when, but
we also check whether the mtime is older than a day.  If so, we force
out the timestamps.  This means we could potentially push out
timestamps earlier than we should, but in the steady state the
timestamps will only be updated once a day.

b) When we move an inode from b_io to b_dirty_time, we set
i_dirtied_time_when to "now".  Because of (a) we know that the mtime
will be stale by at most one day.  If we don't dirty the inode's pages
for the next 24 hours, at that point the timestamps will be written
out.  Hence, in the worst case the timestamps might be stale on disk by
a maximum of two days.  (There's a rough sketch of this scheme below.)

Yes, we're playing loosey-goosey with exactly when the dirtied inodes
will get written out to disk, but the whole point of lazytime is to
relax when timestamps get updated on disk in exchange for better
performance.  This just relaxes the timing a bit more.
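To make (a) and (b) a bit more concrete, here is a rough, untested
sketch of what they might look like in fs/fs-writeback.c.  It assumes
the dirtied_time_when field proposed above (which doesn't exist yet),
and the helper names and the one-day cutoff are just placeholders:

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/jiffies.h>
#include <linux/backing-dev.h>
#include <linux/timekeeping.h>	/* get_seconds() */

#define DIRTYTIME_EXPIRE_SECS	(24 * 60 * 60)	/* one day */

/*
 * (a) While walking b_io: if the timestamps have been dirty for more
 * than a day, or the mtime itself is more than a day old, promote
 * I_DIRTY_TIME to I_DIRTY_SYNC so that this writeback pass writes the
 * inode (and hence the timestamps) out.  Caller holds inode->i_lock.
 */
static void maybe_expire_dirty_time(struct inode *inode)
{
	bool stale;

	if (!(inode->i_state & I_DIRTY_TIME))
		return;

	stale = time_before(inode->dirtied_time_when +
			    DIRTYTIME_EXPIRE_SECS * HZ, jiffies) ||
		get_seconds() - inode->i_mtime.tv_sec > DIRTYTIME_EXPIRE_SECS;
	if (stale) {
		inode->i_state &= ~I_DIRTY_TIME;
		inode->i_state |= I_DIRTY_SYNC;
	}
}

/*
 * (b) When requeue_inode() parks an inode on b_dirty_time, stamp it
 * with "now"; together with (a) this bounds how stale the on-disk
 * timestamps can get to roughly two days.  Caller holds wb->list_lock.
 */
static void requeue_on_dirty_time(struct inode *inode,
				  struct bdi_writeback *wb)
{
	inode->dirtied_time_when = jiffies;
	list_move(&inode->i_wb_list, &wb->b_dirty_time);
}

A real patch would of course have to wire these into the existing
queue_io()/requeue_inode() paths and get the locking right, but the
point is that neither hunk requires keeping b_dirty_time sorted.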
But exactly when the timestamps get written back is not something I
view as particularly critical, so even if we don't fix both of these
issues in 4.0, I don't think it's the end of the world, and I don't
think we need to yank the support for the lazytime mount option in
ext4.

Jan, what do you think?

						- Ted