https://bugzilla.kernel.org/show_bug.cgi?id=78651

--- Comment #14 from Theodore Tso <tytso@xxxxxxx> ---

Which device (by major,minor number) were you writing to? It looks like the active devices were 253,1, 253,3, and 253,5. And did you use the same lazyinit settings for the 1 GB and the 256 MB journal?

If you did enable the jbd2_checkpoint tracepoint, I don't see any evidence that we ever needed to run a checkpoint. And the number of blocks used by each transaction is quite small, so it looks like the journal size shouldn't be making a difference.

It's possible that the lazy initialization is stealing enough bandwidth to make a difference, though I'm surprised it would cause a gradual decrease over time: the design is that it steals a roughly constant percentage of disk time to initialize the inode tables.

If you unmount the file system as soon as the backups finish, the lazy initialization may never complete in a single session, but we do mark each block group as its inode table gets initialized, so the next time you mount it, it picks up where it left off, until the inode tables are fully initialized.

If you tell mke2fs to disable the lazy init feature, mke2fs takes longer, but it initializes all of the inode tables up front.
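For anyone following along, disabling lazy init at mkfs time looks roughly like the following. This is a sketch using a file-backed image so nothing touches a real disk; the image path is arbitrary, and on a real system you would point mke2fs at the actual block device instead:

```shell
# Create a small file-backed image for demonstration (path is arbitrary)
truncate -s 64M /tmp/ext4-demo.img

# Disable both lazy inode-table init and lazy journal init via the -E
# extended options; mke2fs then zeroes the inode tables and journal up
# front, so no background initialization runs on first mount.
mke2fs -q -t ext4 -E lazy_itable_init=0,lazy_journal_init=0 /tmp/ext4-demo.img

# dumpe2fs shows per-group flags; with lazy init disabled the groups
# should not be left pending inode-table zeroing.
dumpe2fs /tmp/ext4-demo.img 2>/dev/null | head -n 40
```

If keeping lazy init, the rate at which the kernel's init thread steals disk time can also be tuned with the ext4 mount option `init_itable=n` (larger `n` makes initialization slower but less intrusive).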