On Tue, Jun 03, 2014 at 04:05:31PM +0200, Jan Kara wrote:
> So we currently flush inodes in first-dirtied, first-written-back order
> when a superblock is not specified in the writeback work. That completely
> ignores which superblock an inode belongs to, but I don't see per-sb
> fairness actually making any sense when
>  1) flushing old data (to keep the promise set by dirty_expire_centisecs)
>  2) flushing data to reduce the number of dirty pages
> And these are really the only two cases where we don't do per-sb flushing.
>
> Now when filesystems want to do something more clever (and I can see
> reasons for that, e.g. when journalling metadata, even more so when
> journalling data), I agree we need to somehow implement the above two
> types of writeback using per-sb flushing. Type 1) is actually pretty easy
> - just tell each sb to write back dirty data up to time T. Type 2) is more
> difficult because it is a more open-ended task - it seems similar to what
> shrinkers do, but that would require us to track the per-sb amount of
> dirty pages / inodes, and I'm not sure we want to add even more page
> counting statistics... Especially since often bdi == fs. Thoughts?

Honestly, I think doing per-bdi writeback has been a major mistake. As you
said, it only even matters when we have filesystems on multiple partitions
of a single device, and even then only in a simple setup: as soon as we use
LVM or btrfs, this sort of sharing stops happening anyway. I don't see much
of a benefit except that we prevent two flushing daemons from congesting a
single device in that special case of multiple filesystems on partitions of
the same device, and that could be solved in other ways.

The major benefit of the per-bdi writeback was that for the usual case of
one filesystem per device we get exactly one flusher thread per filesystem
instead of multiple competing ones, but per-sb writeback would solve that
just as well.