On Wed, 13 Mar 2013 22:02:16 +0100 Jan Kara <jack@xxxxxxx> wrote:

> > > ... remembering why we need to get to sb and why ext3 needs this ... So
> > > maybe a better solution would be to have a bio flag meaning that pages need
> > > bouncing? And we would set it from filesystems that need it - in case of
> > > ext3 only writeback of data from kjournald actually needs to bounce the
> > > pages. Thoughts?
> >
> > What about dirty pages that don't result in journal transactions? I think
> > ext3_sync_file() eventually calls ext3_ordered_writepage, which then calls
> > __block_write_full_page, which in turn calls submit_bh().
> So here we have two options:
> Either we let ext3 wait the same way as other filesystems when stable pages
> are required. Then only data IO from kjournald needs to be bounced (all
> other IO is properly protected by PageWriteback bit).
>
> Or we won't let ext3 wait (as it is now), keep the superblock flag that fs
> needs bouncing, and set the bio flag in __block_write_full_page() and
> kjournald based on the sb flag.
>
> I think the first option is slightly better but I don't feel strongly
> about that.

It seems Just Wrong that we're dicking around with filesystem superblocks
at this level.  It's the bounce code, for heavens sake!

What the heck's going on here and why wasn't I able to work that out from
reading the code :(

The need to stabilise these pages is driven by the characteristics of the
underlying device and driver stack, isn't it?  Things like checksumming?
What else drives this requirement?

</rant>

Because I *think* it should be sufficient to maintain this boolean in the
backing_dev.

My *guess* is that this is all here because we want to enable
stable-snapshotting on a per-fs basis rather than on a per-device basis?
If so, why?  If not, what?

btw, local variable `bdi' in must_snapshot_stable_pages() doesn't do
anything.

None of this will stop Shuge's kernel from going splat either.
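
Roughly what I mean by keeping the boolean in the backing_dev -- just a
sketch, and the capability bit and helper names below are invented for
illustration, not taken from any existing patch:

#include <linux/backing-dev.h>
#include <linux/pagemap.h>

/*
 * Sketch only.  The device/driver stack (checksumming, DIF/DIX integrity,
 * parity calculation, ...) marks its backing_dev as requiring stable
 * pages; BDI_CAP_NEEDS_STABLE is an invented name for that bit.
 */
#define BDI_CAP_NEEDS_STABLE	0x00000200

static inline bool bdi_needs_stable_pages(struct backing_dev_info *bdi)
{
	return bdi->capabilities & BDI_CAP_NEEDS_STABLE;
}

/*
 * A writer about to modify a page that may still be under IO then checks
 * the device, not the superblock.  (Uses mapping->backing_dev_info as it
 * exists today.)
 */
static void wait_for_page_stability(struct page *page)
{
	struct backing_dev_info *bdi = page->mapping->backing_dev_info;

	if (bdi_needs_stable_pages(bdi))
		wait_on_page_writeback(page);
}

That keeps the bounce code (and the filesystems) out of the business of
second-guessing the device: whoever needs stable pages says so once, at
the bdi level.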