> > I have no idea how serious the scalability problems with this are.
> > If they are serious, different solutions can probably be found for
> > the above, but this is certainly the simplest.
>
> Atomic operations to a single per-backing device from all CPUs at
> once?  That's a pretty serious scalability issue and it will cause a
> major performance regression for XFS.

OK.  How about just accounting writeback pages?  That should be much
less of a problem, since normally writeback is started from
pdflush/kupdate in large batches without any concurrency.

Or is it possible to export the state of the device queue to mm?
E.g. could balance_dirty_pages() query the backing dev for whether
there are any outstanding write requests?

> I'd call this a showstopper right now - maybe you need to look at
> something like the ZVC code that Christoph Lameter wrote, perhaps?

That's rather a heavyweight approach for this, I think.  The only info
balance_dirty_pages() really needs is whether there are any
dirty+writeback pages bound for the backing dev or not.

It knows about the dirty pages, since it calls writeback_inodes(),
which scans the dirty pages of this backing dev looking for ones to
write out.  If, after returning from writeback_inodes(),
wbc->nr_to_write didn't decrease and wbc->pages_skipped is zero, then
we know that there are no more dirty pages for the device.  Or at
least there are no dirty pages which aren't already under writeback.

Thanks,
Miklos
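
[For illustration only: a minimal sketch of the check described above,
assuming the struct writeback_control fields (bdi, nr_to_write,
pages_skipped) and the writeback_inodes() call used by
balance_dirty_pages() in current mainline.  The helper name
bdi_may_have_dirty_pages and its write_chunk parameter are hypothetical,
not part of any posted patch.]

	#include <linux/backing-dev.h>
	#include <linux/writeback.h>

	/*
	 * Start writeback against one backing device and infer from the
	 * results whether any dirty pages remain for it.
	 *
	 * Returns 1 if there may still be dirty pages for @bdi that are
	 * not yet under writeback, 0 if there are none.
	 */
	static int bdi_may_have_dirty_pages(struct backing_dev_info *bdi,
					    long write_chunk)
	{
		struct writeback_control wbc = {
			.bdi		= bdi,
			.sync_mode	= WB_SYNC_NONE,
			.older_than_this = NULL,
			.nr_to_write	= write_chunk,
			.range_cyclic	= 1,
		};

		writeback_inodes(&wbc);

		/*
		 * Nothing was written (nr_to_write unchanged) and nothing
		 * was skipped: no dirty pages are left for this device
		 * that aren't already under writeback.
		 */
		if (wbc.nr_to_write == write_chunk && wbc.pages_skipped == 0)
			return 0;

		return 1;
	}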