On Wednesday 18 August 2010 15:28:56 Peter Zijlstra wrote:
> On Wed, 2010-08-18 at 14:52 +0530, Nikanth Karthikesan wrote:
> > On Tuesday 17 August 2010 13:54:35 Peter Zijlstra wrote:
> > > On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
> > > > Oh, nice. Per-task limit is an elegant solution, which should help
> > > > during most of the common cases.
> > > >
> > > > But I just wonder what happens, when
> > > > 1. The dirtier is multiple co-operating processes
> > > > 2. Some app like a shell script, that repeatedly calls dd with seek
> > > > and skip? People do this for data deduplication, sparse skipping
> > > > etc..
> > > > 3. The app dies and comes back again. Like a VM that is rebooted,
> > > > and continues writing to a disk backed by a file on the host.
> > > >
> > > > Do you think, in those cases this might still be useful?
> > >
> > > Those cases do indeed defeat the current per-task-limit, however I
> > > think the solution to that is to limit the amount of writeback done
> > > by each blocked process.
> >
> > Blocked on what? Sorry, I do not understand.
>
> balance_dirty_pages(), by limiting the work done there (or actually, the
> amount of page writeback completions you wait for -- starting IO isn't
> that expensive), you can also affect the time it takes, and therefore
> influence the impact.

But this has nothing special to do with cases like a multi-threaded
dirtier, which is why I was confused. :)

Thanks
Nikanth
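[For readers following the thread: below is a minimal user-space sketch of
the idea Peter describes, not the kernel's actual balance_dirty_pages().
The point is that a task over its dirty threshold waits for at most a
bounded number of page writeback completions per throttle call, so the
time it spends blocked is bounded. All names and constants here
(throttle_dirtier, MAX_COMPLETIONS_PER_CALL, etc.) are made up for
illustration.]

#include <stdio.h>

#define DIRTY_LIMIT               1024  /* pages: dirty threshold */
#define MAX_COMPLETIONS_PER_CALL    32  /* cap on completions waited for */

static unsigned long nr_dirty;          /* simulated dirty page count */

/* Stand-in for waiting on one page writeback to complete. */
static void wait_one_writeback_completion(void)
{
        nr_dirty--;                     /* pretend a page was cleaned */
}

/* Sketch of a bounded balance_dirty_pages()-style throttle. */
static void throttle_dirtier(void)
{
        int waited = 0;

        /*
         * Starting the IO is cheap; the expensive part is waiting for
         * completions. Bounding the number waited for per call bounds
         * the delay any one blocked task sees, which is what limits
         * the impact even with many co-operating dirtiers.
         */
        while (nr_dirty > DIRTY_LIMIT &&
               waited < MAX_COMPLETIONS_PER_CALL) {
                wait_one_writeback_completion();
                waited++;
        }
}

int main(void)
{
        nr_dirty = 2048;                /* pretend we dirtied 2048 pages */
        throttle_dirtier();
        printf("dirty pages after one throttle call: %lu\n", nr_dirty);
        return 0;
}

Each call makes bounded progress; a task that keeps dirtying keeps being
throttled in bounded increments rather than blocking indefinitely.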