Re: [RFC][PATCH] Per file dirty limit throttling

On Wed, 2010-08-18 at 14:52 +0530, Nikanth Karthikesan wrote:
> On Tuesday 17 August 2010 13:54:35 Peter Zijlstra wrote:
> > On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
> > > Oh, nice.  Per-task limit is an elegant solution, which should help
> > > during most of the common cases.
> > >
> > > But I just wonder what happens, when
> > > 1. The dirtier is multiple co-operating processes
> > > 2. Some app, like a shell script that repeatedly calls dd with seek and
> > > skip? People do this for data deduplication, sparse skipping, etc.
> > > 3. The app dies and comes back again. Like a VM that is rebooted, and
> > > continues writing to a disk backed by a file on the host.
> > >
> > > Do you think, in those cases this might still be useful?
> > 
> > Those cases do indeed defeat the current per-task limit; however, I think
> > the solution to that is to limit the amount of writeback done by each
> > blocked process.
> > 
> 
> Blocked on what? Sorry, I do not understand.

balance_dirty_pages(). By limiting the work done there (or actually, the
number of page-writeback completions you wait for -- starting the IO
isn't that expensive), you bound the time each blocked task spends
there, and therefore its impact.
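To make that concrete, here is a minimal userspace sketch of the idea
(not actual mm/page-writeback.c code; every name in it -- nr_dirty,
writeback_one_page(), COMPLETIONS_PER_STALL -- is hypothetical):

/*
 * Sketch: instead of blocking a dirtier until the global dirty count
 * drops below the threshold, make each throttle episode wait for only
 * a bounded number of writeback completions.
 */
#include <stdio.h>

#define DIRTY_THRESH          1000  /* global dirty-page limit */
#define COMPLETIONS_PER_STALL   32  /* bounded work per blocked task */

static long nr_dirty;               /* global dirty-page count */

/* Stand-in for waiting on one page-writeback completion. */
static void writeback_one_page(void)
{
	if (nr_dirty > 0)
		nr_dirty--;
}

/*
 * balance_dirty_pages()-like throttle: the caller pays for a fixed
 * number of completions and then returns, so the stall per episode
 * stays bounded no matter how many tasks are dirtying pages.
 */
static void throttle_dirtier(void)
{
	int i;

	if (nr_dirty <= DIRTY_THRESH)
		return;
	for (i = 0; i < COMPLETIONS_PER_STALL; i++)
		writeback_one_page();
}

int main(void)
{
	int page;

	/* One dirtier producing 4096 pages, throttled every 64 pages. */
	for (page = 0; page < 4096; page++) {
		nr_dirty++;
		if (page % 64 == 0)
			throttle_dirtier();
	}
	printf("dirty pages remaining: %ld\n", nr_dirty);
	return 0;
}

Since the wait is per throttle episode rather than per task lifetime, it
would also cover the cases above where the dirtying is spread across many
short-lived or cooperating processes.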

