Re: [PATCH 6/9] writeback: introduce smoothed global dirty limit

On Wed, Jun 29, 2011 at 10:52:51PM +0800, Wu Fengguang wrote:
> The start of a heavyweight application (e.g. KVM) may instantly knock
> down determine_dirtyable_memory() if swap is not enabled or is full.
> global_dirty_limits() and bdi_dirty_limit() will in turn compute global/bdi
> dirty thresholds that are _much_ lower than the global/bdi dirty pages.
> 
> balance_dirty_pages() will then heavily throttle all dirtiers including
> the light ones, until the dirty pages drop below the new dirty thresholds.
> During this _deep_ dirty-exceeded state, the system may appear rather
> unresponsive to its users.
> 
> About "deep" dirty-exceeded: task_dirty_limit() assigns heavy dirtiers a
> dirty threshold 1/8 lower than the light dirtiers', so the dirty pages
> are throttled around the heavy dirtiers' threshold and stay reasonably
> below the light dirtiers' threshold. In this state, only the heavy
> dirtiers are throttled, and the dirty pages are carefully controlled to
> not exceed the light dirtiers' threshold. However, if that threshold
> itself suddenly drops below the number of dirty pages, the light
> dirtiers will get heavily throttled.
> 
> So introduce global_dirty_limit for tracking the global dirty threshold
> with the following policies:
> 
> - follow downwards slowly
> - follow up in one shot
> 
> global_dirty_limit can effectively mask out the impact of a sudden drop
> in dirtyable memory. It will be used in the next patch for two new types
> of dirty limits. Note that the new dirty limits will not avoid throttling
> the light dirtiers, but could limit their sleep time to 200ms.
> 
> Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
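
Just to check that I'm reading the intended policy right: I understand
the tracking logic to behave roughly like the sketch below. This is only
my reading of the description above, not the code in the patch, and the
">> 5" step is an arbitrary smoothing factor picked for illustration:

	/*
	 * Track the global dirty threshold: jump up immediately, decay
	 * down in small steps, and never fall below the current number
	 * of dirty pages.  (Sketch only; names and step size are
	 * illustrative.)
	 */
	static void track_dirty_limit(unsigned long thresh,
				      unsigned long dirty)
	{
		unsigned long limit = global_dirty_limit;

		if (thresh > limit) {
			/* follow up in one shot */
			limit = thresh;
		} else {
			/*
			 * Follow downwards slowly, staying above the
			 * dirty pages so that a sudden drop of the
			 * threshold cannot push the system deep into
			 * the dirty-exceeded state.
			 */
			thresh = max(thresh, dirty);
			if (limit > thresh)
				limit -= (limit - thresh) >> 5;
		}

		global_dirty_limit = limit;
	}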

...

> +static void global_update_bandwidth(unsigned long thresh,
> +				    unsigned long dirty,
> +				    unsigned long now)
> +{
> +	static DEFINE_SPINLOCK(dirty_lock);
> +	static unsigned long update_time;
> +
> +	/*
> +	 * Do a lockless check first to optimize away locking most of the time.
> +	 */
> +	if (now - update_time < MAX_PAUSE)

	if (time_before(now, update_time + MAX_PAUSE))

> +		return;
> +
> +	spin_lock(&dirty_lock);
> +	if (now - update_time >= MAX_PAUSE) {

	if (time_after_eq(now, update_time + MAX_PAUSE))
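
IOW, with the time_*() helpers from <linux/jiffies.h>, the whole helper
would then read something like this (just a sketch against the hunk
quoted above; the part trimmed from the quote is left as a placeholder):

	static void global_update_bandwidth(unsigned long thresh,
					    unsigned long dirty,
					    unsigned long now)
	{
		static DEFINE_SPINLOCK(dirty_lock);
		static unsigned long update_time;

		/*
		 * Do a lockless check first to optimize away locking
		 * most of the time.
		 */
		if (time_before(now, update_time + MAX_PAUSE))
			return;

		spin_lock(&dirty_lock);
		if (time_after_eq(now, update_time + MAX_PAUSE)) {
			/* ... update global_dirty_limit and update_time ... */
		}
		spin_unlock(&dirty_lock);
	}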

Thanks,
-Andrea