Re: [PATCH 03/10] writeback: Do not congestion sleep if there are no congested BDIs or significant writeback


On Thu, Sep 09, 2010 at 09:54:36AM +0100, Mel Gorman wrote:
> On Wed, Sep 08, 2010 at 11:52:45PM +0900, Minchan Kim wrote:
> > On Wed, Sep 08, 2010 at 12:04:03PM +0100, Mel Gorman wrote:
> > > On Wed, Sep 08, 2010 at 12:25:33AM +0900, Minchan Kim wrote:
> > > > > + * @zone: A zone to consider the number of being being written back from
> > > > > + * @sync: SYNC or ASYNC IO
> > > > > + * @timeout: timeout in jiffies
> > > > > + *
> > > > > + * Waits for up to @timeout jiffies for a backing_dev (any backing_dev) to exit
> > > > > + * write congestion.  If no backing_devs are congested then the number of
> > > > > + * writeback pages in the zone are checked and compared to the inactive
> > > > > + * list. If there is no sigificant writeback or congestion, there is no point
> > > >                                                 and 
> > > > 
> > > 
> > > Why and? "or" makes sense because we avoid sleeping on either condition.
> > 
> > if (atomic_read(&nr_bdi_congested[sync]) == 0) {
> >         if (writeback < inactive / 2) {
> >                 cond_resched();
> >                 ..
> >                 goto out;
> >         }
> > }
> > 
> > To avoid sleeping, both of the above conditions must be met.
> 
> This is a terrible comment that is badly written. Is this any clearer?
> 
> /**
>  * wait_iff_congested - Conditionally wait for a backing_dev to become uncongested or a zone to complete writes
>  * @zone: A zone to consider the number of pages being written back from
>  * @sync: SYNC or ASYNC IO
>  * @timeout: timeout in jiffies
>  *
>  * In the event of a congested backing_dev (any backing_dev) or a given @zone
>  * having a large number of pages in writeback, this waits for up to @timeout
>  * jiffies for either a BDI to exit congestion or a write to complete.
>  *
>  * If there is no congestion and few pending writes, then cond_resched()
>  * is called to yield the processor if necessary but otherwise does not
>  * sleep.
>  */

Looks good.

> 
> > > 
> > > > > + * in sleeping but cond_resched() is called in case the current process has
> > > > > + * consumed its CPU quota.
> > > > > + */
> > > > > +long wait_iff_congested(struct zone *zone, int sync, long timeout)
> > > > > +{
> > > > > +	long ret;
> > > > > +	unsigned long start = jiffies;
> > > > > +	DEFINE_WAIT(wait);
> > > > > +	wait_queue_head_t *wqh = &congestion_wqh[sync];
> > > > > +
> > > > > +	/*
> > > > > +	 * If there is no congestion, check the amount of writeback. If there
> > > > > +	 * is no significant writeback and no congestion, just cond_resched
> > > > > +	 */
> > > > > +	if (atomic_read(&nr_bdi_congested[sync]) == 0) {
> > > > > +		unsigned long inactive, writeback;
> > > > > +
> > > > > +		inactive = zone_page_state(zone, NR_INACTIVE_FILE) +
> > > > > +				zone_page_state(zone, NR_INACTIVE_ANON);
> > > > > +		writeback = zone_page_state(zone, NR_WRITEBACK);
> > > > > +
> > > > > +		/*
> > > > > +		 * If less than half the inactive list is being written back,
> > > > > +		 * reclaim might as well continue
> > > > > +		 */
> > > > > +		if (writeback < inactive / 2) {
> > > > 
> > > > I am not sure this is best.
> > > > 
> > > 
> > > I'm not saying it is. The objective is to identify a situation where
> > > sleeping until the next write or congestion clears is pointless. We have
> > > already identified that we are not congested so the question is "are we
> > > writing a lot at the moment?". The assumption is that if there is a lot
> > > of writing going on, we might as well sleep until one completes rather
> > > than reclaiming more.
> > > 
> > > This is the first effort at identifying pointless sleeps. Better ones
> > > might be identified in the future but that shouldn't stop us making a
> > > semi-sensible decision now.
> > 
> > nr_bdi_congested is no problem since we have used it for a long time.
> > But you added new rule about writeback. 
> > 
> 
> Yes, I'm trying to add a new rule about throttling in the page allocator
> and from vmscan. As you can see from the results in the leader, we are
> currently sleeping more than we need to.

I can see the results about avoiding congestion_wait, but I can't find any
results for the (writeback < inactive / 2) heuristic.

> 
> > The reason I pointed this out is that you added a new rule, and I hope to
> > let others know about this change in case they have good ideas or other
> > opinions. I think that is one of the roles of a reviewer.
> > 
> 
> Of course.
> 
> > > 
> > > > 1. Without considering the various speed classes of storage, can we fix it at half of the inactive list?
> > > 
> > > We don't really have a good means of identifying speed classes of
> > > storage. Worse, we are considering on a zone-basis here, not a BDI
> > > basis. The pages being written back in the zone could be backed by
> > > anything so we cannot make decisions based on BDI speed.
> > 
> > True. That is why I have the question below.
> > As you said, we don't have enough information in vmscan,
> > so I am not sure how effective such a semi-sensible decision is.
> > 
> 
> What additional metrics would you apply than the ones I used in the
> leader mail?

The effectiveness of the (writeback < inactive / 2) heuristic.

> 
> > I think the best approach is to throttle well in page-writeback.
> 
> I do not think there is a problem as such in page writeback throttling.
> The problem is that we are going to sleep without any congestion or without
> writes in progress. We sleep for a full timeout in this case for no reason
> and this is what I'm trying to avoid.

Yes, I agree.
My only concern is the accuracy of the heuristic, as I mentioned.
Your previous version did not include this heuristic, but you added it in this
version, so I assume you have some evidence that motivated it.
Please write down the rationale and data if you have them.

-- 
Kind regards,
Minchan Kim

