Re: [PATCH 5/9] writeback: support > 1 flusher thread per bdi

On Wed, Aug 05 2009, Jan Kara wrote:
> > +static void bdi_queue_work(struct backing_dev_info *bdi, struct bdi_work *work)
> > +{
> > +	if (work) {
> > +		work->seen = bdi->wb_mask;
> > +		BUG_ON(!work->seen);
> > +		atomic_set(&work->pending, bdi->wb_cnt);
>   I guess the idea here is that every writeback thread has to acknowledge
> the work. But what if some thread decides to die after the work is queued
> but before it manages to acknowledge it? We would end up waiting
> indefinitely...

The writeback thread re-checks for work that raced in when it exits, so it
should be fine. Additionally, only the default thread will exit, and that
one will always have a valid count and mask (since we auto-fork it again,
if needed).
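
To make that concrete, the exit path boils down to something like the
sketch below. This is a simplified illustration, not the literal hunk from
the patch; the helper name and the wb->list field are paraphrased, while
bdi->wb_lock and bdi->work_list are the ones from the quoted code.

static int wb_may_exit(struct bdi_writeback *wb)
{
	struct backing_dev_info *bdi = wb->bdi;
	int may_exit = 1;

	/*
	 * Re-check the work list under wb_lock before dropping out, so
	 * work queued by bdi_queue_work() is either handled by this
	 * thread or the thread stays registered and sees it later.
	 */
	spin_lock(&bdi->wb_lock);
	if (!list_empty(&bdi->work_list))
		may_exit = 0;		/* work raced in, keep running */
	else
		list_del_rcu(&wb->list);
	spin_unlock(&bdi->wb_lock);

	return may_exit;
}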

> 
> > +		BUG_ON(!bdi->wb_cnt);
> > +
> > +		/*
> > +		 * Make sure stores are seen before it appears on the list
> > +		 */
> > +		smp_mb();
> > +
> > +		spin_lock(&bdi->wb_lock);
> > +		list_add_tail_rcu(&work->list, &bdi->work_list);
> > +		spin_unlock(&bdi->wb_lock);
> > +	}
> > +
> >  	/*
> > -	 * This only happens the first time someone kicks this bdi, so put
> > -	 * it out-of-line.
> > +	 * If the default thread isn't there, make sure we add it. When
> > +	 * it gets created and wakes up, we'll run this work.
> >  	 */
> > -	if (unlikely(!bdi->wb.task))
> > +	if (unlikely(list_empty_careful(&bdi->wb_list)))
> >  		wake_up_process(default_backing_dev_info.wb.task);
> > +	else
> > +		bdi_sched_work(bdi, work);
> > +}
> > +
> > +/*
> > + * Used for on-stack allocated work items. The caller needs to wait until
> > + * the wb threads have acked the work before it's safe to continue.
> > + */
> > +static void bdi_wait_on_work_clear(struct bdi_work *work)
> > +{
> > +	wait_on_bit(&work->state, 0, bdi_sched_wait, TASK_UNINTERRUPTIBLE);
> > +}
>   I still feel the rules for releasing / cleaning up work are too
> complicated.
>   1) I believe we can bear one more "int" for flags in the struct bdi_work
> so that you don't have to hide them in sb_data.

Sure, but I think there's little reason to do that, since it's only used
internally. Let me put it another way: why add an extra int if we can
avoid it?
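
For reference, "hiding" them in sb_data is just the usual low-bit packing
on an aligned pointer, roughly along these lines (the names below are
illustrative, not the exact helpers from the patch):

#define WS_ONSTACK	1UL	/* illustrative flag bit */

static inline struct super_block *bdi_work_sb(struct bdi_work *work)
{
	/* a super_block pointer is at least word aligned, so bit 0 is free */
	return (struct super_block *)(work->sb_data & ~WS_ONSTACK);
}

static inline int bdi_work_on_stack(struct bdi_work *work)
{
	return work->sb_data & WS_ONSTACK;
}

static inline void bdi_work_init(struct bdi_work *work,
				 struct super_block *sb, int on_stack)
{
	work->sb_data = (unsigned long)sb | (on_stack ? WS_ONSTACK : 0);
}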

>   2) I'd introduce a flag with the meaning: free the work when you are
> done. Obviously this flag makes sense only with a dynamically allocated work
> structure. There would be no "on stack" flag.
>   3) I'd create a function:
> bdi_wait_work_submitted()
>   which you'd have to call whenever you didn't set the flag and want to
> free the work (either explicitly, or via returning from a function which
> has the structure on stack).
>   It would do:
> bdi_wait_on_work_clear(work);
> call_rcu(&work->rcu_head, bdi_work_free);
> 
>   wb_work_complete() would then, depending on the flag, either completely
> do away with the work struct or just do bdi_work_clear().
> 
>   IMO that would make the code easier to review and also less prone to
> errors (currently you have to think twice about when you have to wait for
> the RCU period, call bdi_work_free(), etc.).
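
If I read it right, the scheme you describe boils down to roughly the
following sketch (the flag name and the flags field are invented here;
bdi_wait_on_work_clear(), bdi_work_free() and bdi_work_clear() are the
existing helpers):

#define WS_AUTOFREE	0x01	/* "free the work when you are done" */

static void bdi_wait_work_submitted(struct bdi_work *work)
{
	bdi_wait_on_work_clear(work);
	call_rcu(&work->rcu_head, bdi_work_free);
}

static void wb_work_complete(struct bdi_work *work)
{
	if (work->flags & WS_AUTOFREE)
		call_rcu(&work->rcu_head, bdi_work_free);
	else
		bdi_work_clear(work);	/* waiter does the final free */
}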

Didn't we go over all that last time, too?

-- 
Jens Axboe

