On Thu 28-05-09 14:53:08, Jens Axboe wrote:
> On Thu, May 28 2009, Jan Kara wrote:
> > > >   Looking into this, it seems a bit complex with all that on_stack, sync /
> > > > nosync thing. Wouldn't a simple refcounting scheme be clearer? Alloc /
> > > > init_work_on_stack takes a reference on the bdi_work, queue_work takes
> > > > another reference that is later passed to the flusher thread. We drop the
> > > > reference when we leave bdi_start_writeback() and when the flusher thread
> > > > is done with the work. When the refcount hits zero, the work struct is
> > > > freed (when the work is on stack, we just never drop the last reference)...
> > >
> > > It wouldn't change the complexity of the stack vs non-stack case at all,
> > > since you have to do the same checks for when it's safe to proceed. And
> > > having the single bit there with the hashed bit waitqueues makes that
> > > part easier.
> >   I think it would be simpler. Look:
> >
> > static void bdi_work_free(struct rcu_head *head)
> > {
> > 	struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);
> >
> > 	kfree(work);
> > }
> >
> > static void bdi_put_work(struct bdi_work *work)
> > {
> > 	if (atomic_dec_and_test(&work->count))
> > 		call_rcu(&work->rcu_head, bdi_work_free);
> > }
> >
> > static void wb_work_complete(struct bdi_work *work)
> > {
> > 	bdi_work_clear(work);
> > 	bdi_put_work(work);
> > }
> >
> > void bdi_start_writeback(...)
> > {
> > 	...
> > 	if (must_wait || work == &work_stack)
> > 		bdi_wait_on_work_clear(work);
> > 	if (work != &work_stack)
> > 		bdi_put_work(work);
> > }
> >
> > IMO much easier to read...
>
> And it doesn't work, since you cannot exit after clearing the on-stack work
> before RCU has quiesced. The bdi_work could still be visible to other
> threads under rcu_read_lock(); just like you defer the kfree(), you have
> to defer the bdi_work_clear() for on-stack work.
  Right. My fault. Sorry for the noise.

								Honza
--
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR
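
A minimal sketch of the scheme Jens is describing, for reference: both the
kfree() of an allocated work and the clear of an on-stack work are done from
the RCU callback, so neither can happen before the grace period has elapsed.
This is only an illustration of the idea; bdi_work_on_stack() and the exact
split between the two paths are assumptions here, not copied from the final
mainline code.

static void bdi_work_free(struct rcu_head *head)
{
	struct bdi_work *work = container_of(head, struct bdi_work, rcu_head);

	if (!bdi_work_on_stack(work))
		kfree(work);		/* allocated work: really free it */
	else
		bdi_work_clear(work);	/* on-stack work: only now wake the submitter */
}

static void wb_work_complete(struct bdi_work *work)
{
	/*
	 * For allocated work the done bit can be cleared right away, since
	 * the structure stays valid until the RCU-deferred kfree().  For
	 * on-stack work even the clear has to wait for the grace period:
	 * the wakeup lets the submitter return and the stack frame vanish
	 * while another thread may still hold the work under rcu_read_lock().
	 */
	if (!bdi_work_on_stack(work))
		bdi_work_clear(work);
	call_rcu(&work->rcu_head, bdi_work_free);
}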