On 01/17/14 19:42, Hannes Reinecke wrote:
> @@ -1256,7 +1188,8 @@ static void pg_init_done(void *data, int errors)
>  		m->queue_io = 0;
>  
>  	m->pg_init_delay_retry = delay_retry;
> -	queue_work(kmultipathd, &m->process_queued_ios);
> +	if (!m->queue_io)
> +		dm_table_run_queue(m->ti->table);
>  
>  	/*
>  	 * Wake up any thread waiting to suspend.

Does pg_init retry still work with this change? I suspect it doesn't.
When a retry is requested in pg_init_done(), m->queue_io is still 0
and somebody has to kick pg_init.

Instead of replacing the "process_queued_ios" work completely, how about
keeping it around and just replacing dispatch_queued_ios() with
dm_table_run_queue()? (A rough, untested sketch of what I mean is
appended below.)

> @@ -1606,7 +1540,7 @@ static int multipath_ioctl(struct dm_target *ti, unsigned int cmd,
>  
>  	spin_lock_irqsave(&m->lock, flags);
>  
> -	if (!m->current_pgpath)
> +	if (!m->current_pgpath || !m->queue_io)
>  		__choose_pgpath(m, 0);
>  
>  	pgpath = m->current_pgpath;

Why is the !m->queue_io check added here?

> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index 0704c52..291491b 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1912,6 +1912,19 @@ static int dm_any_congested(void *congested_data, int bdi_bits)
>  	return r;
>  }
>  
> +void dm_table_run_queue(struct dm_table *t)
> +{
> +	struct mapped_device *md = dm_table_get_md(t);
> +	unsigned long flags;
> +
> +	if (md->queue) {
> +		spin_lock_irqsave(md->queue->queue_lock, flags);
> +		blk_run_queue_async(md->queue);
> +		spin_unlock_irqrestore(md->queue->queue_lock, flags);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(dm_table_run_queue);
> +

I think this function fits better in dm-table.c.

-- 
Jun'ichi Nomura, NEC Corporation
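
To make the suggestion above a bit more concrete, here is a rough,
completely untested sketch of what a retained process_queued_ios could
look like. The field and helper names (__choose_pgpath(),
__pg_init_all_paths(), m->pg_init_required, m->pg_init_in_progress) are
taken from the pre-patch dm-mpath.c and may need adjusting against this
series; dm_table_run_queue() is the new helper added in the hunk quoted
above.

static void process_queued_ios(struct work_struct *work)
{
	struct multipath *m =
		container_of(work, struct multipath, process_queued_ios);
	struct pgpath *pgpath;
	bool run_queue;
	unsigned long flags;

	spin_lock_irqsave(&m->lock, flags);

	if (!m->current_pgpath)
		__choose_pgpath(m, 0);

	pgpath = m->current_pgpath;

	/*
	 * A retry requested by pg_init_done() still gets kicked from
	 * here, because pg_init_done() keeps queueing this work item
	 * instead of calling dm_table_run_queue() directly.
	 */
	if (m->pg_init_required && !m->pg_init_in_progress && pgpath)
		__pg_init_all_paths(m);

	run_queue = pgpath && !m->queue_io;

	spin_unlock_irqrestore(&m->lock, flags);

	/* the only change: dispatch_queued_ios(m) becomes a queue run */
	if (run_queue)
		dm_table_run_queue(m->ti->table);
}

pg_init_done() would then keep its
queue_work(kmultipathd, &m->process_queued_ios) call as before.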