Hi Ethan,

I might not understand the problem completely, but I don't see the benefit of
changing rr_min_io. As far as I can tell from your multipath output, both
devices consist of two path groups with one path each. That means that as long
as there is no path failure, I/O will never be sent to the inactive group.

I guess the only thing you need is a script that, given a SCSI device (like
sdc), works out whether it is the preferred path and prints a number
representing its priority (the higher the number, the more preferred). Then
use that as the priority callout and group by priority, with failback set to
immediate. A rough sketch of both pieces follows below the quoted message.

Regards,
Stefan

2007/8/14, Ethan John <ethan.john@xxxxxxxxx>:
> For the record, setting rr_min_io to something extremely large (we're using
> 2 billion now, since I'm assuming it's a C integer) solves the immediate
> problem that we're having (overhead in path switching causing poor
>
> mpath45 (20002c9020020001a00151b6b46bb57b0) dm-1 company,iSCSI target
> [size=15G][features=0][hwhandler=0]
> \_ round-robin 0 [prio=1][active]
>   \_ 22:0:0:1 sdc 8:32 [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>   \_ 23:0:0:1 sde 8:64 [active][ready]
>
> mpath44 (20002c9020020001200151b6b46bb57ae) dm-0 company,iSCSI target
> [size=15G][features=0][hwhandler=0]
> \_ round-robin 0 [prio=1][enabled]
>   \_ 22:0:0:0 sdb 8:16 [active][ready]
> \_ round-robin 0 [prio=1][enabled]
>   \_ 23:0:0:0 sdd 8:48 [active][ready]
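A minimal sketch of such a callout, not a tested implementation: it assumes a
2.6-style sysfs layout, and it assumes that SCSI host 22 (the first iSCSI
session in Ethan's output) is the session to prefer. The script path
/usr/local/sbin/pref_prio is made up for the example. Higher numbers mark the
more preferred path, matching the convention of the stock mpath_prio_*
callouts:

    #!/bin/sh
    # Hypothetical priority callout. multipathd invokes it with the kernel
    # device name (e.g. "sdc") and expects a single number on stdout; with
    # group_by_prio, the group with the highest priority sum becomes the
    # active one.

    DEV="$1"
    [ -n "$DEV" ] || exit 1

    # /sys/block/sdc/device points at .../22:0:0:1; the part before the
    # first colon is the SCSI host number.
    HCTL=$(basename "$(readlink -f "/sys/block/$DEV/device")")
    HOST=${HCTL%%:*}

    case "$HOST" in
        22) echo 50 ;;   # path over the preferred iSCSI session (assumed)
        *)  echo 10 ;;   # any other path
    esac

And a matching multipath.conf stanza, assuming the vendor/product strings
shown in the output above and a multipath-tools version whose prio_callout
supports the %n device-name substitution, as the shipped mpath_prio_*
examples use:

    devices {
        device {
            vendor                "company"
            product               "iSCSI target"
            path_grouping_policy  group_by_prio
            prio_callout          "/usr/local/sbin/pref_prio %n"
            failback              immediate
        }
    }

With group_by_prio, each path lands in its own group ordered by priority,
multipathd keeps I/O on the host-22 group, and failback immediate returns I/O
there as soon as the preferred path recovers; rr_min_io then stops mattering,
since the second group only carries I/O during a failure.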