On 2010-11-07T11:30:49, Christophe Varoqui <christophe.varoqui@xxxxxxxxx> wrote:
> Wouldn't it be practical to bypass mpio completely and submit your IO
> to the paths yourself instead?

Yes - and no.

Yes: I could do that and send my IO down all paths via async IO. That was
actually the first direction I looked into; however, I abandoned it after a
while (see below). And yes, it's the first thing everyone recommends ;-)

No: it would mean I'd have to query multipathd for every IO to learn which
devices are currently active and linked to the right storage. (The idea of
hooking into udev myself or scanning partitions seems a bit of a
non-starter.) Alternatively, I could try to monitor the device-mapper table
for changes, but then I would have to parse that syntax. Not to mention
that I would also have to handle partitioning, LVM mapping, etc. myself.
So it seems somewhat inefficient and inelegant.

I think handling this at the dm-multipath level is cleaner, similar to how
we handle network bonding (which, incidentally, has a broadcast mode too)
instead of requiring every application to go out and open N independent
channels itself.

Regards,
    Lars

--
Architect Storage/HA, OPS Engineering, Novell, Inc.
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
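
P.S.: For illustration, a minimal sketch of the userspace variant I
abandoned: open every path device and submit the same write down all of
them via Linux AIO (libaio). The path names below are hypothetical
placeholders; discovering the real, currently-active paths is exactly the
per-IO multipathd round-trip I want to avoid.

/*
 * Sketch only: duplicate one write down every known path via libaio.
 * Build with: gcc -std=gnu99 -o bcast bcast.c -laio
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ 4096
#define NPATHS 2

int main(void)
{
    /* Hypothetical path devices; a real tool would have to ask
     * multipathd which paths are active right now. */
    const char *paths[NPATHS] = { "/dev/sdb", "/dev/sdc" };

    io_context_t ctx = 0;
    if (io_setup(NPATHS, &ctx) < 0) {
        perror("io_setup");
        return 1;
    }

    /* O_DIRECT requires an aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, BLKSZ, BLKSZ))
        return 1;
    memset(buf, 0x42, BLKSZ);

    struct iocb cbs[NPATHS];
    struct iocb *cbp[NPATHS];
    int fds[NPATHS];

    for (int i = 0; i < NPATHS; i++) {
        fds[i] = open(paths[i], O_WRONLY | O_DIRECT);
        if (fds[i] < 0) {
            perror(paths[i]);
            return 1;
        }
        /* Same payload, same offset, one iocb per path. */
        io_prep_pwrite(&cbs[i], fds[i], buf, BLKSZ, 0);
        cbp[i] = &cbs[i];
    }

    if (io_submit(ctx, NPATHS, cbp) != NPATHS) {
        perror("io_submit");
        return 1;
    }

    /* Wait for all copies to complete; a real tool could declare
     * success after the first one and reap the rest later. */
    struct io_event ev[NPATHS];
    int done = io_getevents(ctx, NPATHS, NPATHS, ev, NULL);
    for (int i = 0; i < done; i++)
        printf("completion %d: res=%ld\n", i, (long)ev[i].res);

    for (int i = 0; i < NPATHS; i++)
        close(fds[i]);
    io_destroy(ctx);
    free(buf);
    return 0;
}

Even in this toy form, note everything that is missing: path discovery,
failure handling, re-probing as paths come and go, partition offsets, LVM
stacking. All of that dm-multipath already does.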