On Fri, 2009-08-07 at 17:21 -0400, Mike Snitzer wrote:
> On Fri, Aug 07 2009 at 5:07pm -0400,
> Akshay Lal <alal@xxxxxxxxxxxxxx> wrote:
>
> > Mike Snitzer wrote:
> >> On Fri, Aug 07 2009 at 4:25pm -0400,
> >> Akshay Lal <alal@xxxxxxxxxxxxxx> wrote:
> >>
> >>> I'm having a few issues with path priorities. It seems that the
> >>> choice of path to use during I/O is independent of the user-defined
> >>> priorities for each path.
> >>>
> >>> I am setting the priorities by writing a script that is used by
> >>> prio_callout. This seems to work when I execute multipath -ll,
> >>> since all the specified priorities show up correctly. (The
> >>> path_grouping_policy being used is failover.)
> >>
> >> ...
> >>
> >>> Is there something I'm doing wrong? I would like to be able to
> >>> define the priorities per device, and ensure that data only
> >>> traverses the lower-priority path when
> >>> a) a failure of the first path (the path with the higher priority)
> >>>    occurs
> >>> b) no other path with a higher priority exists
> >>
> >> Do things behave as you'd like if you change path_grouping_policy to
> >> 'group_by_prio'?
> >>
> >> Mike
> >
> > Mike:
> >
> > It seems that if I set the path_grouping_policy to "group_by_prio",
> > it behaves like a multibus configuration. What I would like is,
> > within a single multipath group (say mpath1), to specify a primary
> > path and an alternate/failover path. If I can make this configurable
> > via a userland tool, that'd be great. In this vein, I had considered
> > priorities in the hope that if I set the priority of a certain path
> > within a group, the path with the highest priority will always be
> > chosen and the other path (with the lower priority) will only come
> > into play when the primary goes down.
> >
> > Below is the output of multipath -ll and the conf file when setting
> > the path_grouping_policy to group_by_prio.
> >
> > multipath -ll:
> > --------------
> > mpath2 (244534e3833623961) dm-1 DSNET,Dispersed Store
> > [size=47G][features=1 queue_if_no_path][hwhandler=0][rw]
> > \_ round-robin 0 [prio=15][enabled]
> >  \_ 85:0:0:0 sdc 8:32 [active][ready]
> >  \_ 87:0:0:0 sde 8:64 [active][ready]
> > mpath1 (244534e3266616134) dm-0 DSNET,Dispersed Store
> > [size=47G][features=1 queue_if_no_path][hwhandler=0][rw]
> > \_ round-robin 0 [prio=15][active]
> >  \_ 84:0:0:0 sdb 8:16 [active][ready]
> >  \_ 86:0:0:0 sdd 8:48 [active][ready]
> >
> > /etc/multipath.conf:
> > --------------------
> > defaults {
> >         udev_dir                /dev
> >         polling_interval        1
> >         selector                "round-robin 0"
> >         path_grouping_policy    group_by_prio
> >         getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
> >         prio_callout            "/bin/bash /root/MultipathScripts/mpath_prio_alt %n"
> >         path_checker            tur
> >         rr_min_io               128
> >         max_fds                 8192
> >         rr_weight               priorities
> >         failback                immediate
> >         no_path_retry           queue
> >         user_friendly_names     yes
> > }
>
> Please don't top-post.
>
> I'm pretty sure John meant to say "group_by_prio" rather than "failover"
> in his initial reply to this thread. John originally got this insight
> (dummy device section et al.; applies to RHEL 5.3) back in April:
>
> https://www.redhat.com/archives/dm-devel/2009-April/msg00157.html
>
> Which multipath/distro are you using?
>
> Mike

<snip>
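[Editor's note: the actual /root/MultipathScripts/mpath_prio_alt is never
shown in this thread, so the following is only a minimal sketch of the
prio_callout contract, with invented device names and priority values.
multipathd runs the callout with %n expanded to the kernel device name and
reads a single integer priority from the script's stdout; higher numbers
are preferred, and with group_by_prio, paths that report the same number
land in the same path group.

    #!/bin/bash
    # Hypothetical prio_callout script. multipathd passes the kernel
    # device name (the expanded %n) as $1 and expects one integer
    # priority on stdout. The mapping below is purely illustrative.
    case "$1" in
        sdb|sdc) echo 50 ;;  # preferred paths: highest priority
        sdd|sde) echo 10 ;;  # alternate paths: used only on failover
        *)       echo 1  ;;  # unrecognized paths: lowest priority
    esac

Note that the multipath -ll output quoted above shows both paths of each
map in a single group, which is consistent with the posted callout
returning equal values for them and would explain the multibus-like
behavior being reported.]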
Actually, we are using failover in our environment. Perhaps I have missed
something, but it is working well for us as far as I can tell.

We have a single path unless it fails, in which case we go to the next
path in priority order. We are handling load balancing across paths in a
different way, as we found the performance of multibus was less than we
could achieve otherwise. Then again, this is not an area of expertise for
me. Thanks - John

--
John A. Sullivan III
Open Source Development Corporation
+1 207-985-7880
jsullivan@xxxxxxxxxxxxxxxxxxx

http://www.spiritualoutreach.com
Making Christianity intelligible to secular society

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
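[Editor's note: as a hedged illustration of where Mike's group_by_prio
suggestion leads — this is not output taken from the thread. If the
callout returns different priorities for a map's two paths, group_by_prio
should split them into two path groups, so multipath -ll would look
roughly like this, with I/O staying on the active group and the enabled
group used only after a failure:

    mpath1 (244534e3266616134) dm-0 DSNET,Dispersed Store
    [size=47G][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=50][active]
     \_ 84:0:0:0 sdb 8:16 [active][ready]
    \_ round-robin 0 [prio=10][enabled]
     \_ 86:0:0:0 sdd 8:48 [active][ready]

With failback immediate (already in the posted conf), I/O should move back
to the higher-priority group once its path recovers.]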