Hi Gerrard,

So ultimately which one should we use: /dev/mapper/... or ...?
My apologies, I'm still a bit confused after reading the reply. I guess I
have to read up more about this at
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/DM_Multipath/index.html
which you gave me previously, but I haven't had a chance yet.

Thanks
Goh

On 5/27/08, Gerrard Geldenhuis <Gerrard.Geldenhuis@xxxxxxxxxxxx> wrote:
>
> Hi,
> I asked a similar question a while ago and I copy in the response below.
> I hope it helps. Thanks to Christophe Varoqui for supplying the answers.
>
> On Monday 04 February 2008 at 12:50 +0000, Gerrard Geldenhuis wrote:
> > Hi Christophe,
> >
> > I am a bit confused about the usage of
> >
> > /dev/mpath
> >
> > /dev/dm-X and
> >
> > /dev/mapper/
> >
> > I am unsure as to which device I should be using when creating LVM
> > volumes. I have asked a consultant from Red Hat, who gave the
> > following response:
>
> You can run pvcreate on whichever object you want. The important setting
> is the "filter" in /etc/lvm/lvm.conf: the filter has to accept only one
> kind of object, to avoid pvscan confusion. Here again, any naming
> convention is fine.
>
> /dev/mpath/ contains only multipath-type devmaps, whereas /dev/mapper
> contains all kinds of devmap types (e.g. linear for LVs).
>
> > We ended up having a big discussion about this on IRC yesterday, and
> > the outcome was inconclusive. However, the guy who is our Oracle/SAN
> > expert says /dev/mpath, so that is what I'd do.
>
> Also keep in mind that you can disable user_friendly_names in
> multipath.conf, which will give you /dev/{mapper,mpath}/6000111122223333
> names.
> Those are really interesting when you use clusters (like RAC), because
> the naming is consistent between hosts.
>
> > You might also find it useful to take a look at the kpartx command,
> > and use that after you've added a partition to a LUN.
> > It should see to it that the relevant /dev/mpath partition device
> > gets created without having to reboot the system.
>
> Avoid partitioning multipathed devices when possible: it removes
> considerable complexity from the software stack.
>
> > I also asked on the rhel5 mailing list, where I got the following
> > response:
> >
> > Using /dev/mapper is always how I've seen it done. /dev/mpath/* looks
> > to be just symlinks to /dev/dm-? device nodes, which are, in turn,
> > device nodes with identical major/minor numbers to /dev/mapper/*.
> >
> > Why /dev/mpath/* is even there, I'm not sure.
>
> udev rules trigger their creation.
>
> Regards,
> cvaroqui
>
> > -----Original Message-----
> > From: redhat-list-bounces@xxxxxxxxxx [mailto:redhat-list-
> > bounces@xxxxxxxxxx] On Behalf Of sunhux G
> > Sent: 26 May 2008 10:37
> > To: General Red Hat Linux discussion list
> > Subject: Hi Hertha/Gerrard/anyone, SAN disk partition device files
> > change with each reboot
> >
> > Hi Hertha/Gerrard/Anyone else,
> >
> > Thanks for the previous excellent replies to my questions.
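To make the lvm.conf "filter" advice above concrete, here is a minimal sketch of what such a filter could look like, assuming you standardize on the /dev/mapper names; the exact regular expressions are illustrative and would need adjusting to your environment:

```
# /etc/lvm/lvm.conf (sketch -- patterns are illustrative)
# Accept only /dev/mapper/mpath* maps as physical volumes and reject
# everything else, so pvscan never also sees the underlying /dev/sd*
# paths of the same LUN and gets confused by duplicate PV signatures.
filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
```

The point of the second, reject-everything pattern is exactly what Christophe describes: only one kind of object should ever match.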
> >
> > Something new has just surfaced with the NetApp SAN disk partitions
> > that are presented to our RHEL 4.6 servers.
> >
> > The current mappings on the 1st server are:
> > lrwxrwxrwx 1 root root 7 May 22 15:36 mpath0 -> ../dm-2
> > lrwxrwxrwx 1 root root 7 May 22 15:36 mpath1 -> ../dm-5
> > lrwxrwxrwx 1 root root 7 May 22 15:36 mpath2 -> ../dm-3
> > lrwxrwxrwx 1 root root 7 May 22 15:36 mpath3 -> ../dm-4
> > lrwxrwxrwx 1 root root 7 May 22 15:36 mpath4 -> ../dm-1
> > lrwxrwxrwx 1 root root 7 May 22 15:36 mpath5 -> ../dm-0
> >
> > and "multipath -ll" gives:
> >
> > mpath0 (360a98000567244396334493370345055)
> > [size=5 GB][features="1 queue_if_no_path"][hwhandler="0"]
> > \_ round-robin 0 [active]
> >  \_ 8:0:2:5 sds 65:32  [active]
> >  \_ 8:0:3:5 sdy 65:128 [active]
> > \_ round-robin 0 [enabled]
> >  \_ 8:0:1:5 sdm 8:192  [active]
> >  \_ 8:0:0:5 sdg 8:96   [active]
> >
> > mpath1 (360a9800056724439633449336c786d69)
> > [size=5 GB][features="1 queue_if_no_path"][hwhandler="0"]
> > \_ round-robin 0 [active]
> >  \_ 8:0:2:4 sdr 65:16  [active]
> >  \_ 8:0:3:4 sdx 65:112 [active]
> > \_ round-robin 0 [enabled]
> >  \_ 8:0:0:4 sdf 8:80  [active]
> >  \_ 8:0:1:4 sdl 8:176 [active]
> >
> > On another Linux server (with the cluster script
> > /etc/init.d/o2cb_start.sh started), /dev/mpath/mpath0, mpath1, ...,
> > mpathX map to completely different /dev/sd... devices.
> >
> > So on server 1 we first mounted a partition (formatted as OCFS2 using
> > ocfs2console) and created a test file on it; then on server 2 we
> > mounted mpath0 through mpathX one after another to see which of them
> > held the test file, in order to identify it.
> >
> > We then put into /etc/fstab the /dev/mpath/mpathX device and mount
> > point for each server, determined the hard way.
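Rather than identifying LUNs the hard way with a test file, note that the WWID shown in parentheses in the "multipath -ll" output above is stable across reboots and identical on both hosts, so it can be used to tell the maps apart. A minimal sketch (the `extract_wwids` name is made up here, and the parsing assumes output formatted like the listing above):

```shell
#!/bin/sh
# Sketch: pull the stable WWID out of "multipath -ll"-style output.
# The mpathN names and dm-N minors can differ between hosts and across
# reboots, but the WWID in parentheses identifies the same LUN everywhere.
extract_wwids() {
    # Match header lines such as
    #   "mpath0 (360a98000567244396334493370345055)"
    # strip the parentheses and print "name=wwid".
    awk '/^mpath[0-9]+ \(/ { gsub(/[()]/, "", $2); print $1 "=" $2 }'
}
```

On a live system you would run something like `multipath -ll | extract_wwids` on each server and match the maps up by WWID instead of by test file.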
> >
> > Alas, after we rebooted both servers, all the mappings changed:
> > on server 1, mpath0 -> ../dm-2 became mpath0 -> ../dm-4 after the
> > reboot, and on server 2, mpath0 -> ../dm-1 became mpath0 -> ../dm-3.
> >
> > Oracle told us to use /dev/mapper/mpathX - this appears to be more
> > reliable (i.e. it stays fixed to a specific partition regardless of
> > how many reboots are done).
> >
> > Can someone explain the differences between
> > /dev/mpath/mpathX, /dev/dm-X and /dev/mapper/mpathX?
> >
> > Or is this happening because I haven't yet completely installed all
> > the required NetApp software on our Red Hat servers?
> >
> > Thanks
> > U
> > --
> > redhat-list mailing list
> > unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
> > https://www.redhat.com/mailman/listinfo/redhat-list
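Following the /dev/mapper advice in the thread, a mount entry would reference the device-mapper name rather than a dm-N node. A sketch (the mount point and options below are illustrative, not taken from the thread):

```
# /etc/fstab (sketch -- mount point and options are illustrative)
# Reference /dev/mapper/mpathX, never /dev/dm-N: dm-N minor numbers are
# assigned in discovery order and can differ after every reboot, which
# is exactly the symptom described above.
/dev/mapper/mpath0   /u01/ocfs   ocfs2   _netdev,defaults   0 0
```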