Re: EFI in CLVM

On Thu, Aug 18, 2011 at 10:13 AM, Jonathan Barber
<jonathan.barber@xxxxxxxxx> wrote:
>
> On 13 August 2011 04:24, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
> > Alan,
> > Its a FC SAN.
> > Here is the multipath -v2 -ll output, and it looks good:
> > --
> > mpath13 (360060e8004770d000000770d000003e9) dm-28 HITACHI,OPEN-V*4
> > [size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
> > \_ round-robin 0 [prio=2][active]
> >  \_ 5:0:1:7 sdt 65:48 [active][ready]
> >  \_ 6:0:1:7 sdu 65:64 [active][ready]
> > ---
> >
> > If I don't make an entire LUN a PV, I think I would then need partitions. Am
> > I right? And do you think this will reduce the speed penalty?
>
> The (possible) speed penalty with a partition + LVM is because the
> blocks in the LVM/filesystem aren't aligned with the blocks in the
> storage system. So when you write a block in the OS, the storage
> system has to write to two blocks. You can overcome this by manually
> aligning the partitions with the underlying storage.
>
> You can also just not use any partitions/LVM and write the filesystem
> directly to the block device... But I'd just stick with using LVM.
>
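For reference, one way to do the manual alignment described above is to
start the partition on a 1 MiB boundary, or to skip the partition table and
set the PV data alignment directly. This is only a sketch; the 1 MiB value
and the device name are assumptions and should be matched to the array's
actual stripe/segment size:

# parted /dev/mapper/mpath13 mklabel gpt
# parted /dev/mapper/mpath13 mkpart primary 1MiB 100%
or, without a partition table:
# pvcreate --dataalignment 1m /dev/mapper/mpath13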


Here is what I have noticed, though I should have done a few more tests.
iozone output with partitions (test size is 100 MB):
-
"Output is in Kbytes/sec"
"  Initial write "  265074.94
"        Rewrite "  909962.61
"           Read " 1872247.78
"        Re-read " 1905471.81
"   Reverse Read " 1316265.03
"    Stride read " 1448626.44
"    Random read " 1119532.25
" Mixed workload "  922532.31
"   Random write "  749795.80
--

without partitions:
"Output is in Kbytes/sec"
"  Initial write "  376417.97
"        Rewrite "  870409.73
"           Read " 1953878.50
"        Re-read " 1984553.84
"   Reverse Read " 1353943.00
"    Stride read " 1469878.76
"    Random read " 1432870.66
" Mixed workload " 1328300.78
"   Random write "  790309.01
---
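The exact iozone invocation isn't shown above; a comparable 100 MB run could
be done with something along these lines (the flags and the test path are an
assumption, since the actual command line wasn't posted):

# iozone -a -s 100m -f /mnt/test/iozone.tmp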



>
> If you want to create a LV that uses all of the space on a VG, you can use:
> # lvcreate -l 100%FREE -n $NAME $VGNAME
>
> Do you see the same problem if you create the LV without CLVMD
> running? This thread suggests it's possible to stop clvmd whilst the
> cluster is running:
> https://www.redhat.com/archives/linux-cluster/2008-November/msg00151.html
>
> If you run "lvcreate -ddddddd -vvv ..." do you see any useful messages?
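For anyone hitting the same "device mapper ioctl failed" / locking error,
a few things worth checking before rebooting (the paths and values below
assume RHEL 5/6 defaults, so treat this as a sketch):

# grep locking_type /etc/lvm/lvm.conf   # 3 = cluster-wide locking via clvmd
# service clvmd status
# lvcreate -ddddddd -vvv -l 100%FREE -n $NAME $VGNAME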


I got this locking problem resolved after rebooting all the nodes.
What I have noticed is that after adding a LUN, under /dev/mpath, instead of
the WWID I was seeing:

lrwxrwxrwx 1 root root 8 Aug 9 17:30 mpath13 -> ../dm-28

After the reboot:

lrwxrwxrwx 1 root root 7 Aug 15 17:53
360060e8004770d000000770d000003e9 -> ../dm-9

So what is going on, I am not sure. It looks like an issue with
automatic dmsetup?
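For what it's worth, whether entries under /dev/mpath get an mpathN alias or
the raw WWID is normally controlled by the user_friendly_names setting in
/etc/multipath.conf (multipathd keeps the alias-to-WWID map in its bindings
file), so that would be the first place to look. A sketch only; whether it
explains the behaviour seen here is an assumption:

defaults {
        user_friendly_names yes   # mpathN aliases instead of WWID-based names
}

# then flush the unused maps and rebuild them:
# multipath -F
# multipath -v2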

Thanks
Paras.




>
> Cheers
>
> > Thanks
> > Paras.
> >
> >
> > On Fri, Aug 12, 2011 at 8:39 PM, Alan Brown <ajb2@xxxxxxxxxxxxxx> wrote:
> >>
> >> On 12/08/2011 17:24, Paras pradhan wrote:
> >>>
> >>> Does it mean that I don't need mpath0p1? If that's the case, I don't need to
> >>> run kpartx on mpath0?
> >>
> >> You still need kpartx, but that's a bit clunky anyway. Let dm-multipath
> >> take care of all that for you.
> >>
> >> (The last time I used kpartx and friends was 2003. Dm-multipath and
> >> multipathd are much more user-friendly. All you need then is multipath -v2
> >> -ll to verify things are where they should be...)
> >>
> >>> And not having mpath0p1 will take away this device mapper ioctl failed
> >>> issue when running lvcreate?
> >>>
> >>
> >> I think that's a separate issue. What's the underlying structure? SAN?
> >> FC? iSCSI? DRBD?
> >>
> >>> I am really confused why this lock has failed, and I am also not sure if this is
> >>> related to this >2TB LUN.
> >>>
> >>
> >> It's not. Some of my LUNs are 25+ TB.
> >>
> >
> >
> >
> >>
> >> FWIW having PVs on LUN partitions introduces a small but measurable speed
> >> penalty over making the entire LUN a PV - this is mostly down to the small
> >> offset a partition table adds to the front of the LUN.
> >>
> >
> >
> >
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster@xxxxxxxxxx
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
>
>
>
> --
> Jonathan Barber <jonathan.barber@xxxxxxxxx>
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


