Re: EFI in CLVM

On 18 August 2011 18:41, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
> On Thu, Aug 18, 2011 at 10:13 AM, Jonathan Barber
> <jonathan.barber@xxxxxxxxx> wrote:
>>
>> On 13 August 2011 04:24, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>> > Alan,
>> > Its a FC SAN.

[snip]

>> > If I don't make an entire LUN a PV, I think I would then need partitions. Am
>> > I right? And do you think this will reduce the speed penalty?

[snip]

>> You can also just not use any partitions/LVM and write the filesystem
>> directly to the block device... But I'd just stick with using LVM.
>>
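For reference, skipping partitions and LVM entirely would look something
like the sketch below (the device name, cluster name, and filesystem
name are assumptions for illustration, not taken from this thread):

  # Make a GFS2 filesystem directly on the multipath device
  # (hypothetical names throughout):
  mkfs.gfs2 -p lock_dlm -t mycluster:mygfs -j 3 /dev/mapper/mpath13
  mount -t gfs2 /dev/mapper/mpath13 /mnt/gfs
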
>
>
> Here is what I have noticed, though I should have done a few more tests.
> iozone output with partitions (test size is 100 MB):
> -
> "Output is in Kbytes/sec"
> "  Initial write "  265074.94
> "        Rewrite "  909962.61
> "           Read " 1872247.78
> "        Re-read " 1905471.81
> "   Reverse Read " 1316265.03
> "    Stride read " 1448626.44
> "    Random read " 1119532.25
> " Mixed workload "  922532.31
> "   Random write "  749795.80
> --
>
> without partitions:
> "Output is in Kbytes/sec"
> "  Initial write "  376417.97
> "        Rewrite "  870409.73
> "           Read " 1953878.50
> "        Re-read " 1984553.84
> "   Reverse Read " 1353943.00
> "    Stride read " 1469878.76
> "    Random read " 1432870.66
> " Mixed workload " 1328300.78
> "   Random write "  790309.01
> ---

I'm not very familiar with iozone, but if you're only reading/writing
100 MB, then all you're probably measuring is the speed of the Linux
buffer cache. You should increase the amount of data to more than the
RAM available to the system. Also, you should repeat these runs
multiple times and, at a minimum, take the average (and calculate the
standard deviation) of each metric to make sure you aren't seeing
unusually good/bad performance. You can then compare the results with
a paired t-test to see whether the difference is statistically
significant.
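For example, something along these lines (file size, record size, and
paths are assumptions; size the test file to at least twice the
machine's RAM, e.g. 8g on a 4 GB box):

  for i in 1 2 3 4 5; do
      # -i 0 = write/rewrite, -i 1 = read/re-read;
      # -s = file size, -r = record size, -f = test file
      iozone -i 0 -i 1 -s 8g -r 64k -f /mnt/test/iozone.tmp > run_$i.txt
  done
  # Average each metric across the runs, then compare the two
  # configurations (partitions vs. none) with a paired t-test.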

[snip]

> I got this locking problem resolved after rebooting all the nodes.

That sounds like the problem encountered in the link I sent before.

> What I have noticed is that after adding a LUN, under /dev/mpath,
> instead of the WWID I was seeing:
>
> lrwxrwxrwx 1 root root 8 Aug 9 17:30 mpath13 -> ../dm-28
>
> After reboot
>
> lrwxrwxrwx 1 root root 7 Aug 15 17:53
> 360060e8004770d000000770d000003e9 -> ../dm-9

That's odd. Did you change your multipath configuration? It looks like
you've set "user_friendly_names" to "no".
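For what it's worth, the friendly mpathN names come from the
user_friendly_names option in the defaults section of
/etc/multipath.conf, along the lines of (a minimal sketch, not your
actual config):

  defaults {
      user_friendly_names yes
  }

With it set to "no" (or left unset, where the built-in default is
typically "no"), the maps are named by WWID, which matches what you're
seeing after the reboot.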

> Thanks
> Paras.
-- 
Jonathan Barber <jonathan.barber@xxxxxxxxx>

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


