Re: Alignment: XFS + LVM2


Hi Mike,

Thanks a lot for your answer.

> Hi all,
>
> I am trying to setup a storage pool with correct disk alignment and I hope
> somebody can help me to understand some unclear parts to me when
> configuring XFS over LVM2.
>
> Actually we have few storage pools with the following settings each:
>
> - LSI Controller with 3xRAID6
> - Each RAID6 is configured with 10 data disks + 2 for double-parity.
> - Each disk has a capacity of 4TB, 512e and physical sector size of 4K.
> - 3x(10+2) configuration was considered in order to gain best performance
> and data safety (less disks per RAID less probability of data corruption)

What is the chunk size used for these RAID6 devices?
Say it is 256K, you have 10 data devices, so the full stripe would be
2560K.
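Spelled out, the full-stripe arithmetic from the example above is just:

```shell
# Full stripe = RAID chunk size * number of data disks
# (example values from the paragraph above).
chunk_kb=256
data_disks=10
echo $((chunk_kb * data_disks))   # full stripe in KB -> 2560
```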

The chunk size is currently 256KB. (In the near future we will try 1MB, since we mostly manage large files, but for now we want to keep the current 256KB configuration.)

Which version of lvm2 and kernel are you using?  Newer versions support
a striped LV stripesize that is not a power-of-2.

The current LVM2 version is lvm2-2.02.100-8.el6.x86_64.

> From the O.S. side we see:
>
> [root@stgpool01 ~]# fdisk -l /dev/sda /dev/sdb /dev/sdc
>
> Disk /dev/sda: 40000.0 GB, 39999997214720 bytes
> 255 heads, 63 sectors/track, 4863055 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/sdb: 40000.0 GB, 39999997214720 bytes
> 255 heads, 63 sectors/track, 4863055 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/sdc: 40000.0 GB, 39999997214720 bytes
> 255 heads, 63 sectors/track, 4863055 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disk identifier: 0x00000000
>
> The idea is to aggregate the above devices and show only 1 storage space.
> We did as follows:
>
> vgcreate dcvg_a /dev/sda /dev/sdb /dev/sdc
> lvcreate -i 3 -I 4096 -n dcpool -l 100%FREE -v dcvg_a

I'd imagine you'd want the stripesize of this striped LV to match the
underlying RAID6 stripesize no?  So 2560K, e.g. -i 3 -I 2560

That makes for a very large full stripe though...

Hence, for a RAID6 with a 256KB chunk size, "I" should be 2560. Does that mean the "I" parameter is chunk_size * number_of_data_disks? That is, if I have 16 data disks in a RAID6 and a 1MB chunk size, what should the "I" value be?
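If that relationship holds (an assumption on my part; the values below are just the hypothetical 16-disk case from my question), the arithmetic would be:

```shell
# Assumption: striped-LV stripesize (-I) = RAID chunk size * number of data disks.
chunk_kb=1024        # hypothetical 1MB chunk size
data_disks=16        # hypothetical 16 data disks
echo $((chunk_kb * data_disks))   # stripesize in KB for lvcreate -I -> 16384
```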

On the other hand, yes, 2560 is a large full stripe, but we mostly manage large files (hundreds of MBs and a few GBs), so I guess this is OK. Is it possible to determine the minimum recommended file size for a configuration like this? I ask because we also have a few storage pools (less than 3% of the total) with a small-file profile, and I would like to fit the disk configuration to their workload type.

> Hence, stripe of the 3 RAID6 in a LV.
>
> And here is my first question: How can I check if the storage and the LV
> are correctly aligned?
>
> On the other hand, I have formatted XFS as follows:
>
> mkfs.xfs -d su=256k,sw=10 -l size=128m,lazy-count=1 /dev/dcvg_a/dcpool
>
> So my second question is, are the above 'su' and 'sw' parameters correct on
> the current LV configuration? If not, which values should I have and why?
> AFAIK su is the stripe size configured in the controller side, but in this
> case we have a LV. Also, sw is the number of data disks in a RAID, but
> again, we have a LV with 3 stripes, and I am not sure if the number of data
> disks should be 30 instead.

Newer versions of mkfs.xfs _should_ pick up the hints exposed (as
minimum_io_size and optimal_io_size) by the striped LV.

But if not you definitely don't want to be trying to pierce through the
striped LV config to establish settings of the underlying RAID6.  Each
layer in the stack should respect the layer beneath it.  So, if the
striped LV is configured how you'd like, you should only concern
yourself with the limits that have been established for the topmost
striped LV that you're layering XFS on.
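As a sketch (the device path is taken from the lvcreate example earlier in the thread), the exposed hints can be inspected directly, and the expected optimal_io_size worked out from the layout:

```shell
# Inspect the I/O hints the striped LV exposes (run on the actual host):
#   lsblk -t /dev/dcvg_a/dcpool                        # MIN-IO / OPT-IO columns
#   blockdev --getiomin --getioopt /dev/dcvg_a/dcpool
#
# For a 3-way striped LV whose stripesize matches the RAID6 full stripe
# (256KB chunk * 10 data disks), the expected optimal_io_size in bytes is:
chunk=$((256 * 1024))   # RAID6 chunk size in bytes
data_disks=10
lv_stripes=3
echo $((chunk * data_disks * lv_stripes))   # -> 7864320
```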

The current XFS package is xfsprogs-3.1.1-14.el6.x86_64, which ships with Scientific Linux 6. How, then, should I set the XFS 'su' and 'sw' parameters from the LVM2 configuration to ensure disk alignment and get the best performance?
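For reference, if I understand the above correctly, the invocation at the LV level would look something like this (a hypothetical sketch, not something from this thread; it assumes the LV is recreated with -I 2560 as suggested):

```shell
# Hypothetical: align mkfs.xfs to the topmost striped LV rather than the
# underlying RAID6 -- su = LV stripesize, sw = number of LV stripes --
# assuming the LV was created with: lvcreate -i 3 -I 2560 ...
#   mkfs.xfs -d su=2560k,sw=3 -l size=128m,lazy-count=1 /dev/dcvg_a/dcpool
# Resulting full stripe width in KB:
echo $((2560 * 3))   # -> 7680
```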

Once again, thanks a lot for your help,
--
Marc Caubet Serrabou
PIC (Port d'Informació Científica)
Campus UAB, Edificio D
E-08193 Bellaterra, Barcelona
Tel: +34 93 581 33 22
Fax: +34 93 581 41 10
http://www.pic.es
Avis - Aviso - Legal Notice: http://www.ifae.es/legal.html
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
