Re: Weird lvm2 performance problems

On Mon, Apr 20, 2009 at 03:15:12PM +0200, Sven Eschenberg wrote:
>Hi Luca,
>
>Okay, let's assume a chunk size of C. No matter what your md looks like,
>the logical md volume consists of a series of size/C chunks. The very
>first chunk, C0, will hold the LVM header.
>If I align the extents with the chunk size and the extents even have the
>chunk size, then every extent PEx of my PV corresponds exactly to a chunk
>on one of the disks.
>Which in turn means that if I want to read PEx, I have to read some chunk
>Cy on one disk, and PEx+1 would most certainly be a chunk Cy+1 residing
>on a different physical disk.

correct
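
To make that mapping concrete, a minimal sketch (hypothetical numbers: 3
data disks, chunk size = PE size = 2M, raid5 parity rotation ignored):

    # chunk size == PE size, 3 data disks (e.g. a 4-disk raid5)
    DATA_DISKS=3
    for pe in 0 1 2 3 4 5; do
        echo "PE$pe -> data disk $((pe % DATA_DISKS)), stripe $((pe / DATA_DISKS))"
    done

Consecutive PEs land on different member disks, exactly as you describe.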

>So the question is: why would you want to align the first PE to the
>stripe size rather than the chunk size?

Because when you _write_ incomplete stripes, the raid code
would need to do a read-modify-write of the parity block.

Filesystems like ext3/4 and XFS can account for the stripe size in
their block allocator to prevent unnecessary read-modify-writes,
but if you do not stripe-align the start of the filesystem you cannot
take advantage of this.
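
For instance (a sketch, assuming a 4-disk raid5 with 3 data disks, a
2048K chunk and 4K filesystem blocks; the device name is hypothetical):

    # ext4: stride = chunk / block size = 2048K / 4K = 512 blocks;
    # stripe_width = stride * data disks = 512 * 3 = 1536 blocks
    mkfs.ext4 -E stride=512,stripe_width=1536 /dev/vg0/lv0

    # XFS: stripe unit = chunk size, stripe width = number of data disks
    mkfs.xfs -d su=2048k,sw=3 /dev/vg0/lv0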

The annoying issue is that you rarely have a (2^n)+P array, and pe_size
must be a power of 2.
So, for example, given my 3D+1P raid5 the only solution I devised was
having a power-of-2 chunk size, pe_start aligned to the stripe,
pe_size = chunk size, and I have to remember that every time I
extend an LV it has to be extended to the nearest multiple of 3 LEs.
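
In commands, that workaround would look roughly like this (device and
volume names hypothetical, chunk size assumed to be 2048K):

    # 3 data disks + parity, 2048K chunk -> full stripe = 3 * 2048K = 6M
    pvcreate --dataalignment 6M /dev/md0   # align pe_start to a full stripe
    vgcreate -s 2M vg0 /dev/md0            # pe_size = chunk size (a power of 2)
    lvcreate -l 3000 -n lv0 vg0            # 3000 LEs, a multiple of 3
    lvextend -l +300 vg0/lv0               # grow only in multiples of 3 LEs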

Regards,
L.

>Regards
>
>-Sven
>
>On Mon, April 20, 2009 07:39, Luca Berra wrote:
>>On Sun, Apr 19, 2009 at 05:16:21PM +0200, Sven Eschenberg wrote:
>>>Unfortunately I don't have the box at hand for 2 days, but I asked md to
>>>use a chunk size of 2048K, and /proc/mdstat reported 2048K the last time
>>>I checked.
>>>The LVM had a physical extent size of 2M, and with the --dataalignment
>>>option set to 2M, pvs reported a pe_start value of 2M as well.

>>If you have a 2M chunk size, a full stripe is 2M*(N-1), where N-1 is the
>>number of drives in your array minus the redundancy (i.e. for a 5-drive
>>raid5 the stripe size would be 8M).
>>
>>L.
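
To spell out the arithmetic quoted above, and how one might double-check
the resulting geometry (a sketch; /dev/md0 is a hypothetical name):

    # full stripe = chunk * (drives - redundancy); 5-drive raid5, 2M chunk:
    echo "$((2 * (5 - 1)))M"             # -> 8M

    cat /proc/mdstat                     # reports the md chunk size
    pvs -o pv_name,pe_start /dev/md0     # reports where the first PE starts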


--
Luca Berra -- bluca@comedia.it
        Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
