Re: pvmove painfully slow on parity RAID

On Fri, 31 Dec 2010, Stuart D. Gathman wrote:

> On Fri, 31 Dec 2010, Spelic wrote:
> 
> > Ok never mind, I found the problem:
> > LVM probably uses O_DIRECT, right?
> > Well it's abysmally slow on MD parity RAID (I checked with dd on the bare MD
> > device just now) and I don't know why. It's not the read-modify-write,
> > because it's still slow on a second run, when nothing is read from disk
> > anymore since all the reads are already in cache.
> 
> The point of O_DIRECT is to *not* use the cache.  Although a write-through
> cache would seem to be OK, you have to make sure that ALL writes write through
> the cache, or the data on parity raid will be corrupted.
> 
> The R/M/W problem afflicts every level of parity raid in subtle ways.
> That's why I don't like it.
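
To illustrate the R/M/W point quoted above: updating one data block in a
parity stripe requires reading the old data and old parity first, because
the new parity is old_parity XOR old_data XOR new_data. A small sketch
(illustrative only, not MD's actual code; rmw_update is a made-up helper):

```python
def rmw_update(old_data: int, new_data: int, old_parity: int) -> int:
    """Read-modify-write parity update for one block:
    new parity = old parity XOR old data XOR new data."""
    return old_parity ^ old_data ^ new_data

# A 3-disk RAID-5 stripe: two data blocks plus parity (XOR of the data).
d0, d1 = 0b1010, 0b0110
parity = d0 ^ d1

# Overwrite d0 without touching d1: the old d0 and old parity must be
# read back first -- that is the R/M/W cycle (2 reads + 2 writes).
new_d0 = 0b0001
parity = rmw_update(d0, new_d0, parity)
d0 = new_d0

assert parity == d0 ^ d1  # parity is still consistent with the data
```

So a write that covers only part of a stripe can never be a pure write;
the reads are needed just to keep the parity correct.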

Plus, any write to *part* of a chunk, even with a write-through cache, still
has to write the *entire* chunk.  So if the chunk size is 64K and pvmove
writes in 32K blocks with O_DIRECT, that is two full writes of the 64K chunk
even with the write-through cache (and without the cache, two reads plus two
writes of the same chunk).
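
The arithmetic above can be sketched as a small counting helper (chunk_ios
is hypothetical, not pvmove or MD code; it just counts the I/Os the text
describes):

```python
def chunk_ios(chunk: int, io_size: int, span: int, write_through: bool):
    """Count full-chunk reads and writes needed to cover `span` bytes
    with O_DIRECT writes of `io_size` bytes on parity RAID.
    A partial-chunk write always rewrites the whole chunk; without a
    write-through cache it must also read the chunk first (R/M/W)."""
    writes = span // io_size if io_size < chunk else span // chunk
    reads = 0 if write_through or io_size >= chunk else writes
    return reads, writes

K = 1024
# The case from the mail: 64K chunks, pvmove writing 32K with O_DIRECT.
print(chunk_ios(64 * K, 32 * K, 64 * K, write_through=True))   # (0, 2)
print(chunk_ios(64 * K, 32 * K, 64 * K, write_through=False))  # (2, 2)
```

Either way, moving 64K of data costs 128K of chunk writes; matching the
I/O size to the chunk size would cut that in half.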

-- 
	      Stuart D. Gathman <stuart@bmsi.com>
    Business Management Systems Inc.  Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
