Re: poor read performance on rbd+LVM, LVM overload

On Wed, Oct 16 2013 at 12:16pm -0400,
Sage Weil <sage@xxxxxxxxxxx> wrote:

> Hi,
> 
> On Wed, 16 Oct 2013, Ugis wrote:
> > 
> > What could make so great difference when LVM is used and what/how to
> > tune? As write performance does not differ, DM extent lookup should
> > not be lagging, where is the trick?
> 
> My first guess is that LVM is shifting the content of the device such that 
> it no longer aligns well with the RBD striping (by default, 4MB).  The 
> non-aligned reads/writes would need to touch two objects instead of 
> one, and dd is generally doing these synchronously (i.e., lots of 
> waiting).
> 
> I'm not sure what options LVM provides for aligning things to the 
> underlying storage...

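To illustrate the alignment point above: with the default 4 MB object size, a read that starts mid-object straddles two rbd objects. The arithmetic is simple (the offsets below are illustrative, not from Ugis' setup):

```shell
#!/bin/sh
# Show how a misaligned read straddles rbd objects.
# Assumes the default 4 MB (4194304-byte) rbd object size.
object_size=4194304
offset=2097152          # read starts 2 MB into the image (misaligned)
length=4194304          # a 4 MB read

first_obj=$(( offset / object_size ))
last_obj=$(( (offset + length - 1) / object_size ))
echo "read touches objects $first_obj through $last_obj"
```

An aligned read of the same length (offset 0, 4 MB, ...) would touch a single object; the misaligned one above touches two, doubling the round trips for a synchronous reader like dd.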
LVM honors the underlying storage's device limits.  So if rbd
establishes appropriate minimum_io_size and optimal_io_size values that
reflect the striping config, LVM will pick them up -- provided
'data_alignment_detection' is enabled in lvm.conf (which it is by
default).
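For reference, the relevant knobs live in the devices section of lvm.conf; the values shown here are the defaults (a sketch, not a complete config):

```
devices {
    # Detect the data-area alignment from the device's reported I/O
    # topology (minimum_io_size / optimal_io_size); on by default.
    data_alignment_detection = 1

    # Manual alignment override in KiB; 0 means rely on detection.
    data_alignment = 0
}
```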

Ugis, please provide the output of:

RBD_DEVICE=<rbd device name>    # e.g. rbd0
pvs -o pe_start /dev/$RBD_DEVICE
cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
cat /sys/block/$RBD_DEVICE/queue/optimal_io_size

The 'pvs' command will tell you where LVM aligned the start of the data
area (which follows the LVM metadata area).  Hopefully it reflects what
was published in sysfs for rbd's striping.
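As a quick sanity check, pe_start (in bytes) should be an even multiple of optimal_io_size; if it is, the LVM data area starts on an rbd object boundary. A sketch with illustrative values (on a real system they come from `pvs -o pe_start --units b` and the sysfs files above):

```shell
#!/bin/sh
# Check whether the LVM data-area offset is aligned to rbd's optimal
# I/O size.  Both values below are illustrative, not measured.
pe_start_bytes=4194304       # e.g. pvs reported 4.00m
optimal_io_size=4194304      # default rbd object size: 4 MB

if [ $(( pe_start_bytes % optimal_io_size )) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned: reads may straddle two rbd objects"
fi
```

If the check reports misalignment, recreating the PV with `pvcreate --dataalignment` set to the rbd object size is one way to force it.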
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
