Re: poor read performance on rbd+LVM, LVM overload

> Ugis, please provide the output of:
>
> RBD_DEVICE=<rbd device name>
> pvs -o pe_start $RBD_DEVICE
> cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
> cat /sys/block/$RBD_DEVICE/queue/optimal_io_size
>
> The 'pvs' command will tell you where LVM aligned the start of the data
> area (which follows the LVM metadata area).  Hopefully it reflects what
> was published in sysfs for rbd's striping.

output follows:
#pvs -o pe_start /dev/rbd1p1
  1st PE
    4.00m
# cat /sys/block/rbd1/queue/minimum_io_size
4194304
# cat /sys/block/rbd1/queue/optimal_io_size
4194304
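
Since the PV here is the partition /dev/rbd1p1, the pe_start above is relative to the partition start, not to the rbd device itself. If I read it right, the GPT partition offset can be checked from sysfs (in 512-byte sectors), e.g.:

# cat /sys/block/rbd1/rbd1p1/start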

Seems correct in terms of ceph-LVM I/O parameter negotiation? What I wonder
about is the GPT header + PV metadata - they introduce some shift relative to
the start of the first ceph object. Does this mean that all following 4m LVM
data blocks are shifted by that amount, so each one spans 2 ceph objects?
If so, performance will be affected.
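
A rough way to check this (just a sketch, assuming the rbd object size equals
the optimal_io_size published in sysfs and that the PV sits directly on
/dev/rbd1p1) would be to add the partition offset and pe_start in bytes and
see whether the sum is a multiple of the object size:

# OBJ=$(cat /sys/block/rbd1/queue/optimal_io_size)                          # object size, bytes
# PART=$(( $(cat /sys/block/rbd1/rbd1p1/start) * 512 ))                     # GPT partition offset, bytes
# PE=$(pvs --noheadings --units b -o pe_start /dev/rbd1p1 | tr -dc '0-9')   # data area offset within the PV, bytes
# echo $(( (PART + PE) % OBJ ))                                             # 0 means object-aligned

A non-zero result would mean the data area is not object-aligned, so each 4m
extent straddles two rados objects and a single-extent read turns into two
object reads.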

Ugis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



