Re: poor read performance on rbd+LVM, LVM overload

> Ugis, please provide the output of:
>
> RBD_DEVICE=<rbd device name>
> pvs -o pe_start $RBD_DEVICE
> cat /sys/block/$RBD_DEVICE/queue/minimum_io_size
> cat /sys/block/$RBD_DEVICE/queue/optimal_io_size
>
> The 'pvs' command will tell you where LVM aligned the start of the data
> area (which follows the LVM metadata area).  Hopefully it reflects what
> was published in sysfs for rbd's striping.

output follows:
# pvs -o pe_start /dev/rbd1p1
  1st PE
    4.00m
# cat /sys/block/rbd1/queue/minimum_io_size
4194304
# cat /sys/block/rbd1/queue/optimal_io_size
4194304

Seems correct in terms of the Ceph-LVM I/O parameter negotiation (pe_start of
4.00m is 4 MiB = 4194304 bytes, matching both sysfs values)? What I wonder
about is the GPT header plus the PV metadata: together they shift the start of
the data area away from the beginning of the first Ceph object. Does this mean
that all following 4M LVM data blocks are shifted by that amount and each
spans 2 ceph objects?
If so, performance will be affected.
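
A rough way to check that, assuming the same names as in the output above
(/dev/rbd1 with a single GPT partition /dev/rbd1p1) and the usual sysfs layout
for partitions, is to add the partition's start offset to LVM's pe_start and
see where the data area falls relative to the 4 MiB object boundaries:

obj=$(cat /sys/block/rbd1/queue/optimal_io_size)      # ceph object size in bytes (4194304 here)
part_start=$(cat /sys/block/rbd1/rbd1p1/start)        # partition start in 512-byte sectors
pe_start=$(pvs --noheadings --nosuffix --units b -o pe_start /dev/rbd1p1)
data_start=$(( part_start * 512 + ${pe_start%.*} ))   # absolute start of the LVM data area
echo "data area starts at byte $data_start"
echo "offset within a ceph object: $(( data_start % obj )) bytes"

If the last number is non-zero, every 4M extent straddles an object boundary,
which would be exactly the situation described above.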

Ugis

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



