Any tuning of LVM storage inside a VM related to Ceph?

Hi all,
I have some fileservers with insufficient read speed.
Enabling read-ahead inside the VM improves the read speed, but it
looks like this has a drawback during LVM operations like pvmove.

For test purposes, I moved the LVM storage inside a VM from vdb to vdc1.
It takes days, because it's 3TB of data.
After enabling read-ahead (echo 4096 >
/sys/block/vdb/queue/read_ahead_kb; echo 4096 >
/sys/block/vdc/queue/read_ahead_kb) the move speed dropped noticeably!
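For what it's worth, the sysfs setting above is lost on reboot; a sketch of the same tuning with a guard (so it is a no-op where the disks don't exist) plus a persistent udev variant — the rule file name is my own choice, not anything standard:

```shell
# Raise read-ahead to 4 MiB (4096 KiB) on the virtio disks at runtime.
# The guard makes this a no-op on hosts without vdb/vdc.
for d in vdb vdc; do
    f=/sys/block/$d/queue/read_ahead_kb
    [ -w "$f" ] && echo 4096 > "$f"
done

# Persistent variant via a udev rule (hypothetical file name):
# ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/read_ahead_kb}="4096"
# would go into e.g. /etc/udev/rules.d/99-readahead.rules

# 4096 KiB happens to match RBD's default object size:
echo $((4096 * 1024))   # bytes
```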

Are there any tunings to improve speed for LVM on RBD storage?
Perhaps, if using partitions, aligning the partition to 4MB?
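On the alignment idea: RBD stripes data into 4 MiB objects by default, so starting a partition on a 4 MiB boundary keeps partition offsets lined up with object boundaries. A minimal sketch with parted, assuming a device name of /dev/vdc as in the test above (guarded so it only runs if the device exists):

```shell
DEV=/dev/vdc

# Only repartition if the block device is actually present --
# destructive, so double-check the device name first.
if [ -b "$DEV" ]; then
    parted -s "$DEV" mklabel gpt
    parted -s "$DEV" mkpart primary 4MiB 100%
fi

# Sanity check: 4 MiB is 8192 512-byte sectors, so
# 'parted $DEV unit s print' should report a start of 8192s.
echo $((4 * 1024 * 1024 / 512))
```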

Any hints?


Udo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



