> I'm sorry, but I did not understand you :)

Sorry (-: My finger touched the RETURN key too fast... Try setting a bigger value for the read-ahead cache, maybe 256 MB:

echo "262144" > /sys/block/vda/queue/read_ahead_kb

Also try the "fio" performance tool - it will show more detailed information. For example:
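As a sketch of the read-ahead change (the device name vda is just the example from above; substitute your own block device, and note the sysfs write needs root):

```shell
# Hypothetical device name - adjust to your setup.
DEV=vda

# read_ahead_kb is expressed in KiB: 256 MiB = 256 * 1024 KiB = 262144 KiB
READ_AHEAD_KB=$((256 * 1024))
echo "$READ_AHEAD_KB"

# Inspect the current value, then set the new one (uncomment, run as root):
# cat /sys/block/$DEV/queue/read_ahead_kb
# echo "$READ_AHEAD_KB" > /sys/block/$DEV/queue/read_ahead_kb
```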
[global]
ioengine=libaio
invalidate=1
ramp_time=5
#exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
iodepth=16
runtime=30
time_based
direct=1
bs=1m
filename=/dev/vda
[seq-write]
stonewall
rw=write
[seq-read]
stonewall
rw=read

Compare the fio result with a fio test against the mounted RBD volume (filename=/dev/rbdX) on your KVM physical host (not inside the VM). Try this as well:
echo 3 > /proc/sys/vm/drop_caches
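Putting the pieces together, a sketch of a full benchmark run (the job-file name vda-bench.fio is just an example, and this must run as root since it opens /dev/vda directly):

```shell
# Flush dirty pages first, then drop page cache, dentries and inodes
# so the read test starts cold:
sync
echo 3 > /proc/sys/vm/drop_caches

# Run the job file from above, saved here as vda-bench.fio:
fio vda-bench.fio

# Or run a single job section only:
fio --section=seq-read vda-bench.fio
```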
Best regards,
Danny
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com