On 11/02/2015 1:46 PM, 杨万元 wrote:
Hello!
We use Ceph + OpenStack in our private cloud. Recently we upgraded our CentOS 6.5-based cluster from Ceph Emperor to Ceph Firefly.
At first we used the Red Hat EPEL yum repo to upgrade; that gave us Ceph 0.80.5. We upgraded the monitors first, then the OSDs, and finally the clients. When the upgrade was complete, we booted a VM on the cluster and used fio to test the IO performance. The IO performance was as good as before. Everything was OK!
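For reference, the tests can be reproduced with a fio job along these lines (a sketch only; the device path, block size, and queue depth below are illustrative assumptions, not our exact parameters):

    ; 4k random reads against the RBD-backed disk inside the guest
    [rbd-randread]
    ioengine=libaio
    direct=1
    rw=randread
    bs=4k
    iodepth=32
    runtime=60
    time_based
    filename=/dev/vdb

Switching rw= to randwrite, read, or write covers the other cases.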
Then we upgraded the cluster from 0.80.5 to 0.80.8. When that was complete, we rebooted the VM to load the newest librbd, and ran fio again. We found that randwrite and write were as good as before, but randread and read became worse: randread IOPS dropped from 4000-5000 to 300-400, and the latency got worse as well; read bandwidth dropped from 400MB/s to 115MB/s. When I downgraded the Ceph client from 0.80.8 to 0.80.5, the results returned to normal.
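For anyone wanting to do the same, the client-side downgrade can be done with yum along these lines; a sketch, assuming the standard CentOS package names and that the 0.80.5 packages are still available in the configured repo (exact version strings may differ):

    # downgrade only the client libraries; mons and OSDs stay at 0.80.8
    yum downgrade librbd1-0.80.5 librados2-0.80.5
    # then stop/start the VM so qemu reloads the older librbd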
So I think this is probably caused by something in librbd. I compared the 0.80.8 release notes with 0.80.5 ( http://ceph.com/docs/master/release-notes/#v0-80-8-firefly ), and the only read-related change I can find in 0.80.8 is: "librbd: cap memory utilization for read requests (Jason Dillaman)". Can anyone explain this?
FWIW we are seeing the same thing when switching librbd from 0.80.7
to 0.80.8 - there is a massive performance regression in random
reads. In our case, from ~10,000 4k read iops down to less than
1,000.
We also tested librbd 0.87.1, and found it does not have this
problem - it appears to be isolated to 0.80.8 only.
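For anyone double-checking which librbd a running guest is actually using, something along these lines works (a sketch; the qemu process name and package name are assumptions for a typical CentOS/KVM host):

    # version of the librbd package installed on the hypervisor
    rpm -q librbd1
    # confirm which librbd a running qemu process has mapped
    lsof -p $(pgrep -f qemu | head -1) | grep librbd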
Regards
Nathan