Performance drops and low object storage (RGW) write performance

Hello,

I'm a beginner with Ceph. I set up a few Ceph clusters on Google Cloud.
Cluster1 has three nodes with three disks each. Cluster2 has three nodes
with two disks each. Cluster3 has five nodes with five disks each. Disk
write speed, as measured by `dd if=/dev/zero of=here bs=1G count=1
oflag=direct`, is 117 MB/s. The network is 10 Gbps.
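
For reference, the per-disk check was just the dd command above, run against
a file on each OSD disk; a longer run (only a sketch, the target path is an
example) would probably reflect sustained throughput better:

    # single 1 GiB direct write, as quoted above
    dd if=/dev/zero of=here bs=1G count=1 oflag=direct
    # longer direct-write variant (~4 GiB total, example path)
    dd if=/dev/zero of=/mnt/osd-disk/ddtest bs=64M count=64 oflag=direct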

However, I noticed two strange things:

1. The write performance of all clusters drops dramatically after a few
minutes. I created a pool named "scbench" with replicated size 1 (I know it
is not safe, but I want the highest possible write speed); the exact
commands are sketched at the end of this item. The write performance
(measured with rados bench -p scbench 1000 write) before and after the drop
is:

cluster1: 297 MB/s (before) -> 94.5 MB/s (after)
cluster2: 304 MB/s (before) -> 67.4 MB/s (after)
cluster3: 494 MB/s (before) -> 267.6 MB/s (after)

It looks like the performance before the drop is roughly nodes_num * 100
MB/s, while the performance after the drop is roughly osds_num * 10 MB/s
(cluster1 has 9 OSDs, cluster2 has 6, cluster3 has 25). I have no idea why
there is such a drop, or why the pre-drop performance scales with the
number of nodes rather than with the number of OSDs.
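
For reference, the pool setup and benchmark from point 1 were roughly as
follows (the PG count of 128 is only an example; the pool name, the size=1
setting, and the bench invocation match what I actually ran):

    # create the test pool (PG count is an example value)
    ceph osd pool create scbench 128 128
    # replicated size 1, test only; newer releases may additionally
    # require --yes-i-really-mean-it for size 1
    ceph osd pool set scbench size 1
    # run the 1000-second write benchmark
    rados bench -p scbench 1000 write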

2. The write performance of the object storage frontend (measured with
swift-bench -c 64 -s 4096000 -n 100000 -g 0 swift.conf) is much lower than
that of the storage cluster itself (measured with rados bench -p scbench
1000 write). I have set the replicated size of "default.rgw.buckets.data"
and "default.rgw.buckets.index" to 1; the commands are sketched at the end
of this item.

The object storage write speed of cluster1 is 117 MB/s before the drop and
26 MB/s after it, and the object storage write speed of cluster3 is 118
MB/s (no drop happens there).
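
For reference, the RGW pool changes mentioned in point 2 were roughly these
(pool names are the defaults created by RGW; only the size=1 change is the
point here):

    # reduce replication on the RGW data and index pools (test only)
    ceph osd pool set default.rgw.buckets.data size 1
    ceph osd pool set default.rgw.buckets.index size 1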

Is it normal for object storage write performance to be so much lower than
rados bench write performance? If not, how can I fix it?

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


