Re: Strange performance drop and low oss performance

Thanks for your valuable answer!

Is the write cache specific to Ceph? Could you please point me to some
documentation about it? Thanks!

Do you have any idea about the slow OSS (object gateway) speed? Is it
normal for the write performance of the object gateway to be lower than
that of the underlying RADOS cluster? Thanks in advance!
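
For reference, here is roughly how I am measuring the object gateway side.
This is only a sketch of my setup: the user, endpoint, and bucket names
below are placeholders, not my real configuration.

  # create a test S3 user (the access/secret keys are printed on creation)
  radosgw-admin user create --uid=benchuser --display-name="bench user"

  # generate a 1GB test object and push it through the gateway with s3cmd
  dd if=/dev/zero of=testobj bs=1M count=1024
  s3cmd --host=<rgw-endpoint> mb s3://testbucket
  s3cmd --host=<rgw-endpoint> put testobj s3://testbucket/testobj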

On Wed, Feb 5, 2020, 10:10 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> On Wed, 5 Feb 2020 at 11:14, quexian da <daquexian566@xxxxxxxxx> wrote:
>
>> Hello,
>>
>> I'm a beginner with Ceph. I set up three Ceph clusters on Google Cloud.
>> Cluster1 has three nodes and each node has three disks. Cluster2 has three
>> nodes and each node has two disks. Cluster3 has five nodes and each node
>> has five disks.
>> All disks are HDDs. The raw disk speed reported by `dd if=/dev/zero
>> of=here bs=1G count=1 oflag=direct` is 117MB/s.
>>
>
>
>> The write throughput (measured with `rados bench -p scbench 1000 write`)
>> before and after the drop:
>>
>> cluster1: 297MB/s (before)   94.5MB/s (after)
>> cluster2: 304MB/s (before)   67.4MB/s (after)
>> cluster3: 494MB/s (before)  267.6MB/s (after)
>>
>> It looks like the throughput before the drop is about nodes_num * 100MB/s,
>> and the throughput after the drop is about osds_num * 10MB/s. I have no
>> idea why there is such a drop, or why the throughput before the drop
>> scales linearly with nodes_num.
>>
>
> You are probably seeing write caching up to the point where the buffer RAM
> is exhausted; after that you drop down to more "real" disk speeds, minus
> the overhead from Ceph. This is why more hosts seem to give more
> performance during the warmup. (Of course this applies to physical boxes
> too: whatever write caches are in use are multiplied by the number of
> hosts that use them.)
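>
> You can see the same effect on a single disk, outside of Ceph entirely;
> a minimal sketch (the file name is just an example):
>
>   # buffered write: dd returns once the data is in the page cache,
>   # so this mostly measures RAM, not the disk
>   dd if=/dev/zero of=testfile bs=1G count=1
>
>   # force an fdatasync before dd exits, so the reported rate is
>   # closer to what the disk can actually sustain
>   dd if=/dev/zero of=testfile bs=1G count=1 conv=fdatasync
>
>   # watch dirty pages being flushed while a benchmark is running
>   watch -n1 'grep -e Dirty -e Writeback /proc/meminfo'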
>
> Do keep in mind that Ceph is not aimed at winning single-threaded write
> competitions; it is built to let many hosts serve many clients and to
> scale that out to large numbers.
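>
> You can get a rough feel for that with rados bench itself by raising the
> number of concurrent in-flight operations (the -t option, default 16), or
> by running several bench instances from different client machines. For
> example (pool name as in your test):
>
>   # 60-second write test with 64 concurrent 4MB objects, keeping the data
>   rados bench -p scbench 60 write -t 64 --no-cleanup
>
>   # read the same objects back sequentially with the same concurrency
>   rados bench -p scbench 60 seq -t 64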
>
> --
> May the most significant bit of your life be positive.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


