Re: Direct disk/Ceph performance

Hi,

to have a fair test you need to replicate the power-loss scenarios Ceph covers, which your test currently does not:

No volatile caches in the OS or on the disk may be relied on. Ceph has to ensure that a written object is actually persisted: even if a node of your cluster explodes at that very moment and you get a simultaneous blackout, no acknowledged data may be lost.
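As a quick illustration (a minimal sketch, not a rigorous benchmark; the path below is a made-up example and must point at a filesystem on the disk under test), compare plain buffered writes against writes forced to stable storage with fsync after every block, which is much closer to the guarantee Ceph gives:

import os, time

PATH = "/mnt/testdisk/durability_probe"   # hypothetical path on the disk under test
BLOCK = b"\0" * (4 * 1024 * 1024)         # 4 MiB per write, matching the test's block size
COUNT = 64

def run(sync_each_block):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    t0 = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
        if sync_each_block:
            os.fsync(fd)   # force this block to stable storage before the next one
    os.fsync(fd)           # flush whatever is still sitting in caches
    elapsed = time.monotonic() - t0
    os.close(fd)
    return COUNT * len(BLOCK) / elapsed / 1e6   # MB/s

print("buffered  : %7.1f MB/s" % run(False))
print("fsync each: %7.1f MB/s" % run(True))

On drives without power-loss protection the second number is usually a small fraction of the first, because every fsync has to wait for the medium itself.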

I have seen multi-GB/s SSDs (without hardware power-loss protection) degrade to single-digit MB/s because of this.

Ceph scales out well across many disks and nodes, but on smaller systems you will definitely notice the lower per-disk performance. Latency can also be an issue, e.g. when doing block-wise single-threaded writes over a blocking file API.
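To put rough numbers on the latency angle: a single-threaded writer that submits one 4 MB block at a time and measures 80 MB/s is completing about 80/4 = 20 blocks per second, i.e. roughly 1/20 s = 50 ms of end-to-end latency per block. If the write path is fully serialized, figures like yours are consistent with per-operation latency, not raw disk bandwidth, being the limit.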

Greetings,

Kai



On 1/16/22 14:31, Behzad Khoshbakhti wrote:
Hi Marc,
Thanks for your prompt response.
We have tested direct random writes to the disk (without Ceph) and it
achieves 200 MB/s. We wonder why we only get 80 MB/s from Ceph.

Your help is much appreciated.

Regards,
  Behzad

On Sun, Jan 16, 2022 at 11:56 AM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:


> Detailed (somewhat) problem description:
> Disk size: 1.2 TB
> Ceph version: Pacific
> Block size: 4 MB
> Operation: Sequential write
> Replication factor: 1
> Direct disk performance: 245 MB/s
> Ceph controlled disk performance: 80 MB/s

You are comparing sequential I/O against random. You should consider that
Ceph writes to the drive in a 'random manner'.
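If you want to see what that pattern costs on your drive, here is a rough sketch (the device path is a placeholder, and the script overwrites whatever it points at, so only use a scratch disk):

import os, random, time

DEV = "/dev/sdX"                  # placeholder scratch device, all data on it is destroyed
BLOCK = b"\0" * (4 * 1024 * 1024) # 4 MB blocks, as in your test
REGION = 10 * 1024**3             # spread random writes across 10 GiB
COUNT = 128

def run(randomize):
    fd = os.open(DEV, os.O_WRONLY | os.O_SYNC)   # O_SYNC so the cache does not hide the cost
    if randomize:
        offsets = [random.randrange(REGION // len(BLOCK)) * len(BLOCK) for _ in range(COUNT)]
    else:
        offsets = [i * len(BLOCK) for i in range(COUNT)]
    t0 = time.monotonic()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, BLOCK)
    elapsed = time.monotonic() - t0
    os.close(fd)
    return COUNT * len(BLOCK) / elapsed / 1e6    # MB/s

print("sequential: %7.1f MB/s" % run(False))
print("random    : %7.1f MB/s" % run(True))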

> Sorry for asking this dummy question as I know there are numerous
> parameters affecting the performance.

Yes, this is written about everywhere on this list.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


