Re: Direct disk/Ceph performance

And here is the disk information on which we base our testing:
HPE EG1200FDJYT 1.2TB 10kRPM 2.5in SAS-6G Enterprise
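For reference, the two throughput figures quoted below can be reproduced with commands along these lines. This is a minimal sketch, not the exact invocation used: /dev/sdX and the pool name "testpool" are placeholders, and on recent releases setting a pool to size 1 may additionally require enabling mon_allow_pool_size_one.

```shell
# Direct-disk sequential write, 4 MB blocks, O_DIRECT.
# WARNING: writing to /dev/sdX directly is destructive; /dev/sdX is a placeholder.
fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=4M \
    --direct=1 --iodepth=1 --numjobs=1 --runtime=60 --time_based

# Single-OSD test pool with replication factor 1 ("testpool" is a placeholder).
ceph osd pool create testpool 128 128
ceph osd pool set testpool size 1 --yes-i-really-mean-it

# Ceph-side sequential write benchmark with 4 MB objects for 60 seconds.
rados bench -p testpool 60 write -b 4M --no-cleanup
```

Comparing the fio result against the rados bench summary line gives the direct-disk vs. Ceph-controlled numbers discussed below.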

On Sun, Jan 16, 2022 at 11:23 AM Behzad Khoshbakhti <khoshbakhtib@xxxxxxxxx>
wrote:

> Hi all,
>
> We are investigating single-disk performance: we see a performance
> degradation when the disk is controlled via Ceph.
>
> Problem description:
> We have been measuring Ceph write performance and found that data
> written through Ceph does not use the disk's full potential.
>
> Detailed (somewhat) problem description:
> Disk size: 1.2 TB
> Ceph version: Pacific
> Block size: 4 MB
> Operation: Sequential write
> Replication factor: 1
> Direct disk performance: 245 MB/s
> Ceph controlled disk performance: 80 MB/s
>
> It is worth mentioning that the Ceph pool is configured to use a single
> OSD with a replication factor of 1.
> As for networking, the Ceph client is connected to the Ceph OSD through a
> 10 Gbps network adapter on the same switch, with under 1 ms of latency.
>
> Sorry for the naive question; I know there are numerous parameters
> affecting performance. I wonder if you could help us get this issue
> resolved.
>
> Regards,
>  Behzad
>


-- 

Regards
 Behzad Khoshbakhti
 Computer Network Engineer (CCIE #58887)
 +48789397639
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


