Re: RBD Performance issue

Hi Vignesh,

So, a few questions:

How many PGs have you got configured for the Ceph pool that you are testing against?
If this number is not large enough, the workload may only be hitting a subset of the OSDs available to it.
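If it helps, you can check the current value and what the PG autoscaler recommends with something like this (substitute your actual pool name for 'rbd'):

    ceph osd pool get rbd pg_num        # current PG count for the pool
    ceph osd pool autoscale-status      # autoscaler's recommended PG counts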

Have you tried the same benchmark with the RBD mirroring disabled?
Trying to see if the extra mirror writes are slowing things down.
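Depending on how your mirroring is configured, something along these lines should let you benchmark an unmirrored image (the pool and image names here are just placeholders):

    rbd mirror image disable rbd/test-image              # stop mirroring the test image
    rbd mirror image enable rbd/test-image snapshot      # re-enable afterwards ('journal' if that is your mode)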

Which mirroring mode are you using, snapshot-based or journal-based?
This can generate a lot of extra disk traffic, hence the suggestion above to try the benchmark without mirroring.
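You can confirm which mode is in use with something like this (pool and image names are placeholders; recent Ceph releases report the per-image mode in rbd info):

    rbd mirror pool info rbd          # pool-level mirroring configuration and peers
    rbd info rbd/test-image           # includes 'mirroring mode: journal' or 'mirroring mode: snapshot'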

Can you run one benchmark with 100% reads and another with 100% writes to see what the differences are between them?
Trying to see if reads or writes could be slowing the other one down.
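Based on the fio command in your message below, the two runs could look like this, changing only --rw and --name and dropping --rwmixread (it only applies to mixed workloads):

    fio --ioengine=libaio --direct=1 --randrepeat=1 --refill_buffers --end_fsync=1 \
        --filename=/root/ceph-rbd --name=pureread --size=1024m --bs=4k \
        --rw=read --iodepth=32 --numjobs=16 --group_reporting

    fio --ioengine=libaio --direct=1 --randrepeat=1 --refill_buffers --end_fsync=1 \
        --filename=/root/ceph-rbd --name=purewrite --size=1024m --bs=4k \
        --rw=write --iodepth=32 --numjobs=16 --group_reporting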

How have you configured data protection: erasure coding or replication?
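A quick way to check:

    ceph osd pool ls detail     # shows 'replicated size N' or an erasure-code profile for each pool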

Is the machine you are running the benchmark on a CloudStack virtual machine or a bare-metal physical machine?
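If you are not sure, on most Linux distributions this will tell you (it prints 'none' on bare metal):

    systemd-detect-virt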

What is the network connectivity of the machine you are benchmarking on?
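Bear in mind that with LACP a single TCP stream will only ever use one of the bonded 25 Gbps links. A multi-stream iperf3 test between the benchmark machine and one of the Ceph nodes would show what the network can actually deliver (the hostname is a placeholder):

    iperf3 -s                      # on one of the Ceph nodes
    iperf3 -c ceph-node1 -P 8      # from the benchmark machine, 8 parallel streams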






> On 17 Feb 2025, at 05:38, vignesh varma <vignesh.varma.g@xxxxxxxxxxxxx> wrote:
> 
> Hi Team,
> 
> I have set up two Ceph clusters, with 3 nodes each, and a two-way RBD mirror between them. In this setup, Ceph 1 is configured as a two-way mirror to Ceph 2, and vice versa. The RBD pools are integrated with CloudStack.
> 
> The Ceph cluster uses NVMe drives, but I am experiencing very low IOPS performance. I have provided the relevant details below. Could you please guide me on how to optimize the setup to achieve higher IOPS?
> 
> fio --ioengine=libaio --direct=1 --randrepeat=1 --refill_buffers --end_fsync=1 --rwmixread=70 --filename=/root/ceph-rbd --name=write --size=1024m --bs=4k --rw=readwrite --iodepth=32 --numjobs=16 --group_reporting
> 
>  read: IOPS=92.5k, BW=361MiB/s (379MB/s)(11.2GiB/31718msec)
>  write: IOPS=39.7k, BW=155MiB/s (163MB/s)(4922MiB/31718msec); 0 zone resets
> 
> Hardware Specifications:
> 
> - CPU: Intel(R) Xeon(R) Gold 5416S
> - RAM: 125 GB
> - Storage: 8 x 7 TB NVMe disks (Model: UP2A67T6SD004LX)
> [Drive specifications](https://www.techpowerup.com/ssd-specs/union-memory-uh711a-7-5-tb.d1802)
> - Network: 4 x 25 Gbps interfaces configured with LACP bonding
> 
> Each server in the setup is equipped with the above configuration.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



