Re: RBD Performance issue

You mention RBD, but you give fio a filename. Are you writing to a file on a filesystem on an RBD volume? Are you testing from a VM? From one of the cluster nodes? Via a KRBD mount?

Do you get better results with the volume unattached, benchmarking it directly with the librbd ioengine?
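For a baseline, something along these lines (a sketch only; the pool, image, and client names are placeholders you would need to substitute) exercises the image through librbd with no VM, filesystem, or page cache in the path:

fio --ioengine=rbd --clientname=admin --pool=<pool> --rbdname=<test-image> \
    --direct=1 --bs=4k --rw=randwrite --iodepth=32 --numjobs=4 --size=10G \
    --name=librbd-test --group_reporting

Comparing that against the numbers from inside the guest tells you whether the bottleneck is the cluster itself or the virtualization/filesystem layers above it.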

What does the rbd mirror status for this volume look like? Is it far behind? Which rbd-mirror mode are you using, journal-based or snapshot-based?
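Something like the following (substitute your pool and image names) should show the mirroring mode and whether the peer is keeping up:

rbd mirror pool info <pool>
rbd mirror pool status <pool> --verbose
rbd mirror image status <pool>/<image>

Journal-based mirroring journals every write before it is applied, which on its own can cost a significant amount of small-block write IOPS, so the mode and the lag are worth checking before tuning anything else.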

Send `ceph osd dump | grep pool` and `ceph -s` for both clusters.


> On Feb 17, 2025, at 12:38 AM, vignesh varma <vignesh.varma.g@xxxxxxxxxxxxx> wrote:
> 
> Hi Team,
> 
> I have set up two Ceph clusters, three nodes each, with two-way RBD mirroring: Ceph 1 is mirrored to Ceph 2 and vice versa. The RBD pools are integrated with CloudStack.
> 
> Both clusters use NVMe drives, but I am seeing very low IOPS. I have provided the relevant details below. Could you please guide me on how to optimize the setup to achieve higher IOPS?
> 
> fio --ioengine=libaio --direct=1 --randrepeat=1 --refill_buffers --end_fsync=1 --rwmixread=70 --filename=/root/ceph-rbd --name=write --size=1024m --bs=4k --rw=readwrite --iodepth=32 --numjobs=16 --group_reporting
> 
>  read: IOPS=92.5k, BW=361MiB/s (379MB/s)(11.2GiB/31718msec)
>  write: IOPS=39.7k, BW=155MiB/s (163MB/s)(4922MiB/31718msec); 0 zone resets
> 
> Hardware Specifications:
> 
> - CPU: Intel(R) Xeon(R) Gold 5416S
> - RAM: 125 GB
> - Storage: 8 x 7 TB NVMe disks (Model: UP2A67T6SD004LX)
> [Drive specifications](https://www.techpowerup.com/ssd-specs/union-memory-uh711a-7-5-tb.d1802)
> - Network: 4 x 25 Gbps interfaces configured with LACP bonding
> 
> Each server in the setup is equipped with the above configuration.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



