Re: Experience with 100G Ceph in Proxmox

Hello again,

I tried running tests with *--iodepth=16* and *32*.
The results got even worse.

# *IOPS*: *8.7k*
# *Bandwidth*: *34.1MiB/s (35.7MB/s)*
# *Latency*:

 * *Avg*: *7.3ms*
 * *99.9th percentile*: *15.8ms*

# *CPU Usage*: *usr=0.74%, sys=5.60%*
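
For anyone wanting to reproduce: a minimal in-guest fio sketch, assuming the data disk shows up as /dev/sdb inside the VM and mirroring the parameters of the host test further down (only the iodepth values above are from the actual runs):

 # Sketch of the in-VM test; /dev/sdb (the guest's scsi1 data disk) is an assumption
 fio --name=vm-test --filename=/dev/sdb --ioengine=libaio --rw=randread \
     --bs=4k --numjobs=4 --iodepth=16 --size=1G --runtime=60 --group_reporting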


The problem seems to be only inside the VMs.

VM disk configuration:
scsi0: cephvm:vm-6506-disk-1,cache=writeback,iothread=1,size=64G,ssd=1
scsi1: cephvm:vm-6506-disk-2,cache=writeback,iothread=1,size=10T,ssd=1

I also tried without cache and without iothread; there was no change.
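
For completeness, those changes can be made with qm set (a sketch, assuming VMID 6506 and the scsi1 spec from above):

 # Disable writeback cache and the iothread on scsi1 (VMID 6506 assumed from the config above)
 qm set 6506 --scsi1 cephvm:vm-6506-disk-2,cache=none,iothread=0,size=10T,ssd=1
 # Revert to the original settings afterwards
 qm set 6506 --scsi1 cephvm:vm-6506-disk-2,cache=writeback,iothread=1,size=10T,ssd=1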

Then I tested on the *Proxmox host*, and the results were significantly better.

 fio --name=host-test --filename=/dev/rbd0 --ioengine=libaio --rw=randread --bs=4k --numjobs=4 --iodepth=32 --size=1G --runtime=60 --group_reporting

# *IOPS*: *1.54M*

# *Bandwidth*: *6032MiB/s (6325MB/s)*
# *Latency*:

 * *Avg*: *39.8µs*
 * *99.9th percentile*: *71µs*

# *CPU Usage*: *usr=22.60%, sys=77.13%*
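
One caveat when comparing the two numbers: the host test goes through the kernel RBD client (/dev/rbd0), while the VM disks normally go through librbd in QEMU unless krbd is enabled on the storage. A sketch for exercising the librbd path directly from the host (pool, image and client names are placeholders):

 # Sketch: drive the librbd path from the host with fio's rbd engine (fio built with rbd support).
 # Pool/image/client names are assumptions -- substitute the ones backing the VM disk.
 fio --name=librbd-test --ioengine=rbd --clientname=admin --pool=cephvm \
     --rbdname=vm-6506-disk-2 --rw=randread --bs=4k --numjobs=1 --iodepth=32 \
     --size=1G --runtime=60 --group_reporting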

On 18.03.2025 at 15:27, Anthony D'Atri wrote:
Which NVMe drive SKUs specifically?

# */dev/nvme6n1* – *KCD61LUL15T3* – 15.36 TB – SN: 6250A02QT5A8
# */dev/nvme5n1* – *KCD61LUL15T3* – 15.36 TB – SN: 42R0A036T5A8
# */dev/nvme4n1* – *KCD61LUL15T3* – 15.36 TB – SN: 6250A02UT5A8
Are you running a recent kernel?
The penultimate release: 6.8.12-8-pve (yes, in the VM as well)
  Have you updated firmware on the NVMe devices?

No.
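
For the record, the installed firmware revision can be checked with nvme-cli, e.g.:

 # Model, serial and firmware revision of each NVMe drive (nvme-cli)
 nvme list
 # Firmware slot log for a single drive, e.g. the first KCD61LUL15T3 above
 nvme fw-log /dev/nvme4n1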

Thanks again,

best regards,
Gio

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



