Re: Experience with 100G Ceph in Proxmox

> Then I tested on the Proxmox host, and the results were significantly better.

My Proxmox prowess is limited, but from my experience with other virtualization platforms I have to ask whether there is any QoS throttling applied to the VMs. With OpenStack or DO there is often IOPS and/or throughput throttling via libvirt to mitigate noisy neighbors.
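
For what it's worth, here is a minimal sketch of how to check for that on Proxmox (the VM ID 100 below is a placeholder; adjust to your guest). Any iops_rd / iops_wr / mbps_rd / mbps_wr values on a disk line mean per-disk throttling is in effect:

  qm config 100 | grep -E 'iops|mbps'         # Proxmox per-disk throttle options
  virsh dumpxml <domain> | grep -A4 iotune    # plain libvirt equivalent: <iotune> elements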

> 
>  fio --name=host-test --filename=/dev/rbd0 --ioengine=libaio --rw=randread --bs=4k --numjobs=4 --iodepth=32 --size=1G --runtime=60 --group_reporting
> 
> IOPS: 1.54M
> Bandwidth: 6032MiB/s (6325MB/s)
> Latency:
>   Avg: 39.8µs
>   99.9th percentile: 71µs
> CPU Usage: usr=22.60%, sys=77.13%
> 
> On 18.03.2025 at 15:27, Anthony D'Atri wrote:
>> Which NVMe drive SKUs specifically?
> 
> /dev/nvme6n1 – KCD61LUL15T3 – 15.36 TB – SN: 6250A02QT5A8
> /dev/nvme5n1 – KCD61LUL15T3 – 15.36 TB – SN: 42R0A036T5A8
> /dev/nvme4n1 – KCD61LUL15T3 – 15.36 TB – SN: 6250A02UT5A8

Kioxia CD6.  If you were using client-class drives, all manner of performance issues would be expected.

Is your server chassis at least PCIe Gen 4?  If it’s Gen 3 that may hamper these drives.
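
A quick way to verify, as a sketch (take the PCI address from the first command; it's a placeholder here):

  lspci | grep -i 'non-volatile'                        # list the NVMe controllers
  lspci -vv -s <pci-address> | grep -E 'LnkCap|LnkSta'  # 16GT/s = Gen 4, 8GT/s = Gen 3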

Also, how many of these are in your cluster?  If it’s a small number you might still benefit from chopping each into at least 2 separate OSDs.
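
For reference, a rough sketch of how that is usually done. Note that ceph-volume batch wipes the device, so only do this on drives you are redeploying:

  ceph-volume lvm batch --osds-per-device 2 /dev/nvme6n1   # two OSDs on one device
  # or, with cephadm, set osds_per_device: 2 in the OSD service spec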

And please send the output of `ceph osd dump | grep pool`; having too few PGs wouldn't do you any favors.
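
Something along these lines is what I'd look at, as a sketch:

  ceph osd dump | grep pool          # pg_num / pgp_num per pool
  ceph osd pool autoscale-status     # what the autoscaler thinks pg_num should be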


>> Are you running a recent kernel?
> penultimate: 6.8.12-8-pve (VM, yes)

Groovy.  If you were running, say, a CentOS 6 or CentOS 7 kernel, then NVMe issues might be expected, as old kernels had rudimentary NVMe support.

>>  Have you updated firmware on the NVMe devices?
> 
> No.

Kioxia does not appear to release firmware updates publicly, but your chassis brand (Dell, HP, SMCI, etc.) might have an update,
e.g. https://www.dell.com/support/home/en-vc/drivers/driversdetails?driverid=7ny55

If there is an available update, I would strongly suggest applying it.
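
In the meantime you can at least confirm what firmware the drives currently report, e.g. with nvme-cli (a sketch):

  nvme list                                  # FW Rev column per drive
  nvme id-ctrl /dev/nvme6 | grep -i '^fr '   # firmware revision for one controller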

> 
> Thanks again,
> 
> best regards,
> Gio
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



