On CentOS 7 systems with the CephFS kernel client, a `nearfull` status on the data pool causes only a slight reduction in write speeds (possibly 20-50% fewer IOPS). On a similar Rocky 8 system with the CephFS kernel client, the same kind of test of write speeds at different block sizes shows IOPS bottlenecked below 150 during the `nearfull` condition, versus the typical 20000-30000 IOPS at a given block size. (A rough sketch of the kind of per-block-size write test I have in mind is appended after my signature.)

Is there any way to avoid the extremely bottlenecked IOPS seen on the Rocky 8 CephFS kernel clients during the `nearfull` condition, or to get behavior closer to that of the CentOS 7 CephFS clients? Do different OSes or Linux kernels respond to or limit IOPS in greatly different ways during `nearfull`? Are there any options to adjust how they limit IOPS?

Thanks,
  Matt

--
Matt Larson, PhD
Madison, WI 53705
U.S.A.
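
P.S. For anyone wanting to reproduce the comparison, below is a minimal sketch of a per-block-size write probe. It is not necessarily the exact tool or settings behind the numbers above; it assumes a hypothetical CephFS kernel mount at /mnt/cephfs and uses O_DIRECT writes so the page cache does not hide the client's write path during `nearfull`.

```python
#!/usr/bin/env python3
# Rough write-IOPS probe at several block sizes on a CephFS kernel mount.
# Assumptions: mount point /mnt/cephfs (adjust to your system), O_DIRECT
# writes, and a short fixed sample window per block size.
import mmap
import os
import time

MOUNTPOINT = "/mnt/cephfs"            # assumption: your CephFS kernel mount
TEST_FILE = os.path.join(MOUNTPOINT, "iops_probe.dat")
BLOCK_SIZES = [4096, 65536, 1 << 20]  # 4 KiB, 64 KiB, 1 MiB
SECONDS_PER_SIZE = 5                  # sample window per block size
FILE_BLOCKS = 1024                    # wrap offsets so the file stays bounded

def probe(block_size: int) -> float:
    """Return measured write IOPS for one block size using O_DIRECT writes."""
    # O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, block_size)
    buf.write(b"\xab" * block_size)
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    ops = 0
    start = time.monotonic()
    try:
        while time.monotonic() - start < SECONDS_PER_SIZE:
            # Sequential writes, wrapping within the first FILE_BLOCKS blocks.
            offset = (ops % FILE_BLOCKS) * block_size
            os.pwrite(fd, buf, offset)
            ops += 1
    finally:
        os.close(fd)
        buf.close()
    return ops / (time.monotonic() - start)

if __name__ == "__main__":
    for bs in BLOCK_SIZES:
        print(f"block size {bs:>8} B: {probe(bs):10.1f} write IOPS")
    os.unlink(TEST_FILE)
```

Running it once while the pool is healthy and again while it is `nearfull` should make the difference between the CentOS 7 and Rocky 8 clients easy to compare side by side.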