Dear all,

Some users are noticing low performance, especially when formatting large volumes (around 100 GB). The system appears to be healthy, and no errors are detected in the logs:

[root@cephmon01 ~]# ceph health detail
HEALTH_OK

except for this one, which I see repeatedly on one of the OSD servers:

................
Apr 7 18:06:29 cephosd12 ceph-crash[7316]: WARNING:__main__:post /var/lib/ceph/crash/2021-04-22_10:29:50.768989Z_1fb193dd-b492-446b-8064-b579cfe01196 failed: Error ENOTSUP: Module 'crash' is not enabled (required by command 'crash post'): use `ceph mgr module enable crash` to enable it
................

The version of Ceph we have in production is:

[root@cephmon01 ~]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

All servers are connected at 10 Gbps, and the cluster currently has 144 SATA disks spread across 15 servers.

I just created a volume within the pool where the user detected the low performance and ran a write benchmark against it:

[root@cephmon01 ~]# rbd bench-write image01_to_remove --pool=volumes
rbd: bench-write is deprecated, use rbd bench --io-type write ...
bench  type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     10944  10960.07  44892462.35
......
elapsed:    28  ops:   262144  ops/sec:  9143.56  bytes/sec:  37452007.49

That works out to roughly 37 MB/s of 4 KiB sequential writes (9143.56 ops/sec x 4096 bytes), which does seem low for this hardware.

Any idea what could be happening and how to debug this?

Regards,
I

--
=====================================================
Ibán Cabrillo Bartolomé
Instituto de Fisica de Cantabria (IFCA-CSIC)
Santander, Spain
Tel: +34942200969 / +34669930421
Head of the Advanced Computing Service
======================================================
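P.S. Two asides, neither of which I have verified on this cluster yet. First, the recurring ceph-crash warning looks unrelated to the performance problem: the error text itself says the mgr crash module is simply not enabled, so presumably it can be resolved from an admin node with:

[root@cephmon01 ~]# ceph mgr module enable crash

Second, since bench-write is deprecated, the equivalent invocation of the newer command (same image, pool, and default options) should be:

[root@cephmon01 ~]# rbd bench --io-type write image01_to_remove --pool=volumes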