Hi Jim,

Did you check system stats (e.g. iostat, top) on both OSDs while the osd bench was running? Those might give you some clues. Also, did you compare both OSDs' configurations? A rough sketch of the commands I would run is at the bottom of this mail, below the quoted message.

------------------ Original ------------------
From: "Jim Forde" <jimf@xxxxxxxxx>
Date: Thu, Aug 6, 2020 06:51 AM
To: "ceph-users" <ceph-users@xxxxxxx>
Subject: Nautilus slow using "ceph tell osd.* bench"

I have 2 clusters.

Cluster 1 started at Hammer and has been upgraded through the versions all the way to Nautilus 14.2.10 (Luminous to Nautilus in July 2020).
Cluster 2 started as Luminous and is now on Nautilus 14.2.2 (upgraded in September 2019).

The clusters are basically identical: 5 OSD nodes with 6 OSDs per node. Both use spinning disk drives, no SSDs.

Prior to upgrading Cluster 1, running "ceph tell osd.0 bench -f plain" produced similar results on both clusters:

ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 0.954819 sec at 1.0 GiB/sec 268 IOPS

Now Cluster 1's results are terrible, roughly 25% of what they were before the upgrade:

ceph tell osd.0 bench -f plain
bench: wrote 1 GiB in blocks of 4 MiB in 4.03434 sec at 254 MiB/sec 63 IOPS

"ceph -s" shows HEALTH_OK and the dashboard looks good. There are 2 pools.

MON dump:
min_mon_release 14 (nautilus)

OSD dump:
require_min_compat_client luminous
min_compat_client jewel
require_osd_release nautilus

Not sure what is causing the slow performance. Ideas?
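
For reference, a rough sketch of what I would run, assuming shell access to the OSD hosts and a working admin socket; osd.0 is just an example, substitute your own IDs:

# In a second terminal on the OSD host while the bench runs:
iostat -x 1    # watch %util, await and write throughput on the backing disk
top            # check whether the osd process (or something else) is pinning a CPU

# Compare the effective OSD configuration between the two clusters:
ceph config show osd.0          # running configuration of osd.0
ceph daemon osd.0 config diff   # settings that differ from the defaults (run on the host where osd.0 lives)

# Sanity-check that both OSDs report the same objectstore type, device class, etc.:
ceph osd metadata 0

Diffing that output between the two clusters often shows whether the upgrade, or a leftover override in ceph.conf, changed something relevant.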