I would start by checking "ceph status", drive I/O with "iostat -x 1 /dev/sd{a..z}", and the CPU/RAM usage of the active MDS. If "ceph status" warns that the MDS cache is oversized, that may be an easy fix.
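As a rough sketch, those checks might look like the following on a monitor/MDS host. The device range and the MDS daemon id are placeholders — adjust them to your cluster:

```shell
# First-pass CephFS latency triage (commands run against a live cluster;
# device names and the MDS daemon id below are placeholders).

ceph status                      # overall health; watch for MDS cache warnings
ceph health detail               # expands any WARN, e.g. an oversized MDS cache

iostat -x 1 /dev/sd{a..z}        # per-drive utilization and await times on OSD hosts

# CPU/RAM of the running MDS process:
top -b -n 1 -p "$(pgrep -d, ceph-mds)"

# MDS-side performance counters via the admin socket (<id> is a placeholder):
ceph daemon mds.<id> perf dump
```

High "await" in the iostat output points at slow OSD drives, while a pegged ceph-mds process or cache warnings point at the metadata side.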
On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover <renjianxinlover@xxxxxxx> wrote:
hello,
recently, after deleting some fs data in a small-scale Ceph cluster, some clients' IO performance became bad, especially latency. For example, opening a tiny text file with vim can take nearly twenty seconds. I am not clear on how to diagnose the cause — could anyone give some guidance?
Brs
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com