---------- Forwarded message ---------
From: opengers <zijian1012@xxxxxxxxx>
Date: Tue, Jun 22, 2021 at 11:12 AM
Subject: Re: In "ceph health detail", what's the diff between MDS_SLOW_METADATA_IO and MDS_SLOW_REQUEST?
To: Patrick Donnelly <pdonnell@xxxxxxxxxx>

Thanks for the answer. I am still confused by the explanation of "MDS_SLOW_REQUEST" in the documentation, quoted below:

------
MDS_SLOW_REQUEST

Message
    "N slow requests are blocked"

Description
    One or more client requests have not been completed promptly,
    indicating that the MDS is either running very slowly, or that the
    RADOS cluster is not acknowledging journal writes promptly, or that
    there is a bug. Use the ops admin socket command to list outstanding
    metadata operations. This message appears if any client requests
    have taken longer than mds_op_complaint_time (default 30s).

FROM: https://docs.ceph.com/en/latest/cephfs/health-messages/
------

Given "or that the RADOS cluster is not acknowledging journal writes promptly", it sounds as though "MDS_SLOW_REQUEST" also covers OSD operations issued by the MDS?

On Tue, Jun 22, 2021 at 3:23 AM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
> Hello,
>
> On Mon, Jun 21, 2021 at 8:54 AM opengers <zijian1012@xxxxxxxxx> wrote:
> >
> > $ ceph health detail
> > HEALTH_WARN 1 MDSs report slow metadata IOs; 1 MDSs report slow requests
> > MDS_SLOW_METADATA_IO 1 MDSs report slow metadata IOs
> >     mds.fs-01(mds.0): 3 slow metadata IOs are blocked > 30 secs,
> >     oldest blocked for 51123 secs
> > MDS_SLOW_REQUEST 1 MDSs report slow requests
>
> MDS_SLOW_REQUEST: RPCs from the client to the MDS are "slow", i.e. they
> do not complete within 30 seconds.
> MDS_SLOW_METADATA_IO: OSD operations issued by the MDS have not
> completed after 30 seconds.
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Principal Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
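
The "ops" admin socket command referenced in the documentation lists the client requests the MDS currently has in flight, which is what MDS_SLOW_REQUEST counts. A minimal sketch, assuming the daemon name mds.fs-01 from the health output above and that the command is run on the host where that MDS daemon runs:

    # List outstanding client metadata requests on the MDS; each entry
    # includes the op's description, its age, and its current state.
    $ ceph daemon mds.fs-01 ops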
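By contrast, MDS_SLOW_METADATA_IO counts the MDS's own RADOS operations. Those can be inspected with the objecter_requests admin socket command; same assumption about the daemon name and host as above:

    # Dump RADOS operations the MDS has submitted to OSDs but not yet
    # seen acknowledged (e.g. journal writes to the metadata pool).
    $ ceph daemon mds.fs-01 objecter_requests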