On Tue, May 23, 2017 at 4:27 PM, James Wilkins <James.Wilkins@xxxxxxxxxxxxx> wrote:
> Thanks :-)
>
> If we are seeing this rise unnaturally high (e.g. >140K, which corresponds
> with slow access to CephFS), do you have any recommendations of where we
> should be looking? Is this related to the messenger service and its
> dispatch/throttle bytes?

That is a super long queue. I would be looking at the other counters to see
what's ticking upwards quickly (i.e. what is being maxed out and thereby
presumably causing a backlog).

John

> -----Original Message-----
> From: John Spray [mailto:jspray@xxxxxxxxxx]
> Sent: 23 May 2017 13:51
> To: James Wilkins <James.Wilkins@xxxxxxxxxxxxx>
> Cc: Users, Ceph <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: MDS Question
>
> On Tue, May 23, 2017 at 1:42 PM, James Wilkins <James.Wilkins@xxxxxxxxxxxxx> wrote:
>> Quick question on CephFS/MDS, but I can't find this documented
>> (apologies if it is):
>>
>> What does the "q" in a ceph daemon <socket> perf dump mds output
>> represent?
>
> mds]$ git grep "\"q\""
> MDSRank.cc:    mds_plb.add_u64(l_mds_dispatch_queue_len, "q",
>     "Dispatch queue length");
>
> That's a quirky bit of naming for sure!
>
> John
>
>> [root@hp3-ceph-mds2 ~]# ceph daemon /var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok perf dump mds
>> {
>>     "mds": {
>>         "request": 10843133,
>>         "reply": 10842472,
>>         "reply_latency": {
>>             "avgcount": 10842472,
>>             "sum": 2678925.337447889
>>         },
>>         "forward": 0,
>>         "dir_fetch": 412972,
>>         "dir_commit": 683903,
>>         "dir_split": 0,
>>         "dir_merge": 0,
>>         "inode_max": 7000000,
>>         "inodes": 7000209,
>>         "inodes_top": 808282,
>>         "inodes_bottom": 6191218,
>>         "inodes_pin_tail": 709,
>>         "inodes_pinned": 2055258,
>>         "inodes_expired": 2276343,
>>         "inodes_with_caps": 1905570,
>>         "caps": 2392113,
>>         "subtrees": 2,
>>         "traverse": 12551065,
>>         "traverse_hit": 10346763,
>>         "traverse_forward": 0,
>>         "traverse_discover": 0,
>>         "traverse_dir_fetch": 312666,
>>         "traverse_remote_ino": 0,
>>         "traverse_lock": 41125,
>>         "load_cent": 1090788840,
>>         "q": 4371,
>>         "exported": 0,
>>         "exported_inodes": 0,
>>         "imported": 0,
>>         "imported_inodes": 0
>>     }
>> }

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
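For anyone hitting the same symptom, a minimal sketch of watching the "q"
counter (the MDS dispatch queue length) over time. The asok path is the one
from the thread above; the script itself is hypothetical, not part of Ceph,
and assumes the ceph CLI is available on the MDS host:

    #!/usr/bin/env python3
    # watch_q.py - sample the MDS "q" (dispatch queue length) once a
    # second via the admin socket and print it with a timestamp.
    import json
    import subprocess
    import time

    # asok path taken from the thread; adjust for your own MDS
    ASOK = "/var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok"

    while True:
        out = subprocess.check_output(
            ["ceph", "daemon", ASOK, "perf", "dump", "mds"])
        print(time.strftime("%H:%M:%S"), json.loads(out.decode())["mds"]["q"])
        time.sleep(1)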
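To act on John's suggestion of finding which counters are ticking upwards
quickly, a sketch along the same lines (same assumptions, equally
hypothetical) that takes two samples and ranks the deltas:

    #!/usr/bin/env python3
    # perf_delta.py - take two samples of "perf dump mds" a few seconds
    # apart and print the counters that grew, largest delta first.
    import json
    import subprocess
    import time

    ASOK = "/var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok"
    INTERVAL = 10  # seconds between the two samples

    def sample():
        out = subprocess.check_output(
            ["ceph", "daemon", ASOK, "perf", "dump", "mds"])
        # keep only flat numeric counters; skip nested ones like reply_latency
        return {k: v for k, v in json.loads(out.decode())["mds"].items()
                if isinstance(v, (int, float))}

    first = sample()
    time.sleep(INTERVAL)
    second = sample()

    for name, delta in sorted(
            ((k, second[k] - first[k]) for k in second if k in first),
            key=lambda kv: kv[1], reverse=True):
        if delta > 0:
            print("%24s: +%s" % (name, delta))

Whichever counter grows fastest relative to its normal rate is the first
place to look for the backlog John describes.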