On Fri, Sep 13, 2019 at 7:09 AM thoralf schulze <t.schulze@xxxxxxxxxxxx> wrote:
>
> hi there,
>
> while debugging metadata servers reporting slow requests, we took a stab
> at pinning directories of a cephfs like so:
>
> setfattr -n ceph.dir.pin -v 1 /tubfs/kubernetes/
> setfattr -n ceph.dir.pin -v 0 /tubfs/profiles/
> setfattr -n ceph.dir.pin -v 0 /tubfs/homes
>
> on the active mds for rank 0, we can see all the pins, as expected:
>
> ceph daemon /var/run/[rank0].asok get subtrees | jq -c \
> '.[]|select(.dir.path|contains("/"))|[.dir.path, .export_pin, .auth_first]'
> ["/kubernetes",1,1]
> ["/homes",0,0]
> ["/profiles",0,0]
>
> while the active mds for rank 1 reports back only its own pins:
>
> ceph daemon /var/run/[rank1].asok get subtrees | jq -c \
> '.[]|select(.dir.path|contains("/"))|[.dir.path, .export_pin, .auth_first]'
> ["/kubernetes",1,1]
> ["/.ctdb",-1,1]
>
> is this to be expected? anecdotal data indicates that the pinning does
> work as intended.

Each MDS rank can only see the subtrees that border the ones it is
authoritative for. Therefore, you need to gather the subtrees from all
ranks and merge them to see the entire distribution.

This could be made simpler by showing this information in the upcoming
`ceph fs top` display. I've created a tracker ticket:
https://tracker.ceph.com/issues/41824

--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
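
A minimal, untested sketch of that gather-and-merge step, assuming the
admin sockets for the active ranks live under /var/run/ceph/ on the
local host (the glob and socket naming are deployment-specific; run the
loop on each MDS host if the daemons are spread across machines):

# query every local MDS admin socket for its subtree map, concatenate
# the resulting JSON arrays, keep the same non-root filter as above,
# and de-duplicate subtrees reported by more than one rank
# (the asok glob below is an assumption; adjust it to your deployment)
for sock in /var/run/ceph/ceph-mds.*.asok; do
    ceph daemon "$sock" get subtrees
done | jq -s -c 'add | .[]
    | select(.dir.path | contains("/"))
    | [.dir.path, .export_pin, .auth_first]' | sort -u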