On Fri, Sep 25, 2015 at 5:53 PM, Межов Игорь Александрович <megov@xxxxxxxxxx> wrote:
> Hi!
>
> Thanks!
>
> I have some suggestions for the 1st method:
>
>> You could get the name prefix for each RBD from rbd info,
>
> Yes, I did that already in steps 1 and 2. I forgot to mention that I
> grab the rbd prefix from the 'rbd info' command.
>
>> then list all objects (run find on the osds?) and then you just need
>> to grep the OSDs for each prefix.
>
> So, you advise running find over ssh on all OSD hosts to traverse the
> OSD filesystems and locate the files (objects) named with the rbd
> prefix? Am I right? If so, I have two thoughts: (1) it may not be fast
> either, because even when limited to the rbd prefix and pool index,
> find still has to recursively walk the whole OSD filesystem hierarchy;
> and (2) find will put additional load on the OSD drives.
>
> The second method is more attractive and I will try it soon. As we
> have an object name, and can get a crushmap in some usable form to
> examine ourselves, or indirectly through a library/API call, finding
> the object-to-PG-to-OSDs chain is a purely local computation, and it
> can be done without remote calls (accessing OSD hosts, running find,
> and so on).
>
> Also, the slowness of looping over 'ceph osd map <pool> <object>' is
> easy to explain: for every object we have to spawn a process, connect
> to the cluster (with auth), fetch the maps to the client, calculate
> the placement, and then throw it all away when the process exits. I
> think this overhead is the main reason for the slowness.

Internally there is a way to list objects within a specific PG
(actually more than one way, IIRC), but I don't think anything like
that is exposed in a CLI (it might be exposed in librados though).

Grabbing an osdmap and iterating with osdmaptool --test-map-object over
rbd_data.<prefix>.* is probably the fastest way for you to get what you
want; see the sketches below.

Thanks,

                Ilya
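A minimal sketch of the find-over-ssh variant discussed above, assuming
filestore OSDs, hypothetical host names, and the default
/var/lib/ceph/osd layout (filestore escapes '_' in on-disk object file
names, so rbd_data.<prefix> objects usually appear on disk as
rbd\udata.<prefix>.*; verify the exact pattern on one OSD before
trusting the results):

    #!/bin/sh
    # PREFIX is the id portion of block_name_prefix from 'rbd info'
    # (the part after "rbd_data.").  2ae8944a is a made-up example.
    PREFIX=2ae8944a
    for host in osd-host1 osd-host2 osd-host3; do
        # '?' in the find pattern matches the escaped '\' in the
        # on-disk file name (rbd\udata...).
        ssh "$host" \
            "find /var/lib/ceph/osd/ceph-*/current -name 'rbd?udata.${PREFIX}.*'" \
            | sed "s|^|$host: |"
    done

As noted in the message above, this walks the whole PG directory tree
on every OSD, so it is slow and adds disk load; it is mostly useful as
a cross-check of the map-based method.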
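And a minimal sketch of the osdmap iteration Ilya suggests (the pool
name, pool id, prefix, and /tmp/osdmap path are placeholder values;
check 'osdmaptool --help' on your version for the exact
--test-map-object usage):

    #!/bin/sh
    # Fetch the osdmap once, then compute each object's PG and OSD set
    # locally -- one cluster round-trip for the map plus one for the
    # object listing, instead of spawning one 'ceph osd map' process
    # (and cluster connection) per object.
    PREFIX=2ae8944a          # from 'rbd info', after "rbd_data."
    POOL=rbd
    POOL_ID=2                # numeric id, e.g. from 'ceph osd lspools'
    ceph osd getmap -o /tmp/osdmap
    rados -p "$POOL" ls | grep "^rbd_data\.${PREFIX}\." |
    while read -r obj; do
        # Prints the object's PG and the OSDs it maps to.
        osdmaptool /tmp/osdmap --test-map-object "$obj" --pool "$POOL_ID"
    done

Since format 2 RBD object names are just the prefix plus a 16-hex-digit
object index, the 'rados ls' step could also be replaced by generating
candidate names from the image size, keeping in mind that objects for
never-written regions may not exist.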