Re: How to get RBD volume to PG mapping?

Try:

ceph osd map <pool> <rbdname>

Is that it?
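
For example (pool and image names here are made up, and the exact output
format varies between Ceph releases):

    $ ceph osd map rbd myimage
    osdmap e1234 pool 'rbd' (2) object 'myimage' -> pg 2.75e25bc2 (2.2) -> up [3,1,7] acting [3,1,7]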

Jan

> On 25 Sep 2015, at 15:07, Межов Игорь Александрович <megov@xxxxxxxxxx> wrote:
> 
> Hi!
> 
> Last week I wrote that one PG in our Firefly cluster is stuck in a degraded state with 2 replicas instead of 3
> and does not try to backfill or recover. We are trying to find out which RBD volumes are affected.
> 
> The working plan is inspired by Sébastien Han's snippet
> (http://www.sebastien-han.fr/blog/2013/11/19/ceph-rbd-objects-placement/)
> and consists of the following steps (a sketch follows the list):
> 
> 1. 'rbd -p <pool> ls' - list all RBD volumes in the pool
> 2. Get the RBD prefix corresponding to the volume
> 3. Get the list of objects that belong to our RBD volume
> 4. Issue 'ceph osd map <pool> <objectname>' for each object to get its PG and OSD placement
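> 
> A minimal sketch of steps 1-3, assuming a pool named 'rbd' and a
> format-1 image named 'myimage' (both names are hypothetical):
> 
>     # 1. list all RBD volumes in the pool
>     rbd -p rbd ls
>     # 2. extract the volume's block name prefix from 'rbd info'
>     prefix=$(rbd -p rbd info myimage | awk '/block_name_prefix/ {print $2}')
>     # 3. list the RADOS objects that carry the volume's data
>     rados -p rbd ls | grep "^${prefix}"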
> 
> After writing some scripts we faced a difficulty: a single 'ceph osd map ...' call takes about
> 0.5 seconds to return an object's placement, so iterating over all 15 million objects would take forever.
> 
> Is there any other way to find out which PGs the specified RBD volume is mapped to,
> or maybe a much faster way to do step 4 than calling 'ceph osd map'
> in a loop for every object?
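> 
> One untested idea (a rough sketch; the numeric pool id '2', the dump path
> and the 'objects.txt' file are assumptions): fetch the osdmap once with
> 'ceph osd getmap' and compute each placement offline with osdmaptool,
> avoiding a monitor round-trip per object:
> 
>     # grab the current osdmap once
>     ceph osd getmap -o /tmp/osdmap
>     # map every object locally; --pool takes the numeric pool id
>     while read -r obj; do
>         osdmaptool /tmp/osdmap --test-map-object "$obj" --pool 2
>     done < objects.txt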
> 
> 
> Thanks!
> 
> Megov Igor
> CIO, Yuterra
