Re: Ceph pg in inactive state

On Thu, 31 Oct 2019 at 04:22, soumya tr <soumya.324@xxxxxxxxx> wrote:
Thanks 潘东元 for the response.

The creation of a new pool works, and all the PGs corresponding to that pool are in the active+clean state.

When I initially set up the 3-node Ceph cluster using Juju charms (the replication count per object was set to 3), there were issues with the ceph-osd services, so I had to delete the units and re-add them (I did all of them together, which must have caused issues with rebalancing). I assume the PGs in the inactive state point to the 3 old OSDs that were deleted.
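
To confirm that, it might help to look at which OSDs the inactive PGs are mapped to. A rough sketch with the standard ceph CLI (the PG id 1.0 below is only a placeholder, replace it with one actually reported as stuck):

-------------------------------
# list PGs that are stuck in an inactive state
ceph pg dump_stuck inactive

# query one of the reported PGs and look at its "up" and "acting" OSD sets
ceph pg 1.0 query

# compare against the OSDs that currently exist in the CRUSH map
ceph osd tree
-------------------------------

If the acting sets still reference the old OSD ids (or are empty), that would match the theory that those PGs were never remapped after the units were deleted.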

I assume I will have to create all the pools again. But my concern is about the default pools. 

-------------------------------
pool 1 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 15 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 19 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 23 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 27 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 31 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.intent-log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 35 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 39 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 43 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 47 flags hashpspool stripe_width 0 application rgw
pool 10 'default.rgw.users.email' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 51 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.users.swift' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 55 flags hashpspool stripe_width 0 application rgw
pool 12 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 59 flags hashpspool stripe_width 0 application rgw
pool 13 'default.rgw.buckets.extra' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 last_change 63 flags hashpspool stripe_width 0 application rgw
pool 14 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4 pgp_num 4 last_change 67 flags hashpspool stripe_width 0 application rgw
-------------------------------

Can you please confirm whether recreating them using the rados CLI will break anything?


Those pools belong to radosgw, and if they are missing, they will be created on demand the next time radosgw starts up.
The "default" prefix is the name of the radosgw zone, which defaults to... "default". They are not needed by any other part of Ceph.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
