> If there is a (planned) documentation of manual rgw bootstrapping,
> it would be nice to have also the names of required pools listed there.

It will depend on several things; if you enable Swift users, for
example, I think they get a pool of their own. So I guess one would
need to look in the source for a full list of potential pools created
by rgw.

> # verify it works
> curl http://127.0.0.1:8088
> ceph osd pool ls
> # should print at least the following pools:
> # .rgw.root
> # default.rgw.log
> # default.rgw.control
> # default.rgw.meta
> # ... and maybe also these, after some buckets are created:
> # default.rgw.buckets.index
> # default.rgw.buckets.non-ec
> # default.rgw.buckets.data
> ====================================================================

My oldest cluster seems to have these ones (some may not be relevant
anymore); columns look like the pool section of `ceph df`: name, id,
used, %used, max avail, objects:

.rgw.root                    5    10.0KiB      0    10.8TiB       23
default.rgw.control          6         0B      0    10.8TiB        8
default.rgw.data.root        7    64.7KiB      0    10.8TiB      186
default.rgw.gc               8         0B      0    10.8TiB       32
default.rgw.log              9     685GiB   5.85    10.8TiB   159903
default.rgw.users.uid       10    5.32KiB      0    10.8TiB       28
default.rgw.usage           11         0B      0    10.8TiB       13
default.rgw.users.keys      12       459B      0    10.8TiB       14
default.rgw.meta            13     330KiB      0    10.8TiB      923
default.rgw.buckets.index   14         0B      0    10.8TiB      184
default.rgw.buckets.non-ec  15         0B      0    10.8TiB      828
default.rgw.buckets.data    16    6.39TiB  13.09    42.5TiB  2777682
default.rgw.users.email     23       115B      0    10.8TiB        4

so .log and .usage probably need some traffic before they appear.

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
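If you want to script the "should print at least the following pools" check from the quoted snippet, here is a minimal sketch. The pool names come from the thread above; the helper name and the grouping into "required" vs. "appears after traffic" are my own reading of the quoted comments, not anything from the rgw source.

```python
# Sketch: verify the rgw default-zone pools against the output of
# `ceph osd pool ls` (one pool name per line, or whitespace-separated).

# Pools the quoted snippet says should exist right after bootstrap.
REQUIRED = {
    ".rgw.root",
    "default.rgw.log",
    "default.rgw.control",
    "default.rgw.meta",
}

# Pools the snippet says may appear only after buckets are created.
LAZY = {
    "default.rgw.buckets.index",
    "default.rgw.buckets.non-ec",
    "default.rgw.buckets.data",
}

def missing_rgw_pools(pool_ls_output: str) -> set:
    """Return the required pools absent from `ceph osd pool ls` output."""
    present = set(pool_ls_output.split())
    return REQUIRED - present

if __name__ == "__main__":
    # Example using a pool list like the one from my older cluster:
    pools = """\
.rgw.root
default.rgw.control
default.rgw.log
default.rgw.meta
default.rgw.buckets.data
"""
    print(missing_rgw_pools(pools))
```

In practice you would feed it the real command output, e.g. `missing_rgw_pools(subprocess.run(["ceph", "osd", "pool", "ls"], capture_output=True, text=True).stdout)`, and treat a non-empty result as a failed bootstrap check.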