Re: Default Pools

RGW tools will automatically deploy these pools; for example, running
radosgw-admin will create them if they don't exist.
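
A quick way to see this in action, as a minimal sketch on a test cluster
(assuming the pools have already been deleted):

$ ceph osd pool ls          # the rgw pools are gone
$ radosgw-admin user list   # any radosgw-admin command initializes the default zone
$ ceph osd pool ls          # .rgw.root and default.rgw.{control,meta,log} are back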


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sat, Jan 18, 2020 at 2:48 AM Daniele Riccucci <devster@xxxxxxxxxx> wrote:
>
> Hello,
> I'm still a bit confused by the .rgw.root and the
> default.rgw.{control,meta,log} pools.
> I recently removed the RGW daemon I had running along with the
> aforementioned pools; however, after a rebalance I suddenly found them
> again in the output of:
>
> $ ceph osd pool ls
> cephfs_data
> cephfs_metadata
> .rgw.root
> default.rgw.control
> default.rgw.meta
> default.rgw.log
>
> Each has 8 PGs but zero usage.
> I was unable to find logs or any indication of which daemon or action
> recreated them, or whether it is safe to remove them again; where should
> I look?
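>
> A minimal way to double-check the usage (a sketch, assuming the pool
> names listed above):
>
> $ rados df
> $ for p in .rgw.root default.rgw.control default.rgw.meta default.rgw.log; do
>     rados -p $p ls | wc -l
>   done
>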
> I'm on Nautilus 14.2.5, container deployment.
> Thank you.
>
> Regards,
> Daniele
>
> On 23/04/19 22:14, David Turner wrote:
> > You should be able to see all pools in use in an RGW zone from the
> > radosgw-admin command. This [1] is probably overkill for most, but I
> > deal with multi-realm clusters so I generally think like this when
> > dealing with RGW.  Running this as is will create a file in your current
> > directory for each zone in your deployment (likely to be just one
> > file).  My rough guess for what you would find in that file based on
> > your pool names would be this [2].
> >
> > If you identify any pools not listed in the zone get output, then you
> > can rename [3] the pool to see if it is being created and/or used by RGW
> > currently.  The process here would be to stop all RGW daemons, rename
> > the pools, start an RGW daemon, stop it again, and see which pools were
> > recreated.  Clean up the pools that were freshly made and rename the
> > original pools back into place before starting your RGW daemons again.
> > Please note that .rgw.root is a required pool in every RGW deployment
> > and will not be listed in the zones themselves.
> >
> >
> > [1]
> > for realm in $(radosgw-admin realm list --format=json | jq '.realms[]' -r); do
> >   for zonegroup in $(radosgw-admin --rgw-realm=$realm zonegroup list --format=json | jq '.zonegroups[]' -r); do
> >     for zone in $(radosgw-admin --rgw-realm=$realm --rgw-zonegroup=$zonegroup zone list --format=json | jq '.zones[]' -r); do
> >       echo $realm.$zonegroup.$zone.json
> >       radosgw-admin --rgw-realm=$realm --rgw-zonegroup=$zonegroup --rgw-zone=$zone zone get > $realm.$zonegroup.$zone.json
> >     done
> >   done
> > done
> >
> > [2] default.default.default.json
> > {
> >      "id": "{{ UUID }}",
> >      "name": "default",
> >      "domain_root": "default.rgw.meta",
> >      "control_pool": "default.rgw.control",
> >      "gc_pool": ".rgw.gc",
> >      "log_pool": "default.rgw.log",
> >      "user_email_pool": ".users.email",
> >      "user_uid_pool": ".users.uid",
> >      "system_key": {
> >      },
> >      "placement_pools": [
> >          {
> >              "key": "default-placement",
> >              "val": {
> >                  "index_pool": "default.rgw.buckets.index",
> >                  "data_pool": "default.rgw.buckets.data",
> >                  "data_extra_pool": "default.rgw.buckets.non-ec",
> >                  "index_type": 0,
> >                  "compression": ""
> >              }
> >          }
> >      ],
> >      "metadata_heap": "",
> >      "tier_config": [],
> >      "realm_id": "{{ UUID }}"
> > }
> >
> > [3] ceph osd pool rename <srcpool> <destpool>
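> >
> > Two rough sketches to go with the above; the file name, pool name and
> > systemd unit names below are only examples, adjust them to your deployment.
> >
> > Comparing the zone's pools against what actually exists in the cluster:
> >
> > jq -r '[.domain_root, .control_pool, .gc_pool, .log_pool,
> >         .user_email_pool, .user_uid_pool,
> >         (.placement_pools[].val | .index_pool, .data_pool, .data_extra_pool)]
> >        | .[]' default.default.default.json | sort > zone_pools.txt
> > ceph osd pool ls | sort > cluster_pools.txt
> > comm -13 zone_pools.txt cluster_pools.txt   # pools in the cluster but not in the zone
> >
> > And the rename test for a leftover pool, assuming a systemd-based
> > deployment (unit names vary):
> >
> > systemctl stop ceph-radosgw.target            # on every RGW host
> > ceph osd pool rename .rgw.gc .rgw.gc.parked
> > systemctl start ceph-radosgw@<instance>       # start one daemon, then stop it again
> > ceph osd pool ls | grep rgw                   # a fresh .rgw.gc means it is still referenced
> > # otherwise delete the freshly created empty pool and rename .rgw.gc.parked back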
> >
> > On Thu, Apr 18, 2019 at 10:46 AM Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:
> >
> >     Yeah, that was a cluster created during Firefly...
> >
> >     Wish there was a good article on the naming and use of these, or
> >     perhaps a way I could make sure they are not used before deleting
> >     them.  I know RGW will recreate anything it uses, but I don’t want
> >     to lose data because I wanted a clean system.
> >
> >     -Brent
> >
> >     -----Original Message-----
> >     From: Gregory Farnum <gfarnum@xxxxxxxxxx>
> >     Sent: Monday, April 15, 2019 5:37 PM
> >     To: Brent Kennedy <bkennedy@xxxxxxxxxx>
> >     Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
> >     Subject: Re:  Default Pools
> >
> >     On Mon, Apr 15, 2019 at 1:52 PM Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:
> >      >
> >      > I was looking around the web for the reason for some of the
> >      > default pools in Ceph and I can't find anything concrete.  Here is
> >      > our list; some show no use at all.  Can any of these be deleted (or
> >      > is there an article my googlefu failed to find that covers the
> >      > default pools)?
> >      >
> >      > We only use buckets, so I took out .rgw.buckets, .users and
> >      > .rgw.buckets.index…
> >      >
> >      > Name
> >      > .log
> >      > .rgw.root
> >      > .rgw.gc
> >      > .rgw.control
> >      > .rgw
> >      > .users.uid
> >      > .users.email
> >      > .rgw.buckets.extra
> >      > default.rgw.control
> >      > default.rgw.meta
> >      > default.rgw.log
> >      > default.rgw.buckets.non-ec
> >
> >     All of these are created by RGW when you run it, not by the core
> >     Ceph system. I think they're all used (although they may report
> >     sizes of 0, as they mostly make use of omap).
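> >
> >     A quick way to see that (a minimal sketch; pick any object name that
> >     the first command prints):
> >
> >     rados -p default.rgw.log ls
> >     rados -p default.rgw.log listomapkeys <object-from-the-listing>
> >
> >     Objects that carry state will show omap keys there even though the
> >     pool's data usage stays near zero, which is why these pools look
> >     empty in the usage stats.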
> >
> >      > metadata
> >
> >     Except this one, which used to be created by default for CephFS metadata,
> >     but that hasn't been true for many releases. So I guess you're
> >     looking at an old cluster? (In which case it's *possible* some of
> >     those RGW pools are also unused now but were needed in the past; I
> >     haven't kept good track of them.) -Greg
> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



