Re: RadosGW manual deployment

On Mon, Jan 29, 2024 at 08:11, Jan Kasprzak <kas@xxxxxxxxxx> wrote:
>
>         Hi all,
>
> how can radosgw be deployed manually? For Ceph cluster deployment,
> there is still (fortunately!) a documented method which works flawlessly
> even in Reef:
>
> https://docs.ceph.com/en/latest/install/manual-deployment/#monitor-bootstrapping
>
> But as for radosgw, there is no such description, unless I am missing
> something. Even going back to the oldest docs still available at
> docs.ceph.com (mimic), the radosgw installation is described
> only using ceph-deploy:
>
> https://docs.ceph.com/en/mimic/install/install-ceph-gateway/
>
> Is it possible to install a new radosgw instance manually?
> If so, how can I do it?

We are doing it, and I ran into the same docs gap recently, so Zac
pushed me to provide at least a skeleton for such a page. I recently
did a manual install of a quincy cluster with RGWs, so I will condense
what I did into something that can be used for the docs later on
(I'll leave it to Zac to format and merge).

Really short version for you:

Install the radosgw debs/rpms on the rgw box(es).

On one of the mons, or a box with admin ceph auth, run
   ceph auth get-or-create client.short-hostname-of-rgw mon 'allow rw' osd 'allow rwx'

On each of the rgw box(es):
  Create a ceph-user-owned directory, for instance like this:
    install -d -o ceph -g ceph /var/lib/ceph/radosgw/ceph-$(hostname -s)
  Inside this dir, put the key (or the first two lines of it) you got
  from the "ceph auth get-or-create" above:
    vi /var/lib/ceph/radosgw/ceph-$(hostname -s)/keyring
  Figure out what URL rgw should answer on and set that in the config,
  but that part is the same for manual and cephadm/orchestrated installs.
  Now you should be able to start the service with
    systemctl start ceph-radosgw@$(hostname -s).service
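The steps above, condensed into a rough sketch. This is an assumption-laden example, not a definitive recipe: the client name and directory layout follow the convention used above, pulling the keyring with "ceph auth get" only works if the box has admin auth (otherwise paste it by hand as described), and the beast port 8080 is just a placeholder.

```shell
#!/bin/sh
# Sketch of the manual RGW bring-up described above. Run as root on the
# rgw box. Assumptions: keyring already created on a mon with
# "ceph auth get-or-create"; port 8080 is an example value.
set -eu

ID="$(hostname -s)"                       # instance id used in the unit name
DIR="/var/lib/ceph/radosgw/ceph-${ID}"    # data dir the systemd unit expects

# 1. Data directory owned by the ceph user
install -d -o ceph -g ceph "${DIR}"

# 2. Keyring for this instance (works if this box has admin auth;
#    otherwise create the file by hand and paste the key in)
ceph auth get "client.${ID}" -o "${DIR}/keyring"
chown ceph:ceph "${DIR}/keyring"

# 3. Minimal config fragment (section name and port are assumptions)
cat >> /etc/ceph/ceph.conf <<EOF
[client.${ID}]
rgw_frontends = beast port=8080
EOF

# 4. Start now and enable at boot
systemctl enable --now "ceph-radosgw@${ID}.service"
```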

The last part may or may not act up a bit, for one of two reasons. The
first is that systemd may have tried starting the service many times
right after the deb/rpm got installed, long before you added a usable
key for it, so a slight boxing match with systemd might be in order:
stop the service, reset-failed on the service, and then start it again.
(Also check that it is enabled, so it starts on the next boot too.)
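That boxing match is roughly the following; the unit name assumes the same short-hostname instance id used earlier:

```shell
# Reset a radosgw unit that burned through its start attempts before
# the keyring existed, then start it and enable it at boot.
ID="$(hostname -s)"
systemctl stop "ceph-radosgw@${ID}.service"
systemctl reset-failed "ceph-radosgw@${ID}.service"
systemctl enable --now "ceph-radosgw@${ID}.service"
```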

Secondly, I also tend to run into this issue* where rgw (and other
parts of ceph!) can't create pools unless they specify PG numbers,
which rgw no longer does. If you get this error, you end up having to
create all the pools manually yourself, either from a mon/admin host
or from the rgw box (though doing it from the rgw box requires a lot
more specifying of usernames and keyfile locations than on the
default admin-key hosts).
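If you do have to create the pools yourself, a sketch from an admin host could look like this. The pool names are the usual defaults for a fresh default zone (your zone may differ), and the PG count of 8 is only an example:

```shell
# Pre-create the pools rgw would otherwise try (and fail) to autocreate.
# Pool names assume a default zone; adjust names and PG counts to taste.
for pool in .rgw.root default.rgw.control default.rgw.meta \
            default.rgw.log default.rgw.buckets.index \
            default.rgw.buckets.non-ec default.rgw.buckets.data; do
    ceph osd pool create "$pool" 8
    ceph osd pool application enable "$pool" rgw
done
```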

*) https://tracker.ceph.com/issues/62770
   This ticket has a VERY SIMPLE method of testing whether a ceph
version has this problem: just run "ceph osd pool create some-name"
and see whether it fails unless you append a number.

   The help is quite clear that all other parameters are meant to be optional:

osd pool create <pool> [<pg_num:int>] [<pgp_num:int>]
[<pool_type:replicated|erasure>] [<erasure_code_profile>] [<rule>]
[<expected_num_objects:int>] [<size:int>] [<pg_num_min:int>]
[<pg_num_max:int>] [<autoscale_mode:on|off|warn>] [--bulk]
[<target_size_bytes:int>] [<target_size_ratio:float>] :  create pool



--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



