Re: Ceph configuration for rgw

Just adding this:

ses7-host1:~ # ceph config set client.rgw.ebl-rgw rgw_frontends "beast port=8080"

This change is visible in the config get output:

client.rgw.ebl-rgw        basic     rgw_frontends    beast port=8080
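
A single option can also be read back directly to confirm the new value (just a quick check, same daemon name as above):

ses7-host1:~ # ceph config get client.rgw.ebl-rgw rgw_frontends
beast port=8080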


Quoting Eugen Block <eblock@xxxxxx>:

Hi,

the docs [1] show how to specify the rgw configuration via a yaml file (similar to OSDs); I've added a rough sketch of such a spec after the snip below. If you applied it with 'ceph orch' you should see your changes in the 'ceph config dump' output, or like this:

---snip---
ses7-host1:~ # ceph orch ls | grep rgw
rgw.ebl-rgw    ?:80             2/2  33s ago    3M   ses7-host3;ses7-host4

ses7-host1:~ # ceph config get client.rgw.ebl-rgw
WHO                 MASK  LEVEL     OPTION           VALUE                                            RO
global                    basic     container_image  registry.fqdn:5000/ses/7.1/ceph/ceph@sha256:...  *
client.rgw.ebl-rgw        basic     rgw_frontends    beast port=80                                    *
client.rgw.ebl-rgw        advanced  rgw_realm        ebl-rgw                                          *
client.rgw.ebl-rgw        advanced  rgw_zone         ebl-zone
---snip---
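
For reference, a yaml spec for this deployment would look roughly like the following. This is an untested sketch; the hosts, realm, zone and port just mirror the output above, so adjust them to your environment (see [1] for the full set of options):

---snip---
service_type: rgw
service_id: ebl-rgw
placement:
  hosts:
    - ses7-host3
    - ses7-host4
spec:
  # values below only mirror my example, adapt to your setup
  rgw_realm: ebl-rgw
  rgw_zone: ebl-zone
  rgw_frontend_port: 80
---snip---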

As you can see from the 'config get' output, the RGWs are clients, so you need to take that into account when you query the current configuration. What I find strange, though, is that 'ceph config dump' apparently only shows the config that was applied initially; it doesn't reflect the changes after running 'ceph orch apply -i rgw.yaml', although the changes are applied to the containers after restarting them. I don't know if this is intended, but it sounds like a bug to me (I haven't checked).
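
This is roughly how I checked it (the grep is just for convenience):

---snip---
ses7-host1:~ # ceph orch apply -i rgw.yaml
ses7-host1:~ # ceph config dump | grep rgw
---snip---

The changed values from the yaml don't show up in the dump, even though the containers do pick them up after a restart.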

1) When starting rgw with cephadm ("orch apply -i <rgw.yaml>"), I have to start the daemon, then update the configuration file and restart. I don't find a way to achieve this in a single step.

I haven't played around with it too much yet, but you seem to be right: changing the config isn't applied immediately, only after a service restart ('ceph orch restart rgw.ebl-rgw'). Maybe that's on purpose, so you can change your config now and apply it later when a service interruption is not critical.
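
So the two-step sequence I would use looks like this (service name as in my example above):

---snip---
ses7-host1:~ # ceph config set client.rgw.ebl-rgw rgw_frontends "beast port=8080"
ses7-host1:~ # ceph orch restart rgw.ebl-rgw
ses7-host1:~ # ceph orch ps --daemon-type rgw
---snip---

The last command is just to confirm that the daemons were actually restarted.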


[1] https://docs.ceph.com/en/pacific/cephadm/services/rgw/

Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:

Hi,

The cluster is Pacific 16.2.10 with containerized services, managed by cephadm.

"config show" shows running configuration. Who is supported?
mon, mgr and osd all work, but rgw doesn't. Is this expected?
I tried with client.<rgw daemon name from "ceph orch ps"> and without "client",
neither works.

When issue "config show", who connects the daemon and retrieves running config?
Is it mgr or mon?

A config update made with "config set" is supposed to be propagated to the service. Which services support this? I know mon, mgr and osd work, but rgw doesn't. Is this expected? I assume this is similar to "config show": it requires that the mgr/mon can connect to the service daemon?

To get the running config from rgw, I always do
"docker exec <rgw container> ceph daemon <socket> config show".
Is that the only way? I assume it's the same for getting the running config from any service,
and it's just a matter of whether the mgr/mon supports it or not?
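
Spelled out, what I mean is something like this (the container name and socket path are placeholders, taken from "ceph orch ps" and from /var/run/ceph inside the container):

---snip---
# list the admin sockets available inside the rgw container
docker exec <rgw container> ls /var/run/ceph
# dump the running configuration through one of them
docker exec <rgw container> ceph daemon /var/run/ceph/<rgw asok> config show
---snip---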

I've been configuring rgw through a configuration file. Is that the recommended way?
I also tried the configuration db, i.e. "config set", but it doesn't seem to work.
Is this expected?

I see two drawbacks with a configuration file for rgw:
1) When starting rgw with cephadm ("orch apply -i <rgw.yaml>"), I have to start the daemon, then update the configuration file and restart. I don't find a way to achieve this in a single step.
2) When I "orch daemon redeploy" or upgrade rgw, the configuration file is re-generated and I have to update it again.
Is this how it's all supposed to work, or am I missing something?


Thanks!
Tony


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


