Re: Removing Rados Gateway in ceph cluster

Hello Eugen,

Below is the version of Ceph I am running:

root@ceph-mon1:~# ceph -v
ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
root@ceph-mon1:~# ceph orch ls rgw --export --format yaml
Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
root@ceph-mon1:~#


I also tried 'ceph orch set backend', but nothing changed.
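
If I understand correctly, ceph-ansible does not configure an
orchestrator backend at all, so the 'ceph orch' commands are expected
to fail here. As far as I can tell, a backend would only exist if the
cluster were managed by cephadm, roughly like this (just a sketch, I
have not applied this):

ceph mgr module enable cephadm
ceph orch set backend cephadm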

Best Regards

On Mon, Feb 6, 2023 at 2:37 PM Eugen Block <eblock@xxxxxx> wrote:

> Please send responses to the mailing list.
>
> If the orchestrator is available, please also share this output (mask
> sensitive data):
>
> ceph orch ls rgw --export --format yaml
>
> Which ceph version is this? The command 'ceph dashboard
> get-rgw-api-host' was removed between Octopus and Pacific, which is
> why I asked for your ceph version.
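>
> The access/secret key commands should still exist in Pacific, though;
> a rough sketch (the file names are placeholders for files containing
> the respective keys):
>
> ceph dashboard set-rgw-api-access-key -i <file-with-access-key>
> ceph dashboard set-rgw-api-secret-key -i <file-with-secret-key>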
>
> I also forgot that mgr/dashboard/RGW_API_HOST was only used until
> Octopus; in Pacific it is not applied anymore. I'll need to check how
> it is determined now.
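>
> If I remember correctly, since Pacific the dashboard resolves the RGW
> endpoint from the cluster's service map rather than from a config
> option. One way to check what the mgr can see (just a sketch):
>
> ceph service dump | grep -i rgw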
>
> Quoting Michel Niyoyita <micou12@xxxxxxxxx>:
>
> > Hello Eugen,
> >
> > Thanks for your reply.
> >
> > I tried the command you shared, but it produces no output.
> >
> > root@ceph-mon1:~# ceph config dump | grep mgr/dashboard/RGW_API_HOST
> > root@ceph-mon1:~# ceph config dump | grep mgr/dashboard/
> >   mgr  advanced  mgr/dashboard/ALERTMANAGER_API_HOST   http://10.10.110.196:9093   *
> >   mgr  advanced  mgr/dashboard/GRAFANA_API_PASSWORD    XXXX                        *
> >   mgr  advanced  mgr/dashboard/GRAFANA_API_SSL_VERIFY  false                       *
> >   mgr  advanced  mgr/dashboard/GRAFANA_API_URL         https://10.10.110.198:3000  *
> >   mgr  advanced  mgr/dashboard/GRAFANA_API_USERNAME    admin                       *
> >   mgr  advanced  mgr/dashboard/PROMETHEUS_API_HOST     http://10.10.110.196:9092   *
> >   mgr  advanced  mgr/dashboard/RGW_API_ACCESS_KEY      XXXX                        *
> >   mgr  advanced  mgr/dashboard/RGW_API_SECRET_KEY      XXXX                        *
> >   mgr  advanced  mgr/dashboard/RGW_API_SSL_VERIFY      false                       *
> >   mgr  advanced  mgr/dashboard/ceph-mon1/server_addr   10.10.110.196               *
> >   mgr  advanced  mgr/dashboard/ceph-mon2/server_addr   10.10.110.197               *
> >   mgr  advanced  mgr/dashboard/ceph-mon3/server_addr   10.10.110.198               *
> >   mgr  advanced  mgr/dashboard/motd                    {"message": "WELCOME TO AOS ZONE 3 STORAGE CLUSTER", "md5": "87149a6798ce42a7e990bc8584a232cd", "severity": "info", "expires": ""}  *
> >   mgr  advanced  mgr/dashboard/server_port             8443                        *
> >   mgr  advanced  mgr/dashboard/ssl                     true                        *
> >   mgr  advanced  mgr/dashboard/ssl_server_port         8443                        *
> >
> > For the second one, it seems the command is not valid:
> >
> > root@ceph-mon1:~# ceph dashboard get-rgw-api-host
> > no valid command found; 10 closest matches:
> > dashboard set-jwt-token-ttl <seconds:int>
> > dashboard get-jwt-token-ttl
> > dashboard create-self-signed-cert
> > dashboard grafana dashboards update
> > dashboard get-account-lockout-attempts
> > dashboard set-account-lockout-attempts <value>
> > dashboard reset-account-lockout-attempts
> > dashboard get-alertmanager-api-host
> > dashboard set-alertmanager-api-host <value>
> > dashboard reset-alertmanager-api-host
> > Error EINVAL: invalid command
> > root@ceph-mon1:~#
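> >
> > The rgw access/secret key variants do still seem to exist, though,
> > for example:
> >
> > ceph dashboard get-rgw-api-access-key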
> >
> >
> > Kindly check the output.
> >
> > Best Regards
> >
> > Michel
> >
> > On Mon, Feb 6, 2023 at 2:06 PM Eugen Block <eblock@xxxxxx> wrote:
> >
> >> Hi,
> >>
> >> can you paste the output of:
> >>
> >> ceph config dump | grep mgr/dashboard/RGW_API_HOST
> >>
> >> Does it match your desired setup? Depending on the ceph version (and
> >> how ceph-ansible deploys the services) you could also check:
> >>
> >> ceph dashboard get-rgw-api-host
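> >>
> >> and, if the value does not match your setup, adjust it, e.g. (the
> >> host is a placeholder, and this only applies if your version still
> >> has the command):
> >>
> >> ceph dashboard set-rgw-api-host <rgw-host>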
> >>
> >> I'm not familiar with ceph-ansible, but if you shared your rgw
> >> definitions and the respective ceph output, we might be able to
> >> assist in resolving this.
> >>
> >> Regards,
> >> Eugen
> >>
> >> Quoting Michel Niyoyita <micou12@xxxxxxxxx>:
> >>
> >> > Hello team,
> >> >
> >> > I have a Ceph cluster deployed using ceph-ansible, running on
> >> > Ubuntu 20.04, with 6 hosts: 3 hosts for OSDs and 3 hosts used as
> >> > monitors and managers. I have deployed RGW on all of those hosts
> >> > and an RGW load balancer on top of them. For testing purposes I
> >> > switched off one OSD to check whether the rest could keep working.
> >> > The test went well, as expected; unfortunately, after bringing the
> >> > OSD back, the RGW failed to connect through the dashboard. Below is
> >> > the message:
> >> > The Object Gateway Service is not configured. Error connecting to
> >> > Object Gateway. Please consult the documentation
> >> > <https://docs.ceph.com/en/latest/mgr/dashboard/#enabling-the-object-gateway-management-frontend>
> >> > on how to configure and enable the Object Gateway management
> >> > functionality.
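> >> >
> >> > If I understand the linked documentation, the dashboard needs the
> >> > credentials of an RGW system user, which would be created and read
> >> > roughly like this (uid and display name are placeholders):
> >> >
> >> > radosgw-admin user create --uid=<user-id> --display-name=<name> --system
> >> > radosgw-admin user info --uid=<user-id>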
> >> >
> >> > I would like to ask how to solve that issue, or how I can proceed
> >> > to remove RGW completely and redeploy it afterwards.
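> >> >
> >> > In case removing it manually is the way to go, I assume it would be
> >> > something like the following on each RGW host, followed by re-running
> >> > the ceph-ansible site playbook, but I am not sure this is the clean
> >> > way (the systemd unit name is a guess based on how ceph-ansible
> >> > names the instances):
> >> >
> >> > systemctl stop ceph-radosgw@rgw.$(hostname -s).rgw0
> >> > systemctl disable ceph-radosgw@rgw.$(hostname -s).rgw0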
> >> >
> >> >
> >> > root@ceph-mon1:~# ceph -s
> >> >   cluster:
> >> >     id:     cb0caedc-eb5b-42d1-a34f-96facfda8c27
> >> >     health: HEALTH_OK
> >> >
> >> >   services:
> >> >     mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 72m)
> >> >     mgr: ceph-mon2(active, since 71m), standbys: ceph-mon3, ceph-mon1
> >> >     osd: 48 osds: 48 up (since 79m), 48 in (since 3d)
> >> >     rgw: 6 daemons active (6 hosts, 1 zones)
> >> >
> >> >   data:
> >> >     pools:   9 pools, 257 pgs
> >> >     objects: 59.49k objects, 314 GiB
> >> >     usage:   85 TiB used, 348 TiB / 433 TiB avail
> >> >     pgs:     257 active+clean
> >> >
> >> >   io:
> >> >     client:   2.0 KiB/s wr, 0 op/s rd, 0 op/s wr
> >> >
> >> > Kindly help
> >> >
> >> > Best Regards
> >> >
> >> > Michel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


