Re: Removing Rados Gateway in ceph cluster


 



Hi,

can you paste the output of:

ceph config dump | grep mgr/dashboard/RGW_API_HOST

Does it match your desired setup? Depending on the ceph version (and how ceph-ansible deploys the services) you could also check:

ceph dashboard get-rgw-api-host

I'm not familiar with ceph-ansible, but if you share your rgw definitions and the respective ceph output, we might be able to assist in resolving this.
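In case the stored value turns out to be stale (e.g. still pointing at the endpoint that was down during the test), a rough sketch of how it could be checked and corrected — assuming a Nautilus/Octopus-era release where these dashboard subcommands exist (newer releases replace some of them with `ceph dashboard set-rgw-credentials`); the hostname and port below are placeholders, not values from this cluster:

```shell
# 1. Inspect what the mgr currently has stored for the dashboard's RGW endpoint:
ceph config dump | grep mgr/dashboard/RGW_API_HOST
ceph dashboard get-rgw-api-host

# 2. If it points at the wrong host/port, set it explicitly (placeholders):
ceph dashboard set-rgw-api-host rgw1.example.com
ceph dashboard set-rgw-api-port 8080

# 3. Restart the dashboard module so the mgr picks up the change:
ceph mgr module disable dashboard
ceph mgr module enable dashboard
```

These commands need to run against a live cluster with the dashboard module enabled; check `ceph dashboard -h` on your release before relying on the exact subcommand names.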

Regards,
Eugen

Quoting Michel Niyoyita <micou12@xxxxxxxxx>:

Hello team,

I have a Ceph cluster deployed using ceph-ansible, running on Ubuntu 20.04, with 6 hosts: 3 OSD hosts and 3 hosts serving as monitors and managers. I have deployed RGW on all of those hosts and an RGW load balancer on top of them. For testing purposes, I switched off one OSD to check whether the rest could keep working. The test went well as expected, but unfortunately, after bringing the OSD back, the RGW failed to connect through the dashboard. Below is the message:
The Object Gateway Service is not configured
Error connecting to Object Gateway
Please consult the documentation
<https://docs.ceph.com/en/latest/mgr/dashboard/#enabling-the-object-gateway-management-frontend>
on how to configure and enable the Object Gateway management functionality.

I would like to ask how to solve this issue, or alternatively how I can completely remove RGW and redeploy it afterwards.


root@ceph-mon1:~# ceph -s
  cluster:
    id:     cb0caedc-eb5b-42d1-a34f-96facfda8c27
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 72m)
    mgr: ceph-mon2(active, since 71m), standbys: ceph-mon3, ceph-mon1
    osd: 48 osds: 48 up (since 79m), 48 in (since 3d)
    rgw: 6 daemons active (6 hosts, 1 zones)

  data:
    pools:   9 pools, 257 pgs
    objects: 59.49k objects, 314 GiB
    usage:   85 TiB used, 348 TiB / 433 TiB avail
    pgs:     257 active+clean

  io:
    client:   2.0 KiB/s wr, 0 op/s rd, 0 op/s wr

Kindly help

Best Regards

Michel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




