Re: ceph-iscsi-cli: cannot remove duplicated gateways.


 



Hi, Xiubo.
I'm not clear on which commands dump and restore an object. Could you
give me an example?
`rados ls -p rbd` lists a huge number of UUIDs.
https://docs.ceph.com/en/latest/man/8/rados/
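[Editor's note: the dump/edit/restore cycle Xiubo describes further down can be sketched roughly as follows. The rados commands in the comments and the JSON key names ("gateways", "portals") are assumptions based on this thread and the general ceph-iscsi config layout; keep a backup of the dumped object and verify its structure against your own dump before putting anything back.]

```python
# Hedged sketch of the "dump, edit, restore" cycle. On the gateway node the
# dump/restore steps themselves are plain rados calls (not run here):
#
#   rados -p rbd ls | grep gateway        # locate the object among the uuids
#   rados -p rbd get gateway.conf /tmp/gateway.conf
#   cp /tmp/gateway.conf /tmp/gateway.conf.bak   # keep a backup first
#   ... edit /tmp/gateway.conf (e.g. with the helper below) ...
#   rados -p rbd put gateway.conf /tmp/gateway.conf
#
# gateway.conf is plain JSON, so the editing step can be scripted. The key
# names below are assumptions -- check them against your own dump.
import json

STALE = "ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain"

def strip_gateway(conf: dict, name: str) -> dict:
    """Drop every reference to a stale gateway name from the config dict."""
    conf.get("gateways", {}).pop(name, None)          # top-level gateway entry
    for tgt in conf.get("targets", {}).values():      # per-target portal entries
        tgt.get("portals", {}).pop(name, None)
    return conf
```

After putting the edited object back, re-run `gwcli ls`; if the stale entry is still shown, restarting the ceph-iscsi services should clear the cached state.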

On Mon, Feb 20, 2023 at 9:30 AM Xiubo Li <xiubli@xxxxxxxxxx> wrote:

> Hi
>
> So you are using the default 'rbd' pool to store the 'gateway.conf' config
> object.
>
> And the 'gateway.conf' object is corrupted. If you cannot remove the entry
> with the force option, you should dump the 'gateway.conf' object from the
> 'rbd' pool, manually delete everything related to
> 'ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain', and then store the
> object back. Then check whether the entry disappears; if not, restart the
> ceph-iscsi services.
>
> If you are not sure what should be deleted, you can send the dump to me and
> I will help revise it for you.
>
> Thanks
>
> - Xiubo
> On 20/02/2023 09:20, luckydog xf wrote:
>
> Hi,
>>
>
>
>> [root@ceph-iscsi-gw-1 ~]# gwcli ls
>> o- / ......................................................................................................................... [...]
>>   o- cluster ......................................................................................................... [Clusters: 1]
>>   | o- ceph ............................................................................................................ [HEALTH_OK]
>>   |   o- pools ......................................................................................................... [Pools: 22]
>>   |   | o- .rgw.root .............................................................. [(x3), Commit: 0.00Y/115222232M (0%), Used: 52K]
>>   |   | o- cinder-ceph ................................................ [(x3), Commit: 0.00Y/115222232M (0%), Used: 28703946995700b]
>>   |   | o- default.rgw.buckets.data ............................................. [(x3), Commit: 0.00Y/115222232M (0%), Used: 4587b]
>>   |   | o- default.rgw.buckets.extra ............................................ [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.buckets.index ............................................ [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.control .................................................. [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.data.root ................................................ [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.gc ....................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.intent-log ............................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.log ....................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 384K]
>>   |   | o- default.rgw.meta ....................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 24K]
>>   |   | o- default.rgw.usage .................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.users.email .............................................. [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.users.keys ............................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.users.swift .............................................. [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- default.rgw.users.uid ................................................ [(x3), Commit: 0.00Y/115222232M (0%), Used: 0.00Y]
>>   |   | o- device_health_metrics ........................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 305303172b]
>>   |   | o- glance ....................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 525074107902b]
>>   |   | o- gnocchi ......................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 989498938b]
>>   |   | o- nova ................................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 12K]
>>   |   | o- rbd ........................................................ [(x3), Commit: 20.0T/115222232M (18%), Used: 3274867542037b]
>>   |   | o- scbench .......................................................... [(x3), Commit: 0.00Y/115222232M (0%), Used: 88829964K]
>>   |   o- topology .............................................................................................. [OSDs: 108,MONs: 3]
>>   o- disks ....................................................................................................... [20.0T, Disks: 1]
>>   | o- rbd ........................................................................................................... [rbd (20.0T)]
>>   |   o- ceph-iscsi ............................................................................... [rbd/ceph-iscsi (Online, 20.0T)]
>>   o- iscsi-targets ............................................................................... [DiscoveryAuth: None, Targets: 1]
>>     o- iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw ......................................................... [Auth: None, Gateways: 3]
>>       o- disks .......................................................................................................... [Disks: 1]
>>       | o- rbd/ceph-iscsi ............................................................. [Owner: ceph-iscsi-gw-1.ipa.pthl.hk, Lun: 0]
>>       o- gateways ............................................................................................ [Up: 2/3, Portals: 3]
>>       | o- ceph-iscsi-gw-1.ipa.pthl.hk ....................................................................... [172.16.202.251 (UP)]
>>       | o- ceph-iscsi-gw-1.ipa.pthl.hklocalhost.localdomain ............................................. [172.16.202.251 (UNKNOWN)]
>>       | o- ceph-iscsi-gw-2.ipa.pthl.hk ....................................................................... [172.16.202.252 (UP)]
>>       o- host-groups .................................................................................................. [Groups : 0]
>>       o- hosts ...................................................................................... [Auth: ACL_DISABLED, Hosts: 0]
>>
>>
>
>
> ===
> [root@ceph-iscsi-gw-1 ~]# cat /etc/ceph/iscsi-gateway.cfg
> [config]
> # Name of the Ceph storage cluster. A suitable Ceph configuration file
> # allowing access to the Ceph storage cluster from the gateway node is
> # required, if not colocated on an OSD node.
> cluster_name = ceph
>
> # Place a copy of the ceph cluster's admin keyring in the gateway's
> # /etc/ceph directory and reference the filename here
> gateway_keyring = ceph.client.admin.keyring
>
>
> # API settings.
> # The API supports a number of options that allow you to tailor it to your
> # local environment. If you want to run the API under https, you will need
> # to create cert/key files that are compatible for each iSCSI gateway node,
> # that is not locked to a specific node. SSL cert and key files *must* be
> # called 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the
> # '/etc/ceph/' directory on *each* gateway node. With the SSL files in
> # place, you can use 'api_secure = true' to switch to https mode.
>
> # To support the API, the bare minimum settings are:
> api_secure = false
>
> # Additional API configuration options are as follows, defaults shown.
> api_user = admin
> api_password = admin
> api_port = 5001
> # API IP
> trusted_ip_list = 172.16.200.251,172.16.200.252
>
>
> --
> Best Regards,
>
> Xiubo Li (李秀波)
>
> Email: xiubli@xxxxxxxxxx/xiubli@xxxxxxx
> Slack: @Xiubo Li
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



