Re: ceph-iscsi on RL9

Hi,

I'm not sure how long your iSCSI gateways will keep working, as the iSCSI gateway has been deprecated [1]:

The iSCSI gateway is in maintenance as of November 2022. This means that it is no longer in active development and will not be updated to add new features.

Some more information was provided in [2].

To remove an iSCSI gateway you could first check the current list:

$ ceph dashboard iscsi-gateway-list
{"gateways": {"test-gateway.my.domain": {"service_url": "http://{USER}:{PASSWORD}@{IP_ADDRESS}:5000"}}}

and then remove them if you can confirm that those are the old ones:

$ ceph dashboard iscsi-gateway-rm test-gateway.my.domain
Success
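
If your cluster is cephadm-managed (I'm assuming it is, since you created the new services via the dashboard), the orchestrator is a separate place to check - the new GUI-created services should show up there regardless of what `ceph -s` reports, and old gateways may still be listed there, too. Something like (the service name below is just a placeholder):

$ ceph orch ls iscsi
$ ceph orch ps --daemon-type iscsi

and, if an old service really is left over:

$ ceph orch rm <service_name>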


Regards,
Eugen

[1] https://docs.ceph.com/en/quincy/rbd/iscsi-overview/#ceph-iscsi
[2] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/GDJJL7VSDUJITPM3JV7RCVXVOIQO2CAN/

Quoting duluxoz <duluxoz@xxxxxxxxx>:

Hi All,

A follow-up: So, I've got all the Ceph nodes running Reef v18.2.1 on RL9.3, and everything is working - YAH!

Except...

The Ceph Dashboard shows 0 of 3 iSCSI Gateways working, and when I click on that panel it returns a "Page not Found" message - so I *assume* those are the three "original" iSCSI Gateways I had set up under Quincy/RL8.

How do I get rid of them? I think I've removed all references to them (i.e. tcmu-runner, rbd-target-api, rbd-target-gw) but obviously something has been missed - could someone please point me in the right direction to finish "cleaning them up" - thanks.
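
(For completeness, this is roughly what I ran on each of the old gateway nodes - the unit names are the ones shipped by the ceph-iscsi packages:)

$ systemctl list-units --all 'tcmu-runner*' 'rbd-target*'
$ systemctl disable --now rbd-target-gw rbd-target-api tcmu-runner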

I've also created (via the Web GUI) three new iSCSI Services, which the GUI says are running. `ceph -s`, however, doesn't show them at all - is this normal?

Also, it's not clear (to me) from the Reef doco whether there is anything else that needs to be done to get iSCSI up and running on the server side (obviously I need to create/update the initiators on the client side). Under the "old manual" way of doing it (i.e. https://docs.ceph.com/en/reef/rbd/iscsi-target-cli/) there was "extra stuff to do" - does that no longer apply?

And finally, during my investigations I discovered a systemd service for osd.21 loaded but failed - there is no osd.21, so I must have made a typo somewhere in the past (there are only 21 OSDs in the cluster, so the last one is osd.20). The trouble is I can't seem to find *where* this is defined (i.e. none of the typical commands, e.g. `ceph osd destroy osd.21`, can find it and/or get rid of it) - could someone please help me out with this as well - thanks.
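
(Would something like this be the right way to track down where the unit is defined? I'm assuming the package-style unit name here - under cephadm it would be something like ceph-<fsid>@osd.21.service instead:)

$ systemctl list-units --all 'ceph*osd*'
$ systemctl cat ceph-osd@21.service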

Anything else anyone wants to know, please ask.

Cheers

Dulux-oz
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

