Re: Ceph iSCSI rbd-target.api Failed to Load


On 10/09/2022 12:50, duluxoz wrote:
Hi Guys,

So, I finally got things sorted :-)

Time to eat some crow-pie :-P

Turns out I had two issues, both of which involved typos (don't they always?).

The first was that I had transposed two digits of an IP address in the `iscsi-gateway.cfg` -> `trusted_ip_list`.

The second was that I had called the `iscsi-gateway.cfg` file `isci-gateway.cfg`.
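In hindsight, a couple of one-liners along these lines would have caught both typos straight away (a sketch only; it assumes the config lives in the usual /etc/ceph location):

~~~
# the file must be named exactly iscsi-gateway.cfg
ls -l /etc/ceph/iscsi-gateway.cfg

# eyeball the gateway IPs actually configured
grep trusted_ip_list /etc/ceph/iscsi-gateway.cfg
~~~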

Okay, that would explain why 'api_secure' was using the default value.

Thanks!



DOH!

Thanks for all your help - if I hadn't had a couple of people to bounce ideas off and point out the blindingly obvious (to confirm I wasn't going crazy), I don't think I would have found these errors so quickly.

Thank you!

Cheers

Dulux-Oz

On 10/09/2022 00:40, Bailey Allison wrote:
Hi Matt,

No problem. Looking at the output of gwcli -d there, it looks like it's having issues reaching the API endpoint. Are you able to try running:

curl --user admin:admin -X GET http://X.X.X.X:5000/api

or

curl http://X.X.X.X:5000/api

replacing X.X.X.X with the IP address of the node hosting the iSCSI gateway?

It should spit out a bunch of stuff, but it would at least let us know if the api itself is listening or not.
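(If you just want the HTTP status code rather than the full body, a variant like this should work too:

~~~
curl -s -o /dev/null -w '%{http_code}\n' --user admin:admin -X GET http://X.X.X.X:5000/api
~~~

Any HTTP code at all means something is answering on port 5000; 000 means the connection itself failed.)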

Also here's the output of gwcli -d from our cluster to compare:

root@ubuntu-gw01:~# gwcli -d
Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
Refreshing disk information from the config object
- Scanning will use 8 scan threads
- rbd image scan complete: 0s
Refreshing gateway & client information
- checking iSCSI/API ports on ubuntu-gw01
- checking iSCSI/API ports on ubuntu-gw02

1 gateway is inaccessible - updates will be disabled
Querying ceph for state information
Gathering pool stats for cluster 'ceph'

Regards,

Bailey

-----Original Message-----
From: duluxoz <duluxoz@xxxxxxxxx>
Sent: September 9, 2022 4:11 AM
To: Bailey Allison <ballison@xxxxxxxxxxxx>; ceph-users@xxxxxxx
Subject:  Re: Ceph iSCSI rbd-target.api Failed to Load

Hi Bailey,

Sorry for the delay in getting back to you (I had a few unrelated issues to resolve), and thanks for replying.

The results from `gwcli -d`:

~~~
Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
REST API failure, code : 500
Unable to access the configuration object
Traceback (most recent call last):
    File "/usr/bin/gwcli", line 194, in <module>
      main()
    File "/usr/bin/gwcli", line 108, in main
      "({})".format(settings.config.api_endpoint))
AttributeError: 'Settings' object has no attribute 'api_endpoint'
~~~

Checked all of the other things you mentioned: all good.

Any further ideas?

Cheers

On 08/09/2022 10:08, Bailey Allison wrote:
Hi Dulux-oz,

Are you able to share the output of "gwcli -d" from your iSCSI node?

Just a few things I can think to check off the top of my head: is port 5000 accessible/open on the node running iSCSI?
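For example, something along these lines would confirm it (assuming firewalld; adjust for whatever firewall you're running):

~~~
# is anything actually listening on 5000?
ss -tlnp | grep ':5000'

# is the port open in the firewall?
firewall-cmd --list-ports
~~~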

I think by default the API tries to use a pool called rbd, so does your cluster have a pool with that name? It looks like it does based on your logs, but it's something to check anyway; otherwise, I believe you can change the pool it uses via the iscsi-gateway.cfg file.
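Checking for the pool is just a one-liner, e.g.:

~~~
ceph osd pool ls | grep -w rbd
~~~

and if you did want a different pool, it should just be a matter of setting something like `pool = mypool` in iscsi-gateway.cfg (mypool being whatever pool you prefer).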

I've also found from experience that any blocklisted OSDs on the node you're running iSCSI on will prevent rbd-target-api from starting, but again, per your logs it looks like there aren't any.
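That one is quick to rule out with:

~~~
ceph osd blocklist ls
~~~

(On older releases the command is `ceph osd blacklist ls`.)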

Just in case it might help, I've also attached an iscsi-gateway.cfg file from a cluster we've got with it working here:

~~~
# This is seed configuration used by the ceph_iscsi_config modules
# when handling configuration tasks for iscsi gateway(s)
#
# Please do not change this file directly since it is managed by Ansible and will be overwritten

[config]
api_password = admin
api_port = 5000

# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.
#
# To support the API, the bare minimum settings are:
api_secure = False

# Optional settings related to the CLI/API service
api_user = admin
cluster_name = ceph
loop_delay = 1
pool = rbd
trusted_ip_list = X.X.X.X,X.X.X.X,X.X.X.X,X.X.X.X
~~~
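For completeness, if you did want https, the crt/key pair the comments above refer to could be generated with something like this (a self-signed example only; a real deployment may want a proper CA-signed cert):

~~~
openssl req -newkey rsa:2048 -nodes \
  -keyout /etc/ceph/iscsi-gateway.key \
  -x509 -days 365 \
  -out /etc/ceph/iscsi-gateway.crt \
  -subj '/CN=iscsi-gw'
~~~

with the same two files copied to /etc/ceph/ on each gateway node and api_secure flipped to true.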

-----Original Message-----
From: duluxoz <duluxoz@xxxxxxxxx>
Sent: September 7, 2022 6:38 AM
To: ceph-users@xxxxxxx
Subject:  Ceph iSCSI rbd-target.api Failed to Load

Hi All,

I've followed the instructions on the Ceph documentation website for configuring the iSCSI target. Everything went A-OK up to the point where I tried to start the rbd-target-api service, which failed (the rbd-target-gw service started OK).

A `systemctl status rbd-target-api` gives:

~~~
rbd-target-api.service - Ceph iscsi target configuration API
     Loaded: loaded (/usr/lib/systemd/system/rbd-target-api.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Wed 2022-09-07 18:07:26 AEST; 1h 5min ago
    Process: 32547 ExecStart=/usr/bin/rbd-target-api (code=exited, status=16)
   Main PID: 32547 (code=exited, status=16)

Sep 07 19:19:03 ceph-host1.mydomain.local systemd[1]: rbd-target-api.service: Start request repeated too quickly.
Sep 07 19:19:03 ceph-host1.mydomain.local systemd[1]: rbd-target-api.service: Failed with result 'exit-code'.
Sep 07 19:19:03 ceph-host1.mydomain.local systemd[1]: Failed to start Ceph iscsi target configuration API.
~~~

A `journalctl -xe` gives:

~~~
Sep 07 19:19:03 ceph-host1.mydomain.local systemd[1]: rbd-target-api.service: Start request repeated too quickly.
Sep 07 19:19:03 ceph-host1.mydomain.local systemd[1]: rbd-target-api.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- The unit rbd-target-api.service has entered the 'failed' state with result 'exit-code'.
Sep 07 19:19:03 ceph-host1.mydomain.local systemd[1]: Failed to start Ceph iscsi target configuration API.
-- Subject: Unit rbd-target-api.service has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit rbd-target-api.service has failed.
--
-- The result is failed.
~~~

The `rbd-target-api.log` gives:

~~~
2022-09-07 19:19:01,084 DEBUG [common.py:141:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2022-09-07 19:19:01,086 DEBUG [common.py:148:_open_ioctx()] - (_open_ioctx) connection opened
2022-09-07 19:19:01,105 DEBUG [common.py:438:init_config()] - (init_config) using pre existing config object
2022-09-07 19:19:01,105 DEBUG [common.py:141:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2022-09-07 19:19:01,105 DEBUG [common.py:148:_open_ioctx()] - (_open_ioctx) connection opened
2022-09-07 19:19:01,106 DEBUG [common.py:120:_read_config_object()] - _read_config_object reading the config object
2022-09-07 19:19:01,107 DEBUG [common.py:170:_get_ceph_config()] - (_get_rbd_config) config object contains 'b'{\n"created": "2022/09/07 07:25:58",\n"discovery_auth": {\n"mutual_password": "",\n"mutual_password_encryption_enabled": false,\n"mutual_username": "",\n"password": "",\n"password_encryption_enabled": false,\n"username": ""\n},\n"disks": {},\n"epoch": 0,\n"gateways": {},\n"targets": {},\n"updated": "",\n"version": 11\n}''
2022-09-07 19:19:01,107 INFO [rbd-target-api:2810:run()] - Started the configuration object watcher
2022-09-07 19:19:01,107 INFO [rbd-target-api:2812:run()] - Checking for config object changes every 1s
2022-09-07 19:19:01,109 INFO [gateway.py:82:osd_blocklist_cleanup()] - Processing osd blocklist entries for this node
2022-09-07 19:19:01,497 INFO [gateway.py:140:osd_blocklist_cleanup()] - No OSD blocklist entries found
2022-09-07 19:19:01,497 INFO [gateway.py:250:define()] - Reading the configuration object to update local LIO configuration
2022-09-07 19:19:01,497 INFO [gateway.py:261:define()] - Configuration does not have an entry for this host(ceph-host1.mydomain.local) - nothing to define to LIO
2022-09-07 19:19:01,507 CRITICAL [rbd-target-api:2942:main()] - Secure API requested but the crt/key files missing/incompatible?
2022-09-07 19:19:01,508 CRITICAL [rbd-target-api:2944:main()] - Unable to start
2022-09-07 19:19:01,956 DEBUG [common.py:141:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2022-09-07 19:19:01,958 DEBUG [common.py:148:_open_ioctx()] - (_open_ioctx) connection opened
2022-09-07 19:19:01,976 DEBUG [common.py:438:init_config()] - (init_config) using pre existing config object
2022-09-07 19:19:01,976 DEBUG [common.py:141:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2022-09-07 19:19:01,976 DEBUG [common.py:148:_open_ioctx()] - (_open_ioctx) connection opened
2022-09-07 19:19:01,977 DEBUG [common.py:120:_read_config_object()] - _read_config_object reading the config object
2022-09-07 19:19:01,978 DEBUG [common.py:170:_get_ceph_config()] - (_get_rbd_config) config object contains 'b'{\n"created": "2022/09/07 07:25:58",\n"discovery_auth": {\n"mutual_password": "",\n"mutual_password_encryption_enabled": false,\n"mutual_username": "",\n"password": "",\n"password_encryption_enabled": false,\n"username": ""\n},\n"disks": {},\n"epoch": 0,\n"gateways": {},\n"targets": {},\n"updated": "",\n"version": 11\n}''
2022-09-07 19:19:01,979 INFO [rbd-target-api:2810:run()] - Started the configuration object watcher
2022-09-07 19:19:01,979 INFO [rbd-target-api:2812:run()] - Checking for config object changes every 1s
2022-09-07 19:19:01,980 INFO [gateway.py:82:osd_blocklist_cleanup()] - Processing osd blocklist entries for this node
2022-09-07 19:19:02,367 INFO [gateway.py:140:osd_blocklist_cleanup()] - No OSD blocklist entries found
2022-09-07 19:19:02,368 INFO [gateway.py:250:define()] - Reading the configuration object to update local LIO configuration
2022-09-07 19:19:02,368 INFO [gateway.py:261:define()] - Configuration does not have an entry for this host(ceph-host1.mydomain.local) - nothing to define to LIO
2022-09-07 19:19:02,378 CRITICAL [rbd-target-api:2942:main()] - Secure API requested but the crt/key files missing/incompatible?
2022-09-07 19:19:02,379 CRITICAL [rbd-target-api:2944:main()] - Unable to start
2022-09-07 19:19:02,960 DEBUG [common.py:141:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2022-09-07 19:19:02,962 DEBUG [common.py:148:_open_ioctx()] - (_open_ioctx) connection opened
2022-09-07 19:19:02,980 DEBUG [common.py:438:init_config()] - (init_config) using pre existing config object
2022-09-07 19:19:02,980 DEBUG [common.py:141:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2022-09-07 19:19:02,980 DEBUG [common.py:148:_open_ioctx()] - (_open_ioctx) connection opened
2022-09-07 19:19:02,981 DEBUG [common.py:120:_read_config_object()] - _read_config_object reading the config object
2022-09-07 19:19:02,982 DEBUG [common.py:170:_get_ceph_config()] - (_get_rbd_config) config object contains 'b'{\n"created": "2022/09/07 07:25:58",\n"discovery_auth": {\n"mutual_password": "",\n"mutual_password_encryption_enabled": false,\n"mutual_username": "",\n"password": "",\n"password_encryption_enabled": false,\n"username": ""\n},\n"disks": {},\n"epoch": 0,\n"gateways": {},\n"targets": {},\n"updated": "",\n"version": 11\n}''
2022-09-07 19:19:02,983 INFO [rbd-target-api:2810:run()] - Started the configuration object watcher
2022-09-07 19:19:02,983 INFO [rbd-target-api:2812:run()] - Checking for config object changes every 1s
2022-09-07 19:19:02,985 INFO [gateway.py:82:osd_blocklist_cleanup()] - Processing osd blocklist entries for this node
2022-09-07 19:19:03,370 INFO [gateway.py:140:osd_blocklist_cleanup()] - No OSD blocklist entries found
2022-09-07 19:19:03,371 INFO [gateway.py:250:define()] - Reading the configuration object to update local LIO configuration
2022-09-07 19:19:03,371 INFO [gateway.py:261:define()] - Configuration does not have an entry for this host(ceph-host1.mydomain.local) - nothing to define to LIO
2022-09-07 19:19:03,381 CRITICAL [rbd-target-api:2942:main()] - Secure API requested but the crt/key files missing/incompatible?
2022-09-07 19:19:03,381 CRITICAL [rbd-target-api:2944:main()] - Unable to start
~~~

My `iscsi-gateway.cfg` file reads:

~~~
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring

# API settings.
# The api supports a number of options that allow you to tailor it to your
# local environment. If you want to run the api under https, you will need to
# create crt/key files that are compatible for each gateway node (i.e. not
# locked to a specific node). SSL crt and key files *must* be called
# iscsi-gateway.crt and iscsi-gateway.key and placed in /etc/ceph on *each*
# gateway node. With the SSL files in place, you can use api_secure = true
# to switch to https mode.
api_secure = false

# Additional API configuration options are as follows (defaults shown);
# api_user = admin
# api_password = admin
# api_port = 5000
trusted_ip_list = 192.168.1.101,192.168.1.101,192.168.1.101

# Refer to the ceph-iscsi-config/settings module for more options
~~~

192.168.1.101 is the IP address of ceph-host1.mydomain.local (and similarly for the IP addresses / hostnames of the other two nodes, which are yet to be installed).

The iSCSI Node is co-located on an OSD Node.

The cluster is working (apart from the iSCSI part, of course).

So, could someone be kind enough to point out what I'm missing (i.e. what's wrong)? Thanks in advance.

Dulux-Oz




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



