Re: Problems getting ceph-iscsi to work


 



As I continue to try to get this to work in the first place, I came across this gem in journalctl -xe:

Apr 29 12:00:57 iscsi1 rbd-target-api[20275]: Traceback (most recent call last):
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:   File "/usr/bin/rbd-target-api", line 2951, in <module>
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:     main()
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:   File "/usr/bin/rbd-target-api", line 2861, in main
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:     osd_state_ok = ceph_gw.osd_blacklist_cleanup()
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:   File "/usr/lib/python3.6/site-packages/ceph_iscsi_config/gateway.py", line 110, in osd_blacklist_cleanup
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:     rm_ok = self.ceph_rm_blacklist(blacklist_entry.split(' ')[0])
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:   File "/usr/lib/python3.6/site-packages/ceph_iscsi_config/gateway.py", line 46, in ceph_rm_blacklist
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]:     if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
Apr 29 12:00:57 iscsi1 rbd-target-api[20275]: TypeError: a bytes-like object is required, not 'str'


Again, the api daemon runs for about 4-5 seconds, then dies.  The above message is the first time I have seen exceptions logged.

Maybe it means something useful here....
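
If I am reading that traceback right, the subprocess output handled in gateway.py is bytes under Python 3, while the membership checks are against str literals. Just to illustrate the mismatch (a standalone sketch only, not the actual ceph-iscsi code or its eventual fix):

import subprocess

# Under Python 3, check_output() returns bytes, so a substring test against
# a str literal raises the same TypeError seen in the traceback above.
result = subprocess.check_output(["echo", "isn't blacklisted"])
# if "isn't blacklisted" in result:   # TypeError: a bytes-like object is required, not 'str'

# Decoding first makes the comparison type-consistent.
result = result.decode("utf-8")
if ("un-blacklisting" in result) or ("isn't blacklisted" in result):
    print("blacklist entry handled")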

Ron

----- Original Message -----
From: "Ron Gage" <ron@xxxxxxxxxxx>
To: "dillaman" <dillaman@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxx>
Sent: Wednesday, April 29, 2020 9:47:28 AM
Subject:  Re: Problems getting ceph-iscsi to work

Well, some progress - for what it's worth... 

rbd-target-api ran for about 5 seconds before it failed. It also produced some logs. There are no apparent errors in the logs, however: 
2020-04-29 09:41:35,273 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:35,275 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:35,307 DEBUG [common.py:434:init_config()] - (init_config) created empty config object 
2020-04-29 09:41:35,307 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:35,307 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:35,307 DEBUG [common.py:118:_read_config_object()] - _read_config_object reading the config object 
2020-04-29 09:41:35,308 DEBUG [common.py:160:_get_ceph_config()] - (_get_rbd_config) config object is empty..seeding it 
2020-04-29 09:41:35,308 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:35,308 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:35,308 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:35,308 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:35,308 DEBUG [common.py:448:lock()] - config.lock attempting to acquire lock on gateway.conf 
2020-04-29 09:41:35,318 DEBUG [common.py:118:_read_config_object()] - _read_config_object reading the config object 
2020-04-29 09:41:35,318 DEBUG [common.py:494:_seed_rbd_config()] - _seed_rbd_config found empty config object 
2020-04-29 09:41:35,340 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:35,340 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:35,340 DEBUG [common.py:471:unlock()] - config.unlock releasing lock on gateway.conf 
2020-04-29 09:41:35,352 DEBUG [common.py:168:_get_ceph_config()] - (_get_rbd_config) config object contains '{"disks": {}, "gateways": {}, "targets": {}, "discovery_auth": {"username": "", "password": "", "password_encryption_enabled": false, "mutual_username": "", "mutual_password": "", "mutual_password_encryption_enabled": false}, "version": 11, "epoch": 0, "created": "2020/04/29 13:41:35", "updated": ""}' 
2020-04-29 09:41:35,352 INFO [rbd-target-api:2784:run()] - Started the configuration object watcher 
2020-04-29 09:41:35,353 INFO [rbd-target-api:2786:run()] - Checking for config object changes every 1s 
2020-04-29 09:41:35,356 INFO [gateway.py:66:osd_blacklist_cleanup()] - Processing osd blacklist entries for this node 
2020-04-29 09:41:35,736 INFO [gateway.py:37:ceph_rm_blacklist()] - Removing blacklisted entry for this host : 192.168.0.61:0/217223056 
2020-04-29 09:41:37,104 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:37,106 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:37,148 DEBUG [common.py:436:init_config()] - (init_config) using pre existing config object 
2020-04-29 09:41:37,149 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:37,149 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:37,149 DEBUG [common.py:118:_read_config_object()] - _read_config_object reading the config object 
2020-04-29 09:41:37,150 DEBUG [common.py:168:_get_ceph_config()] - (_get_rbd_config) config object contains 'b'{\n "created": "2020/04/29 13:41:35",\n "discovery_auth": {\n "mutual_password": "",\n "mutual_password_encryption_enabled": false,\n "mutual_username": "",\n "password": "",\n "password_encryption_enabled": false,\n "username": ""\n },\n "disks": {},\n "epoch": 0,\n "gateways": {},\n "targets": {},\n "updated": "",\n "version": 11\n}'' 
2020-04-29 09:41:37,150 INFO [rbd-target-api:2784:run()] - Started the configuration object watcher 
2020-04-29 09:41:37,150 INFO [rbd-target-api:2786:run()] - Checking for config object changes every 1s 
2020-04-29 09:41:37,152 INFO [gateway.py:66:osd_blacklist_cleanup()] - Processing osd blacklist entries for this node 
2020-04-29 09:41:37,547 INFO [gateway.py:37:ceph_rm_blacklist()] - Removing blacklisted entry for this host : 192.168.0.61:0/3465155888 
2020-04-29 09:41:39,534 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:39,537 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:39,556 DEBUG [common.py:436:init_config()] - (init_config) using pre existing config object 
2020-04-29 09:41:39,557 DEBUG [common.py:139:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool 
2020-04-29 09:41:39,557 DEBUG [common.py:146:_open_ioctx()] - (_open_ioctx) connection opened 
2020-04-29 09:41:39,557 DEBUG [common.py:118:_read_config_object()] - _read_config_object reading the config object 
2020-04-29 09:41:39,558 DEBUG [common.py:168:_get_ceph_config()] - (_get_rbd_config) config object contains 'b'{\n "created": "2020/04/29 13:41:35",\n "discovery_auth": {\n "mutual_password": "",\n "mutual_password_encryption_enabled": false,\n "mutual_username": "",\n "password": "",\n "password_encryption_enabled": false,\n "username": ""\n },\n "disks": {},\n "epoch": 0,\n "gateways": {},\n "targets": {},\n "updated": "",\n "version": 11\n}'' 
2020-04-29 09:41:39,558 INFO [rbd-target-api:2784:run()] - Started the configuration object watcher 
2020-04-29 09:41:39,558 INFO [rbd-target-api:2786:run()] - Checking for config object changes every 1s 
2020-04-29 09:41:39,560 INFO [gateway.py:66:osd_blacklist_cleanup()] - Processing osd blacklist entries for this node 
2020-04-29 09:41:39,967 INFO [gateway.py:37:ceph_rm_blacklist()] - Removing blacklisted entry for this host : 192.168.0.61:0/1337514082 
rbd-target-gw still dies instantly and produces no log files... 
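
In case it helps narrow things down, below is a minimal standalone sketch of what the daemon is doing in _open_ioctx() / _read_config_object() above: connect to the cluster, open the rbd pool, and read the gateway.conf object back. It assumes the default /etc/ceph/ceph.conf and a usable admin keyring on this node, so adjust paths/credentials as needed:

import rados

# Connect roughly the way the gateway does (default conffile, admin keyring assumed).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")
    try:
        # gateway.conf is the object rbd-target-api seeds and re-reads in the log above
        data = ioctx.read("gateway.conf", 8192)
        print(data.decode("utf-8"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()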




From: "Jason Dillaman" <jdillama@xxxxxxxxxx> 
To: "Ron Gage" <ron@xxxxxxxxxxx> 
Cc: "ceph-users" <ceph-users@xxxxxxx> 
Sent: Wednesday, April 29, 2020 9:32:31 AM 
Subject: Re:  Problems getting ceph-iscsi to work 

On Wed, Apr 29, 2020 at 9:27 AM Ron Gage <ron@xxxxxxxxxxx> wrote: 


Hi everyone! 

I have been working for the past week or so trying to get ceph-iscsi to work - Octopus release. Even just getting a single node working would be a major victory in this battle, but so far victory has proven elusive. 

My setup: a pair of Dell Optiplex 7010 desktops, each with 16 gig of memory and 1 boot drive (USB 3) and 3 SATA drives (500 Gb SSHD drives). No RAID controllers anywhere. Yes, I know that 3 nodes is the recommended minimum number for a production system - this isn't production (this is just seeing if the darned thing will even work). 

I am using Centos 8.1.1911 for the OS (4.18.0 kernel) with a basic or minimal installation (no X-Window). Single Gigabit ethernet per node. I have 2 MON and 2 Mgr installed and working, and I have a total of 6 OSDs working. I created the RBD pool (named "rbd" per the published instructions), creating it initially with 256 PGs (autoscale decided that 32 was a better choice - whatever). The cluster is green and all 6 OSDs are green (up and in). All deployment is via cephadm and all containers are running via podman. 

Here is where things start to fall apart. 

I was able to find RPM packages for targetcli and python-rtslib (called python3-rtslib) but was not able to find tcmu-runner or ceph-iscsi packages. OK, no big deal. Time to head over to the manual install guide. 



http://download.ceph.com/ceph-iscsi/3/rpm/el8/noarch/ for the latter. 



I was able to build tcmu-runner and install it, and apparently it is running (systemctl says it is active), so that appears to be OK. 

The problem is getting rbd-target-gw and rbd-target-api to work. They appear to build OK and, of course, I am able to get them registered with systemd. They universally fail when I try to run them (systemctl start rbd-target-gw or systemctl start rbd-target-api). Both report failure. Looking in journalctl -xe shows no hints at all regarding why they failed (only that they did). Looking in /var/log/rbd-target-api/ shows nothing at all (no files). Likewise in /var/log/rbd-target-gw/ (no files). 

HELP!! 

Now, some possibly germane questions: 
1) are any other Ceph services, such as RADOSgw, required for ceph-iscsi to work? 


Nope. 


2) since there are no apparent packages available for ceph-iscsi, can anything be inferred about the production-readiness of the subsystem? 


See above. 


3) are there any known errata or missing steps in the instructions for getting ceph-iscsi to work? 


Not to my knowledge. 



Thanks! 

Ron Gage 






-- 
Jason 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


