Hi Mike,

actually you pointed to the right log; I can find the relevant information in /var/log/rbd-target-api/rbd-target-api.log:

root@ld5505:~# tail -f /var/log/rbd-target-api/rbd-target-api.log
2019-12-04 12:09:52,986 ERROR [rbd-target-api:2918:<module>()] - 'rbd' pool does not exist!
2019-12-04 12:09:52,986 CRITICAL [rbd-target-api:2736:halt()] - Unable to open/read the configuration object
2019-12-04 12:09:53,474 DEBUG [common.py:128:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2019-12-04 12:09:53,481 ERROR [common.py:133:_open_ioctx()] - (_open_ioctx) rbd does not exist
2019-12-04 12:09:53,481 ERROR [rbd-target-api:2918:<module>()] - 'rbd' pool does not exist!
2019-12-04 12:09:53,481 CRITICAL [rbd-target-api:2736:halt()] - Unable to open/read the configuration object
2019-12-04 12:09:53,977 DEBUG [common.py:128:_open_ioctx()] - (_open_ioctx) Opening connection to rbd pool
2019-12-04 12:09:53,986 ERROR [common.py:133:_open_ioctx()] - (_open_ioctx) rbd does not exist
2019-12-04 12:09:53,986 ERROR [rbd-target-api:2918:<module>()] - 'rbd' pool does not exist!
2019-12-04 12:09:53,986 CRITICAL [rbd-target-api:2736:halt()] - Unable to open/read the configuration object

The error message is clear: pool 'rbd' does not exist.
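For context: if I understand the ceph-iscsi defaults correctly, rbd-target-api stores its configuration in a RADOS object (gateway.conf) inside the pool named by the `pool` setting in the [config] section of /etc/ceph/iscsi-gateway.cfg, falling back to 'rbd' when the setting is absent. A quick sketch of how to confirm which pool the daemon will actually try to open (the sample config written to /tmp is hypothetical, for illustration; the cluster commands only run where the ceph CLI is present):

```shell
# Hypothetical sample of /etc/ceph/iscsi-gateway.cfg, for illustration only.
cat > /tmp/iscsi-gateway.cfg <<'EOF'
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
# pool = rbd   # ceph-iscsi falls back to 'rbd' when this is unset
EOF

# Effective pool name: the 'pool' setting if present, otherwise 'rbd'.
pool=$(awk -F' *= *' '$1 == "pool" {print $2}' /tmp/iscsi-gateway.cfg)
echo "effective pool: ${pool:-rbd}"

# Cross-check on the gateway node, where the ceph CLI is available:
if command -v ceph >/dev/null 2>&1; then
    ceph osd pool stats "${pool:-rbd}"              # errors if the pool is absent
    rados -p "${pool:-rbd}" ls | grep gateway.conf  # the config object the API reads
fi
```

Comparing that effective pool name against `ceph osd pool ls` from the same node (and with the same keyring/cluster_name the daemon uses) should narrow down whether the daemon is looking in the pool you think it is.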
However, this pool does exist:

root@ld5505:~# ceph osd pool ls detail
pool 11 'hdb_backup' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16384 pgp_num 16384 last_change 404671 lfor 0/0/319352 flags hashpspool,selfmanaged_snaps stripe_width 0 pg_num_min 8192 application rbd
        removed_snaps [1~3]
pool 59 'hdd' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 319283 lfor 307105/317145/317153 flags hashpspool,selfmanaged_snaps stripe_width 0 pg_num_min 1024 application rbd
        removed_snaps [1~3]
pool 62 'cephfs_data' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 32 pgp_num 32 last_change 319282 lfor 300310/300310/300310 flags hashpspool stripe_width 0 pg_num_min 32 application cephfs
pool 63 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 8 pgp_num 8 last_change 319280 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8 recovery_priority 5 application cephfs
pool 65 'nvme' replicated size 2 min_size 2 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 319281 flags hashpspool stripe_width 0 pg_num_min 128 application rbd
pool 66 'ssd' replicated size 2 min_size 2 crush_rule 4 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 405344 lfor 0/0/405339 flags hashpspool stripe_width 0 pg_num_min 512 application rbd
pool 67 'rbd' replicated size 3 min_size 2 crush_rule 3 object_hash rjenkins pg_num 32 pgp_num 32 last_change 457155 flags hashpspool stripe_width 0 application rbd

So the next question is: why is the required pool 'rbd' not found?

Remark: Pool 67 'rbd' was renamed; its original name was 'iscsi'. I hope this is not the root cause.

THX

On 05.12.2019 at 19:15, Mike Christie wrote:
> On 12/05/2019 03:16 AM, Thomas Schneider wrote:
>> Hi,
>>
>> I want to set up the Ceph iSCSI Gateway and I am following this
>> documentation: <https://docs.ceph.com/docs/master/rbd/iscsi-overview/>
>> In step "Setup" of the process "Configuring the iSCSI target using the
>> command line interface
>> <https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/>" I cannot
>> start the service rbd-target-api.
>>
>> There's no error message in the status output or anywhere else:
>>
>> root@ld5505:~# systemctl status rbd-target-api
>> _ rbd-target-api.service - Ceph iscsi target configuration API
>>    Loaded: loaded (/lib/systemd/system/rbd-target-api.service; enabled; vendor preset: enabled)
>>    Active: failed (Result: exit-code) since Wed 2019-12-04 13:47:51 CET; 3min 16s ago
>>   Process: 4143457 ExecStart=/usr/bin/rbd-target-api (code=exited, status=1/FAILURE)
>>  Main PID: 4143457 (code=exited, status=1/FAILURE)
>>
>> Dec 04 13:47:51 ld5505 systemd[1]: rbd-target-api.service: Service RestartSec=100ms expired, scheduling restart.
>> Dec 04 13:47:51 ld5505 systemd[1]: rbd-target-api.service: Scheduled restart job, restart counter is at 3.
>> Dec 04 13:47:51 ld5505 systemd[1]: Stopped Ceph iscsi target configuration API.
>> Dec 04 13:47:51 ld5505 systemd[1]: rbd-target-api.service: Start request repeated too quickly.
>> Dec 04 13:47:51 ld5505 systemd[1]: rbd-target-api.service: Failed with result 'exit-code'.
>> Dec 04 13:47:51 ld5505 systemd[1]: Failed to start Ceph iscsi target configuration API.
>>
> Check /var/log/messages and journalctl for rbd-target-api messages.
> There is also a log in /var/log/rbd-target-api/rbd-target-api.log, but I
> think we probably crashed before we got anything useful there.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
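A side note on the systemd output quoted above: the "Start request repeated too quickly" lines mean systemd hit its start-rate limit and stopped retrying, so even after the underlying pool problem is fixed the unit will stay in the failed state until it is reset. A sketch of the recovery steps (unit name taken from the thread; the guard is only there so the sketch is harmless on a host that does not have the unit installed):

```shell
# Clear systemd's start-rate limit and retry, then read the Python
# traceback (if any) from the journal rather than 'systemctl status' alone.
if systemctl cat rbd-target-api.service >/dev/null 2>&1; then
    systemctl reset-failed rbd-target-api.service
    systemctl start rbd-target-api.service
    journalctl -u rbd-target-api.service --no-pager -n 50
else
    echo "rbd-target-api unit not present on this host"
fi
```

Running `/usr/bin/rbd-target-api` in the foreground on the gateway node is another way to see the full startup traceback directly on the terminal.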