Re: Starting service rbd-target-api fails

On 12/06/2019 12:10 PM, Mike Christie wrote:
> On 12/06/2019 01:11 AM, Thomas Schneider wrote:
>> Hi Mike,
>>
>> you actually pointed me to the right log; I can find the relevant
>> information in this logfile, /var/log/rbd-target-api/rbd-target-api.log:
>> root@ld5505:~# tail -f /var/log/rbd-target-api/rbd-target-api.log
>> 2019-12-04 12:09:52,986    ERROR [rbd-target-api:2918:<module>()] -
>> 'rbd' pool does not exist!
>> 2019-12-04 12:09:52,986 CRITICAL [rbd-target-api:2736:halt()] - Unable
>> to open/read the configuration object
>> 2019-12-04 12:09:53,474    DEBUG [common.py:128:_open_ioctx()] -
>> (_open_ioctx) Opening connection to rbd pool
>> 2019-12-04 12:09:53,481    ERROR [common.py:133:_open_ioctx()] -
>> (_open_ioctx) rbd does not exist
>> 2019-12-04 12:09:53,481    ERROR [rbd-target-api:2918:<module>()] -
>> 'rbd' pool does not exist!
>> 2019-12-04 12:09:53,481 CRITICAL [rbd-target-api:2736:halt()] - Unable
>> to open/read the configuration object
>> 2019-12-04 12:09:53,977    DEBUG [common.py:128:_open_ioctx()] -
>> (_open_ioctx) Opening connection to rbd pool
>> 2019-12-04 12:09:53,986    ERROR [common.py:133:_open_ioctx()] -
>> (_open_ioctx) rbd does not exist
>> 2019-12-04 12:09:53,986    ERROR [rbd-target-api:2918:<module>()] -
>> 'rbd' pool does not exist!
>> 2019-12-04 12:09:53,986 CRITICAL [rbd-target-api:2736:halt()] - Unable
>> to open/read the configuration object
>>
>> This error message is clear: pool 'rbd' does not exist.
>>
>> However, this pool does exist:
>> root@ld5505:~# ceph osd pool ls detail
>> pool 11 'hdb_backup' replicated size 3 min_size 2 crush_rule 1
>> object_hash rjenkins pg_num 16384 pgp_num 16384 last_change 404671 lfor
>> 0/0/319352 flags hashpspool,selfmanaged_snaps stripe_width 0 pg_num_min
>> 8192 application rbd
>>         removed_snaps [1~3]
>> pool 59 'hdd' replicated size 3 min_size 2 crush_rule 3 object_hash
>> rjenkins pg_num 2048 pgp_num 2048 last_change 319283 lfor
>> 307105/317145/317153 flags hashpspool,selfmanaged_snaps stripe_width 0
>> pg_num_min 1024 application rbd
>>         removed_snaps [1~3]
>> pool 62 'cephfs_data' replicated size 3 min_size 2 crush_rule 3
>> object_hash rjenkins pg_num 32 pgp_num 32 last_change 319282 lfor
>> 300310/300310/300310 flags hashpspool stripe_width 0 pg_num_min 32
>> application cephfs
>> pool 63 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 3
>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 319280 flags
>> hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 8
>> recovery_priority 5 application cephfs
>> pool 65 'nvme' replicated size 2 min_size 2 crush_rule 2 object_hash
>> rjenkins pg_num 128 pgp_num 128 last_change 319281 flags hashpspool
>> stripe_width 0 pg_num_min 128 application rbd
>> pool 66 'ssd' replicated size 2 min_size 2 crush_rule 4 object_hash
>> rjenkins pg_num 1024 pgp_num 1024 last_change 405344 lfor 0/0/405339
>> flags hashpspool stripe_width 0 pg_num_min 512 application rbd
>> pool 67 'rbd' replicated size 3 min_size 2 crush_rule 3 object_hash
>> rjenkins pg_num 32 pgp_num 32 last_change 457155 flags hashpspool
>> stripe_width 0 application rbd
>>
>> So the next question is:
>> Why is the required pool 'rbd' not found?
>>
>> Remark:
>> Pool 67 'rbd' was renamed; its original name was 'iscsi'.
>> I hope this is not the root cause.
>>
> 
> I tested renaming the pool here and it worked fine for me.
> 
> Have you used the pool before? Do you have some leftover
> auth/permissions that did not get set up on the gw nodes?
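
To rule that out, you could check from one of the gw nodes that the
client the gateway uses can actually reach the pool. A rough sketch --
client.admin is only the default assumption, substitute whatever client
your gateways are configured with:

ceph auth get client.admin    # caps the cluster has for that client
rados -p rbd ls               # can it list objects in the 'rbd' pool?

If either of those fails, the gw node's keyring or caps are the problem
rather than ceph-iscsi itself.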
> 
> If you are using ceph-iscsi 3 or newer, rename the pool back to iscsi or
> try a new one, and do:
> 
> [config]
> pool = iscsi
> 
> in /etc/ceph/iscsi-gateway.cfg on all gw nodes. Does it start ok then?
> 
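
For completeness, the rename back and restart would be something like
this (just a sketch; 'iscsi' as the pool name follows the example above):

ceph osd pool rename rbd iscsi      # rename the pool back
systemctl restart rbd-target-api    # on each gw node, after editing
                                    # /etc/ceph/iscsi-gateway.cfg

The cfg should only be read at service start, so each gateway needs the
restart.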

With the pool still named rbd, could you also turn on debugging?
Something like:

[client]
log to syslog = true
debug rbd = 20/20

in /etc/ceph/ceph.conf. You should get a lot of debug messages in
/var/log/messages.
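
For example -- the exact log file is an assumption here, on Debian-based
gw nodes it may be /var/log/syslog instead:

systemctl restart rbd-target-api            # pick up the ceph.conf change
tail -f /var/log/messages | grep -i rbd     # or /var/log/syslog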


