Re: ceph rbd iscsi gwcli Non-existent images

On Mon, Aug 10, 2020 at 9:23 AM Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>
> Thanks, but that is why I am puzzled - the image is there:

Can you enable debug logging for the iscsi-gateway-api (add
"debug=true" in the config file), restart the daemons, and retry?

>  rbd -p rbd info vmware01
> rbd image 'vmware01':
>         size 6 TiB in 1572864 objects
>         order 22 (4 MiB objects)
>         id: 16d3f6b8b4567
>         block_name_prefix: rbd_data.16d3f6b8b4567
>         format: 2
>         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>         op_features:
>         flags:
>         create_timestamp: Thu Nov 29 13:56:28 2018
>
> On Mon, 10 Aug 2020 at 09:21, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>> >
>> > Hi,
> > I would appreciate any help/hints to solve this issue:
> > iSCSI (gwcli) cannot see the images anymore.
>> >
> > This configuration worked fine for many months.
> > What changed is that Ceph is now "nearly full".
>> >
> > I am in the process of cleaning it up (by deleting objects from one of
> > the pools), and I still see reads and writes on the cluster, as well as
> > image info, so I am not sure what gwcli does not like.
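> >
> > The cleanup is plain rados object deletes, along these lines (the pool
> > and object names here are only examples):
> >
> >     rados -p cephfs_data ls | head
> >     rados -p cephfs_data rm <object-name>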
>> >
> > (targetcli ls is not working either - it just freezes.)
>> >
> > Below is some info:
>> >
>> > ceph version
> > ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
>> >
>> >  gwcli --version
>> > gwcli - 2.7
>> >
>> >  ceph osd dump | grep ratio
>> >
>> > full_ratio 0.96
>> > backfillfull_ratio 0.92
>> > nearfull_ratio 0.9
>> >
>> > [root@osd02 ~]# rbd -p rbd info rep01
>> > rbd image 'rep01':
>> >         size 7 TiB in 1835008 objects
>> >         order 22 (4 MiB objects)
>> >         id: 15b366b8b4567
>> >         block_name_prefix: rbd_data.15b366b8b4567
>> >         format: 2
> >         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>> >         op_features:
>> >         flags:
>> >         create_timestamp: Thu Nov  1 15:57:52 2018
>> > [root@osd02 ~]# rbd -p rbd info vmware01
>> > rbd image 'vmware01':
>> >         size 6 TiB in 1572864 objects
>> >         order 22 (4 MiB objects)
>> >         id: 16d3f6b8b4567
>> >         block_name_prefix: rbd_data.16d3f6b8b4567
>> >         format: 2
> >         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>> >         op_features:
>> >         flags:
>> >         create_timestamp: Thu Nov 29 13:56:28 2018
>> > [root@osd02 ~]# ceph df
>> > GLOBAL:
>> >     SIZE       AVAIL       RAW USED     %RAW USED
>> >     33 TiB     7.5 TiB       25 TiB         77.16
>> > POOLS:
>> >     NAME                ID     USED        %USED     MAX AVAIL     OBJECTS
>> >     cephfs_metadata     22     173 MiB      0.01       1.4 TiB         469
>> >     cephfs_data         23     1.7 TiB     69.78       775 GiB      486232
>> >     rbd                 24      11 TiB     93.74       775 GiB     2974077
>> > [root@osd02 ~]# ceph health detail
> > HEALTH_ERR 2 nearfull osd(s); 2 pool(s) nearfull; Module 'prometheus' has failed: IOError("Port 9283 not free on '10.10.35.20'",)
>> > OSD_NEARFULL 2 nearfull osd(s)
>> >     osd.12 is near full
>> >     osd.17 is near full
>> > POOL_NEARFULL 2 pool(s) nearfull
>> >     pool 'cephfs_data' is nearfull
>> >     pool 'rbd' is nearfull
>> >
>> >
>> >
>> > gwcli
>> > /iscsi-target...nner-21faa413> info
>> > Client Iqn .. iqn.1998-01.com.vmware:banner-21faa413
>> > Ip Address ..
>> > Alias      ..
>> > Logged In  ..
>> > Auth
>> > - chap .. cephuser/PASSWORD
>> > Group Name ..
>> > Luns
>> > - rbd.rep01    .. lun_id=0
>> > - rbd.vmware01 .. lun_id=1
>> >
>> >
>> >
> > osd02 journal: client update failed on iqn.1998-01.com.vmware:banner-21faa413 : Non-existent images ['rbd.vmware01'] requested for iqn.1998-01.com.vmware:banner-21faa413
> > Aug  7 14:15:39 osd02 journal: 127.0.0.1 - - [07/Aug/2020 14:15:39] "PUT /api/_clientlun/iqn.1998-01.com.vmware:banner-21faa413 HTTP/1.1" 500 -
> > Aug  7 14:15:39 osd02 journal: _clientlun change on 127.0.0.1 failed with 500
> > Aug  7 14:15:39 osd02 journal: 127.0.0.1 - - [07/Aug/2020 14:15:39] "DELETE /api/clientlun/iqn.1998-01.com.vmware:banner-21faa413 HTTP/1.1" 500 -
>>
>> You will need to re-create that RBD image "vmware01" using the rbd CLI
>> before the iSCSI GW will function.
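>
> For example, something like this (only if the image is truly absent; re-creating
> over an existing image's data would destroy it. The size and feature list here
> are illustrative):
>
>     rbd create rbd/vmware01 --size 6T \
>         --image-feature layering,exclusive-lock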
>>
>>
>> --
>> Jason
>>


-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


