Trouble reading gwcli disk state

Hi,

I would like to ask if anybody knows how to handle the gwcli status below.
- The disk state in gwcli shows as "Unknown".
- Clients are still mounting the "Unknown" disks and appear to be working normally.

Two of the rbd disks show "Unknown" instead of "Online" in gwcli.
==============================================================================================================
# gwcli ls /disks
o- disks ...................................................................... [77312G, Disks: 10]
  o- ssd-rf2 ...................................................................... [ssd-rf2 (6.0T)]
  | o- iscsi_01 ................................................. [ssd-rf2/iscsi_01 (Unknown, 3.0T)]
  | o- iscsi_02 ................................................. [ssd-rf2/iscsi_02 (Unknown, 3.0T)]
  o- ssd-rf3 ...................................................................... [ssd-rf3 (8.0T)]
    o- iscsi_pool_01 ....................................... [ssd-rf3/iscsi_pool_01 (Online, 4.0T)]
    o- iscsi_pool_02 ....................................... [ssd-rf3/iscsi_pool_02 (Online, 4.0T)]
==============================================================================================================
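
If it is relevant, my understanding (an assumption on my side, not something I have confirmed in the docs) is that gwcli reads the per-disk state through the rbd-target-api and tcmu-runner services on each gateway, so my next step is to check those services on both gateway nodes, along these lines:
==============================================================================================================
# systemctl status tcmu-runner rbd-target-api rbd-target-gw
==============================================================================================================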

Both "Lock Owner" and "State" are "Unknown" inside info session.
==============================================================================================================
# gwcli /disks/ssd-rf2/iscsi_01 info
Image                 .. iscsi_01
Ceph Cluster          .. ceph
Pool                  .. ssd-rf2
Wwn                   .. 7b441630-2868-47d2-94f1-35efea4cf258
Size H                .. 3.0T
Feature List          .. RBD_FEATURE_LAYERING
                         RBD_FEATURE_EXCLUSIVE_LOCK
                         RBD_FEATURE_OBJECT_MAP
                         RBD_FEATURE_FAST_DIFF
                         RBD_FEATURE_DEEP_FLATTEN
Snapshots             ..
Owner                 .. sds-ctt-gw1
Lock Owner            .. Unknown
State                 .. Unknown
Backstore             .. user:rbd
Backstore Object Name .. ssd-rf2.iscsi_01
Control Values
- hw_max_sectors .. 1024
- max_data_area_mb .. 8
- osd_op_timeout .. 30
- qfull_timeout .. 5
==============================================================================================================
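
Since the image has exclusive-lock enabled, I also plan to query the lock holder directly from rbd as a cross-check (this is just my intended check, I have not run it yet):
==============================================================================================================
# rbd -p ssd-rf2 lock ls iscsi_01
# rbd -p ssd-rf3 lock ls iscsi_pool_01
==============================================================================================================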

Below is the reference output from a normal rbd disk.
==============================================================================================================
# gwcli /disks/ssd-rf3/iscsi_pool_01 info
Image                 .. iscsi_pool_01
Ceph Cluster          .. ceph
Pool                  .. ssd-rf3
Wwn                   .. 20396fed-2aba-422d-99c2-8353b8910fa4
Size H                .. 4.0T
Feature List          .. RBD_FEATURE_LAYERING
                         RBD_FEATURE_EXCLUSIVE_LOCK
                         RBD_FEATURE_OBJECT_MAP
                         RBD_FEATURE_FAST_DIFF
                         RBD_FEATURE_DEEP_FLATTEN
Snapshots             ..
Owner                 .. sds-ctt-gw2
Lock Owner            .. sds-ctt-gw2
State                 .. Online
Backstore             .. user:rbd
Backstore Object Name .. ssd-rf3.iscsi_pool_01
Control Values
- hw_max_sectors .. 1024
- max_data_area_mb .. 8
- osd_op_timeout .. 30
- qfull_timeout .. 5
==============================================================================================================
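
Another comparison I have in mind (not run yet, so treat it as an assumption on my part) is the watcher list of an "Unknown" image versus an "Online" one, since I expect the owning gateway to show up as a watcher:
==============================================================================================================
# rbd status ssd-rf2/iscsi_01
# rbd status ssd-rf3/iscsi_pool_01
==============================================================================================================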

Nothing special is found in the rbd image settings.
==============================================================================================================
root@sds-ctt-mon1:/# rbd ls -p ssd-rf2
iscsi_01
iscsi_02
root@sds-ctt-mon1:/# rbd -p ssd-rf2 info iscsi_01
rbd image 'iscsi_01':
        size 3 TiB in 3145728 objects
        order 20 (1 MiB objects)
        snapshot_count: 0
        id: 272654e71f95e9
        block_name_prefix: rbd_data.272654e71f95e9
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Mar  7 05:28:55 2022
        access_timestamp: Tue May 17 02:17:16 2022
        modify_timestamp: Tue May 17 02:17:16 2022
root@sds-ctt-mon1:/# rbd -p ssd-rf3 info iscsi_pool_01
rbd image 'iscsi_pool_01':
        size 4 TiB in 4194304 objects
        order 20 (1 MiB objects)
        snapshot_count: 0
        id: 29bebcd9d3b6aa
        block_name_prefix: rbd_data.29bebcd9d3b6aa
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Tue Aug 11 02:32:37 2020
        access_timestamp: Tue May 17 02:17:31 2022
        modify_timestamp: Tue May 17 02:17:39 2022
root@sds-ctt-mon1:/#
==============================================================================================================

The cluster is healthy.
==============================================================================================================
# ceph health detail
HEALTH_OK
==============================================================================================================
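
If nothing obvious turns up, I will also look at the gateway-side logs for iscsi_01 and iscsi_02 on each gateway node, roughly like this (again, just my planned check):
==============================================================================================================
# journalctl -u rbd-target-api --since today
# journalctl -u tcmu-runner --since today
==============================================================================================================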
Looking forward to any suggestions.
Thanks.

Regards,
Icy


