[Nautilus] no data on secondary zone after bucket reshard.

Hello all,

We have a replicated multisite cluster running Nautilus. Recently, we
resharded a bucket without stopping the gateways; as a consequence, the
bucket on the secondary zone now reports 0 KB of usage, even though the
objects are still visible in the data pool.

I was able to reproduce the issue in a lab, so it's easy to show. I
uploaded around 3000 empty text files (a rough sketch of the steps is
below) and then compared the two clusters:
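
For reference, the lab reproduction went roughly like this (s3cmd is just
the example client I'm showing here; the shard count matches the stats
below):

# upload a few thousand empty objects to the bucket on the master zone
touch empty.txt
for i in $(seq 1 3000); do
    s3cmd put empty.txt s3://testbucket/file-$i.txt
done

# then reshard manually while the radosgw daemons are still running
radosgw-admin bucket reshard --bucket=testbucket --num-shards=24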

*Bucket Stats:*

Cluster 1:
 radosgw-admin bucket stats --bucket=testbucket
{
    "bucket": "testbucket",
    "num_shards": 24,
    "tenant": "",
    "zonegroup": "e184d7c4-0ea3-46ee-9aab-52ff7508f533",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "604ead4e-53ec-43d4-897e-991bb7df4649.17288.1",
    "marker": "604ead4e-53ec-43d4-897e-991bb7df4649.6133.1",
    "index_type": "Normal",
    "owner": "testuser",
    "ver":
"0#6,1#8,2#6,3#8,4#6,5#8,6#8,7#6,8#6,9#8,10#6,11#6,12#8,13#6,14#6,15#8,16#6,17#6,18#6,19#6,20#10,21#12,22#8,23#6",
    "master_ver":
"0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0",
    "mtime": "2021-07-15 16:56:38.760765Z",
    "max_marker":
"0#00000000005.18.1,1#00000000007.54.5,2#00000000005.7.1,3#00000000007.54.5,4#00000000005.7.1,5#00000000007.54.5,6#00000000007.107.5,7#00000000005.15.1,8#00000000005.7.1,9#00000000007.109.5,10#00000000005.21.1,11#00000000005.7.1,12#00000000007.156.5,13#00000000005.7.1,14#00000000005.21.1,15#00000000007.160.5,16#00000000005.14.1,17#00000000005.23.1,18#00000000005.15.1,19#00000000005.22.1,20#00000000009.57.5,21#00000000011.58.5,22#00000000007.163.5,23#00000000005.15.1",
*    "usage": {*
*        "rgw.main": {*
*            "size": 0,*
*            "size_actual": 0,*
*            "size_utilized": 0,*
*            "size_kb": 0,*
*            "size_kb_actual": 0,*
*            "size_kb_utilized": 0,*
*            "num_objects": 3948*
*        }*
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
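
Note that the zero sizes on the master are expected, since the uploaded
files are empty; the anomaly on the secondary below is that the usage
section comes back empty altogether, with no rgw.main entry and no object
count at all.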

Cluster 2:
radosgw-admin bucket stats --bucket=testbucket
{
    "bucket": "testbucket",
    "num_shards": 24,
    "tenant": "",
    "zonegroup": "e184d7c4-0ea3-46ee-9aab-52ff7508f533",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "604ead4e-53ec-43d4-897e-991bb7df4649.17288.1",
    "marker": "604ead4e-53ec-43d4-897e-991bb7df4649.6133.1",
    "index_type": "Normal",
    "owner": "testuser",
    "ver":
"0#1,1#5,2#1,3#5,4#1,5#5,6#5,7#1,8#1,9#5,10#1,11#1,12#5,13#1,14#1,15#5,16#1,17#1,18#1,19#1,20#9,21#13,22#5,23#1",
    "master_ver":
"0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0",
    "mtime": "2021-07-15 16:56:38.760765Z",
    "max_marker":
"0#,1#00000000004.17.5,2#,3#00000000004.18.5,4#,5#00000000004.18.5,6#00000000004.31.5,7#,8#,9#00000000004.32.5,10#,11#,12#00000000004.126.5,13#,14#,15#00000000004.46.5,16#,17#,18#,19#,20#00000000008.21.5,21#00000000012.26.5,22#00000000004.46.5,23#",
*    "usage": {*
*        }*
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
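
In case it helps with diagnosis, the per-bucket sync state and the bucket
instance record (using the "id" from the stats above) can be compared
between the zones with something like:

# per-bucket sync status, as seen from the secondary zone
radosgw-admin bucket sync status --bucket=testbucket

# the bucket instance metadata; the id should match on both sides
radosgw-admin metadata get \
    bucket.instance:testbucket:604ead4e-53ec-43d4-897e-991bb7df4649.17288.1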

*RADOS DF*

Cluster 1:
POOL_NAME                           USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED    RD_OPS        RD    WR_OPS        WR  USED COMPR  UNDER COMPR
.rgw.root                        3.2 MiB       18       0      54                   0        0         0       792   824 KiB        50    35 KiB         0 B          0 B
*master-zone.rgw.buckets.data        0 B     3935       0   11805                   0        0         0     11723   7.4 MiB     47387       0 B         0 B          0 B*
master-zone.rgw.buckets.index    355 KiB       25       0      75                   0        0         0     26544    38 MiB     27971    19 MiB         0 B          0 B
master-zone.rgw.control              0 B        8       0      24                   0        0         0         0       0 B         0       0 B         0 B          0 B
master-zone.rgw.log               25 MiB      672       0    2016                   0        0         0   1880845   1.8 GiB   1088822   3.6 MiB         0 B          0 B
master-zone.rgw.meta             1.7 MiB       10       0      30                   0        0         0      7877   5.5 MiB        75    30 KiB         0 B          0 B
master-zone.rgw.otp                  0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.buckets.extra           0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.buckets.index           0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.buckets.non-ec          0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.control                 0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.data.root               0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.gc                      0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.intent-log              0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.log                     0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.meta                    0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.usage                   0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.email             0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.keys              0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.swift             0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.uid               0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B

total_objects    4668
total_used       32 GiB
total_avail      448 GiB
total_space      480 GiB
Cluster 2:
[root@cloud-lab-rgw04 ~]# rados df
POOL_NAME                             USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED    RD_OPS        RD    WR_OPS        WR  USED COMPR  UNDER COMPR
.rgw.root                          3.2 MiB       18       0      54                   0        0         0      5283   5.6 MiB        70    60 KiB         0 B          0 B
ncsa-ne1.rgw.buckets.data              0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.buckets.extra             0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.buckets.index             0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.buckets.non-ec            0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.control                   0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.data.root                 0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.gc                        0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.intent-log                0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.log                       0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.meta                      0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.usage                     0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.email               0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.keys                0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.swift               0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
ncsa-ne1.rgw.users.uid                 0 B        0       0       0                   0        0         0         0       0 B         0       0 B         0 B          0 B
*secondary-zone.rgw.buckets.data        0 B     3935       0   11805                   0        0         0       242   140 KiB     55283       0 B         0 B          0 B*
secondary-zone.rgw.buckets.index   2.9 MiB       26       0      78                   0        0         0     35120    36 MiB     24034    15 MiB         0 B          0 B
secondary-zone.rgw.control             0 B        8       0      24                   0        0         0         0       0 B         0       0 B         0 B          0 B
secondary-zone.rgw.log              37 MiB      766       0    2298                   0        0         0   2307976   2.2 GiB   1459774   3.1 MiB         0 B          0 B
secondary-zone.rgw.meta            1.7 MiB       11       0      33                   0        0         0      8868   6.6 MiB       131    44 KiB         0 B          0 B

total_objects    4764
total_used       31 GiB
total_avail      449 GiB
total_space      480 GiB
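
As mentioned above, the objects themselves are still present in the
secondary zone's data pool; a raw listing shows them (as far as I
understand, the rados-level names are prefixed with the bucket marker):

# run on Cluster2: list raw objects carrying testbucket's marker
rados -p secondary-zone.rgw.buckets.data ls \
    | grep 604ead4e-53ec-43d4-897e-991bb7df4649.6133.1 | head -n 5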


Is there a way I can fix this situation and make Cluster2 realize it still
has the objects belonging to "testbucket"? Or, failing that, is there a way
to redo the synchronization of that bucket from scratch? I already tried
"data sync init" and "metadata sync init", but they didn't do the trick.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


