RGW: Reshard index of non-master zones in multi-site

Hi,

Following the update of one secondary site from 12.2.8 to 12.2.11, the
following warning has come up.

HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
    1 large objects found in pool '.rgw.buckets.index'
    Search the cluster log for 'Large omap object found' for more details.

listomapkeys confirms this:

.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1: 2828737
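For reference, a key count like the one above can be obtained by listing the omap keys of the index object directly (pool and object names taken from the cluster log; counting via wc is my shorthand, not radosgw-admin output):

```shell
# Count omap keys on the suspect bucket index shard object
rados -p .rgw.buckets.index listomapkeys \
    .dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1 | wc -l
```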

And there's "bucket_index_max_shards = 0" in the multisite configuration map.

I ran radosgw-admin reshard on all buckets, setting the shard count to
12. Likewise, I set bucket_index_max_shards = 12 in the maps and
committed the period.  This was followed by a bi purge of the old
bucket index.
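For clarity, the steps above were along these lines per bucket (bucket name and old bucket-instance ID are from the output below; the bucket_index_max_shards change was made by editing the zonegroup JSON via get/set before committing):

```shell
# Manually reshard one bucket to 12 index shards
radosgw-admin bucket reshard --bucket=mybucket --num-shards=12

# After editing bucket_index_max_shards in the zonegroup,
# commit the period so the change propagates to all zones
radosgw-admin period update --commit

# Purge the index of the old (pre-reshard) bucket instance
radosgw-admin bi purge --bucket=mybucket \
    --bucket-id=0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1
```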

I can see that all the new index objects have been synced to all
secondaries; however, they are essentially empty.

On the master:

.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.11: 70517
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.10: 69940
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.3: 69992
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.0: 70184
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.6: 70276
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.2: 69695
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.4: 70251
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.7: 69916
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.5: 69677
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.1: 70569
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.9: 70151
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.8: 70312

On the secondaries:

.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.11: 72
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.10: 90
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.3: 42
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1: 2828737
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.0: 33
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.6: 51
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.2: 51
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.4: 54
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.7: 69
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.5: 60
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.1: 48
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.9: 60
.dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1.8: 66

I have run bucket sync, metadata sync, and data sync; nothing changes.
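Concretely, the sync attempts on the secondary were roughly as follows (the source zone name "master" here is an example, not our actual zone name):

```shell
# Re-run full metadata and data sync from the master zone
radosgw-admin metadata sync run
radosgw-admin data sync run --source-zone=master

# Inspect per-bucket sync state for the affected bucket
radosgw-admin bucket sync status --bucket=mybucket
```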

How would you synchronize the bucket index from master to secondaries?
Is it safe to remove the old index on the secondaries?

I have noticed that both the 366... and 908... IDs show up here:

# radosgw-admin metadata list bucket.instance
[
    "mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1",
    "mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1",
]

# radosgw-admin metadata get bucket:mybucket
{
    "key": "bucket:mybucket",
    "ver": {
        "tag": "_RTDLJ2lyzp0KcHkL_hE4t3Z",
        "ver": 2
    },
    "mtime": "2019-02-04 14:03:47.830500Z",
    "data": {
        "bucket": {
            "name": "mybucket",
            "marker": "0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1",
            "bucket_id": "0ef1a91a-4aee-427e-bdf8-30589abb2d3e.90887297.1",
            "tenant": "",
            "explicit_placement": {
                "data_pool": "",
                "data_extra_pool": "",
                "index_pool": ""
            }
        },
        "owner": "mybucket",
        "creation_time": "2018-03-27 19:05:22.776182Z",
        "linked": "true",
        "has_bucket_info": "false"
    }
}

The marker still points at the old 366... instance while bucket_id
points at the new 908... one.  Is this the reason why resharding hasn't
propagated?

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
