More information: I have an over-limit bucket and the error belongs to this bucket.

fill_status=OVER 100%
objects_per_shard: 363472 (I use the default 100K per shard)
num_shards: 750

I'm deleting objects from this bucket by absolute path, and I don't use dynamic bucket resharding because of multisite. I've reviewed the code and I think the error occurs because these objects no longer exist in the index. Can anyone explain the code and the error, please?

OSD LOG:
cls_rgw.cc:1102: ERROR: read_key_entry() idx=�1000_matches/xdir/05/21/27260.jpg ret=-2

https://github.com/ceph/ceph/blob/master/src/cls/rgw/cls_rgw.cc

public:
  BIVerObjEntry(cls_method_context_t& _hctx, const cls_rgw_obj_key& _key)
    : hctx(_hctx), key(_key), initialized(false) {
  }

  int init(bool check_delete_marker = true) {
    int ret = read_key_entry(hctx, key, &instance_idx, &instance_entry,
                             check_delete_marker && key.instance.empty());
    /* this is potentially a delete marker, for null objects we keep
       separate instance entry for the delete markers */
    if (ret < 0) {
      CLS_LOG(0, "ERROR: read_key_entry() idx=%s ret=%d", instance_idx.c_str(), ret);
      return ret;
    }

    initialized = true;
    CLS_LOG(20, "read instance_entry key.name=%s key.instance=%s flags=%d",
            instance_entry.key.name.c_str(), instance_entry.key.instance.c_str(),
            instance_entry.flags);
    return 0;
  }

  rgw_bucket_dir_entry& get_dir_entry() {
    return instance_entry;
  }
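If I read it correctly, ret=-2 is -ENOENT: read_key_entry() looks the key up in the index shard object's omap and the entry simply isn't there, which would fit my theory that these objects are already gone from the index. (The leading "�" seems to be the unprintable 0x80 prefix byte and "1000_" the object-instance prefix, so these look like versioned-bucket instance entries.)

To double-check, here is a rough librados sketch that scans one index shard's omap keys for a given object name. The pool name and the shard object id are placeholders/assumptions to adjust for the cluster; it only confirms whether an entry exists, nothing more:

// Sketch only. Assumptions: default index pool name and a placeholder shard oid.
// Build with: g++ -std=c++11 check_index.cc -lrados -o check_index
#include <rados/librados.hpp>
#include <iostream>
#include <set>
#include <string>

int main() {
  const std::string pool      = "default.rgw.buckets.index";    // assumption: default zone index pool
  const std::string shard_oid = ".dir.<bucket_id>.<shard>";     // placeholder: one bucket index shard object
  const std::string wanted    = "matches/xdir/05/21/27260.jpg"; // object name to look for

  librados::Rados cluster;
  if (cluster.init2("client.admin", "ceph", 0) < 0 ||
      cluster.conf_read_file(nullptr) < 0 ||   // read the usual ceph.conf locations
      cluster.connect() < 0) {
    std::cerr << "failed to connect to the cluster" << std::endl;
    return 1;
  }

  librados::IoCtx ioctx;
  if (cluster.ioctx_create(pool.c_str(), ioctx) < 0) {
    std::cerr << "failed to open pool " << pool << std::endl;
    return 1;
  }

  // Walk the shard object's omap keys (where cls_rgw stores the bucket index
  // entries) and report anything that contains the object name.
  std::set<std::string> keys;
  std::string start_after;
  bool found = false;
  do {
    keys.clear();
    int r = ioctx.omap_get_keys(shard_oid, start_after, 1000, &keys);
    if (r < 0) {
      std::cerr << "omap_get_keys failed: " << r << std::endl;
      return 1;
    }
    for (const auto& k : keys) {
      if (k.find(wanted) != std::string::npos) {
        std::cout << "found index entry: " << k << std::endl;
        found = true;
      }
      start_after = k;
    }
  } while (!keys.empty() && !found);

  if (!found)
    std::cout << "no index entry for " << wanted
              << " (read_key_entry() would return -ENOENT / -2)" << std::endl;
  return 0;
}

"radosgw-admin bi list --bucket=<bucket>" should show the same entries in a friendlier form; the sketch is just to look at the raw omap keys that cls_rgw reads.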
On Thu, 15 Apr 2021 at 02:19, morphin <morphinwithyou@xxxxxxxxx> wrote:

> Hello everyone!
>
> I'm running Nautilus 14.2.16 and I'm using RGW with the Beast frontend.
> I see this error log on every SSD OSD that is used for the RGW index.
> Can you please tell me what the problem is?
>
> OSD LOG:
> cls_rgw.cc:1102: ERROR: read_key_entry() idx=�1000_matches/xdir/05/21/27260.jpg ret=-2
> cls_rgw.cc:1102: ERROR: read_key_entry() idx=�1000_matches/xdir/05/21/27253.jpg ret=-2
>
> RADOSGW LOG:
> 2021-04-15 01:53:54.385 7f2e0f8e7700 1 beast: 0x55a4439f8710: 10.151.101.15 - - [2021-04-15 01:53:54.0.385327s] "HEAD /xdir/04/13/704745.jpg HTTP/1.1" 200 0 - "aws-sdk-java/1.11.638 Linux/3.10.0-1062.12.1.el7.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.251-b08 java/1.8.0_251 groovy/2.4.3 vendor/Oracle_Corporation" -
> 2021-04-15 01:53:54.385 7f2d8b7df700 1 ====== starting new request req=0x55a4439f8710 =====
> 2021-04-15 01:53:54.405 7f2e008c9700 1 ====== req done req=0x55a43dbc6710 op status=0 http_status=204 latency=0.300003s ======
> 2021-04-15 01:53:54.405 7f2e008c9700 1 beast: 0x55a43dbc6710: 10.151.101.15 - - [2021-04-15 01:53:54.0.405327s] "DELETE /xdir/05/21/21586.gz HTTP/1.1" 204 0 - "aws-sdk-java/1.11.638 Linux/3.10.0-1062.12.1.el7.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.251-b08 java/1.8.0_251 groovy/2.4.3 vendor/Oracle_Corporation" -
> 2021-04-15 01:53:54.405 7f2d92fee700 1 ====== starting new request req=0x55a43dbc6710 =====
> 2021-04-15 01:53:54.405 7f2d92fee700 0 WARNING: couldn't find acl header for object, generating default
> 2021-04-15 01:53:54.405 7f2d92fee700 1 ====== req done req=0x55a43dbc6710 op status=0 http_status=200 latency=0s ======
> 2021-04-15 01:53:54.405 7f2d92fee700 1 beast: 0x55a43dbc6710: 10.151.101.15 - - [2021-04-15 01:53:54.0.405327s] "HEAD /xdir/2013/11/20/2a67508e-d7dd-4e0f-b959-d7575d5f65b1 HTTP/1.1" 200 0 - "aws-sdk-java/1.11.638 Linux/3.10.0-1160.11.1.el7.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.281-b09 java/1.8.0_281 groovy/2.5.6 vendor/Oracle_Corporation" -
>
> CEPH OSD DF
>  ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS
>  19  ssd  0.87320 1.00000  894 GiB 436 GiB 101 GiB 332 GiB 2.5 GiB 458 GiB 48.75 1.84 115 up
> 208  ssd  0.87329 1.00000  894 GiB 161 GiB  87 GiB  73 GiB 978 MiB 733 GiB 18.00 0.68 113 up
> 199  ssd  0.87320 1.00000  894 GiB 272 GiB 106 GiB 163 GiB 2.4 GiB 623 GiB 30.37 1.14 123 up
> 202  ssd  0.87329 1.00000  894 GiB 239 GiB  73 GiB 165 GiB 1.4 GiB 655 GiB 26.77 1.01 106 up
>  39  ssd  0.87320 1.00000  894 GiB 450 GiB  87 GiB 361 GiB 2.3 GiB 444 GiB 50.36 1.90 113 up
> 207  ssd  0.87329 1.00000  894 GiB 204 GiB 100 GiB  98 GiB 6.0 GiB 691 GiB 22.76 0.86 118 up
>  59  ssd  0.87320 1.00000  894 GiB 372 GiB 107 GiB 263 GiB 3.0 GiB 522 GiB 41.64 1.57 122 up
> 203  ssd  0.87329 1.00000  894 GiB 206 GiB  79 GiB 124 GiB 2.4 GiB 689 GiB 23.00 0.87 117 up
>  79  ssd  0.87320 1.00000  894 GiB 447 GiB 103 GiB 342 GiB 1.8 GiB 447 GiB 49.97 1.88 120 up
> 206  ssd  0.87329 1.00000  894 GiB 200 GiB  81 GiB 119 GiB 1.0 GiB 694 GiB 22.38 0.84  94 up
>  99  ssd  0.87320 1.00000  894 GiB 333 GiB  87 GiB 244 GiB 2.0 GiB 562 GiB 37.19 1.40 106 up
> 205  ssd  0.87329 1.00000  894 GiB 316 GiB  83 GiB 232 GiB 1.1 GiB 579 GiB 35.29 1.33 117 up
> 114  ssd  0.87329 1.00000  894 GiB 256 GiB 100 GiB 154 GiB 1.7 GiB 638 GiB 28.61 1.08 113 up
> 200  ssd  0.87329 1.00000  894 GiB 266 GiB 100 GiB 165 GiB 1.1 GiB 628 GiB 29.76 1.12 128 up
> 139  ssd  0.87320 1.00000  894 GiB 234 GiB  79 GiB 153 GiB 1.7 GiB 660 GiB 26.14 0.98 104 up
> 204  ssd  0.87329 1.00000  894 GiB 173 GiB 113 GiB  59 GiB 1.2 GiB 721 GiB 19.37 0.73 124 up
> 119  ssd  0.87329 1.00000  894 GiB 248 GiB 108 GiB 139 GiB 1.9 GiB 646 GiB 27.76 1.05 130 up
> 159  ssd  0.87329 1.00000  894 GiB 196 GiB  94 GiB  99 GiB 2.6 GiB 699 GiB 21.87 0.82 109 up
> 179  ssd  0.87329 1.00000  894 GiB 427 GiB  81 GiB 341 GiB 4.7 GiB 467 GiB 47.73 1.80 114 up
> 201  ssd  0.87329 1.00000  894 GiB 346 GiB 102 GiB 242 GiB 1.8 GiB 548 GiB 38.71 1.46 128 up
>
> CEPH IOSTAT
> +-----------+-----------+-----------+-----------+------------+------------+
> |   Read    |   Write   |   Total   | Read IOPS | Write IOPS | Total IOPS |
> +-----------+-----------+-----------+-----------+------------+------------+
> | 329 MiB/s |  39 MiB/s | 368 MiB/s |    109027 |       1646 |     110673 |
> | 329 MiB/s |  39 MiB/s | 368 MiB/s |    109027 |       1646 |     110673 |
> | 331 MiB/s |  39 MiB/s | 371 MiB/s |    114915 |       1631 |     116547 |
> | 331 MiB/s |  39 MiB/s | 371 MiB/s |    114915 |       1631 |     116547 |
> | 308 MiB/s |  42 MiB/s | 350 MiB/s |    108469 |       1635 |     110104 |
> | 308 MiB/s |  42 MiB/s | 350 MiB/s |    108469 |       1635 |     110104 |
> | 291 MiB/s |  44 MiB/s | 335 MiB/s |    105828 |       1687 |     107516 |