Thank you! This was helpful.
I opted for a manual reshard:
[root@cephmon-s03 ~]# radosgw-admin bucket reshard --bucket=d2ff913f5b6542cda307c9cd6a95a214/NAME_segments --num-shards=3
tenant: d2ff913f5b6542cda307c9cd6a95a214
bucket name: backups_sql_dswhseloadrepl_segments
old bucket instance id: 80bdfc66-d1fd-418d-b87d-5c8518a0b707.340850308.51
new bucket instance id: 80bdfc66-d1fd-418d-b87d-5c8518a0b707.948621036.1
total entries: 1000 2000 3000 3228
2020-10-22 08:40:26.353 7fb197fc66c0 1 execute INFO: reshard of bucket "backups_sql_dswhseloadrepl_segments" from "d2ff913f5b6542cda307c9cd6a95a214/backups_sql_dswhseloadrepl_segments:80bdfc66-d1fd-418d-b87d-5c8518a0b707.340850308.51" to "d2ff913f5b6542cda307c9cd6a95a214/backups_sql_dswhseloadrepl_segments:80bdfc66-d1fd-418d-b87d-5c8518a0b707.948621036.1" completed successfully
[root@cephmon-s03 ~]# radosgw-admin buckets reshard list
[]
[root@cephmon-s03 ~]# radosgw-admin buckets reshard status --bucket=d2ff913f5b6542cda307c9cd6a95a214/NAME_segments
[
{
"reshard_status": "not-resharding",
"new_bucket_instance_id": "",
"num_shards": -1
},
{
"reshard_status": "not-resharding",
"new_bucket_instance_id": "",
"num_shards": -1
},
{
"reshard_status": "not-resharding",
"new_bucket_instance_id": "",
"num_shards": -1
}
]
[root@cephmon-s03 ~]#
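To confirm the manual reshard actually took effect, a sketch of the follow-up checks (bucket and tenant names taken from the transcript above; exact output fields vary between Ceph releases):

```shell
# Sketch: verify the new shard layout after a manual reshard.
# Bucket/tenant names are the ones from the transcript above;
# output formats can differ between Ceph releases.

# Per-bucket shard fill report (num_shards, objects per shard, fill_status):
radosgw-admin bucket limit check

# Bucket stats for the resharded bucket; the reported bucket instance id
# should now be the new one from the reshard output:
radosgw-admin bucket stats \
  --bucket=d2ff913f5b6542cda307c9cd6a95a214/backups_sql_dswhseloadrepl_segments
```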
This kicked off an autoscale event. Would the reshard presumably start after the autoscaling is complete?
peter
On 10/21/20, 3:19 PM, "DHilsbos@xxxxxxxxxxxxxx" <DHilsbos@xxxxxxxxxxxxxx> wrote:
Peter;
Look into bucket sharding.
Thank you,
Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
From: Peter Eisch [mailto:peter.eisch@xxxxxxxxxxxxxxx]
Sent: Wednesday, October 21, 2020 12:39 PM
To: ceph-users@xxxxxxx
Subject: Large map object found
Hi,
My rgw.buckets.index has the cluster in WARN. I'm either not understanding the real issue or I'm making it worse, or both.
OMAP_BYTES: 70461524
OMAP_KEYS: 250874
I thought I'd head this off by deleting rgw objects that would normally get deleted in the near future, but this only seemed to make the values grow. Before I deleted lots of objects, the values were:
OMAP_BYTES: 65450132
OMAP_KEYS: 209843
I read the default warning threshold is 200k keys, but I haven't found the proper way to manage this situation. What reading should I dive into? I could probably craft a command to raise the threshold and clear the warning, but I'm guessing that might not be great long-term.
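For reference, the warning is driven by the OSD option `osd_deep_scrub_large_omap_object_key_threshold` (200000 keys by default in recent releases); raising it hides the symptom rather than fixing it. The usual fix is more index shards. A common rule of thumb (my assumption, not from this thread) is to keep each shard well under ~100k keys, which for the key counts above suggests:

```shell
# Rule of thumb (assumption, not from this thread): keep each bucket index
# shard well under ~100k omap keys, so no single index object trips the
# large-omap warning threshold.
omap_keys=250874      # OMAP_KEYS reported for the index PG
keys_per_shard=100000
# Ceiling division: smallest shard count keeping each shard under the target.
num_shards=$(( (omap_keys + keys_per_shard - 1) / keys_per_shard ))
echo "$num_shards"
```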
Other errata which might matter:
Size: 3
Pool: nvme
CLASS SIZE AVAIL USED RAW USED %RAW USED
nvme 256 TiB 165 TiB 91 TiB 91 TiB 35.53
Errata: the complete statements:
PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE SINCE VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
43.d 2 0 0 0 0 70461524 250874 3070 active+clean 36m 185904'456870 185904:1357091 [99,90,48]p99 [99,90,48]p99 2020-10-21 13:53:42.102363 2020-10-21 13:53:42.102363
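To pin down which index object in that PG is carrying the omap load, a sketch (here `<index-pool>` is a placeholder for the actual rgw index pool name, and `--pgid` support depends on the rados client release):

```shell
# Sketch: find the large omap object inside PG 43.d.
# <index-pool> and <object-name> are placeholders, not real names.

# List the objects in the PG (recent rados versions accept --pgid):
rados ls --pgid 43.d

# Count omap keys on a candidate index object from that listing:
rados -p <index-pool> listomapkeys '<object-name>' | wc -l
```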
Thanks!
peter
Peter Eisch
Senior Site Reliability Engineer
T 1.612.445.5135
virginpulse.com
Australia | Bosnia and Herzegovina | Brazil | Canada | Singapore | Switzerland | United Kingdom | USA
Confidentiality Notice: The information contained in this e-mail, including any attachment(s), is intended solely for use by the designated recipient(s). Unauthorized use, dissemination, distribution, or reproduction of this message by anyone other than the intended recipient(s), or a person designated as responsible for delivering such messages to the intended recipient, is strictly prohibited and may be unlawful. This e-mail may contain proprietary, confidential or privileged information. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Virgin Pulse, Inc. If you have received this message in error, or are not the named recipient(s), please immediately notify the sender and delete this e-mail message.
v2.66
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx