Re: Large map object found


 



Yes, the OMAP warning cleared after running the deep-scrub, and quickly at that.

Thanks again!



Peter Eisch
Senior Site Reliability Engineer
T
1.612.445.5135
virginpulse.com
Australia | Bosnia and Herzegovina | Brazil | Canada | Singapore | Switzerland | United Kingdom | USA
Confidentiality Notice: The information contained in this e-mail, including any attachment(s), is intended solely for use by the designated recipient(s). Unauthorized use, dissemination, distribution, or reproduction of this message by anyone other than the intended recipient(s), or a person designated as responsible for delivering such messages to the intended recipient, is strictly prohibited and may be unlawful. This e-mail may contain proprietary, confidential or privileged information. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Virgin Pulse, Inc. If you have received this message in error, or are not the named recipient(s), please immediately notify the sender and delete this e-mail message.
v2.66
On 10/23/20, 10:48 AM, "DHilsbos@xxxxxxxxxxxxxx" <DHilsbos@xxxxxxxxxxxxxx> wrote:

Peter;

As with many things in Ceph, I don't believe it's a hard and fast rule (i.e. a non-power-of-2 shard count will still work). I believe the concerns are performance and balance, but I can't confirm that. Perhaps someone else on the list will add their thoughts.

Has your warning gone away?

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx

From: Peter Eisch [mailto:peter.eisch@xxxxxxxxxxxxxxx]
Sent: Friday, October 23, 2020 5:41 AM
To: Dominic Hilsbos; ceph-users@xxxxxxx
Subject: Re: Large map object found

Perfect -- many thanks Dominic!

I haven't found a doc which notes that --num-shards needs to be a power of two. It isn't that I don't believe you -- I just haven't seen it written anywhere.

peter



On 10/22/20, 10:24 AM, "DHilsbos@xxxxxxxxxxxxxx" <DHilsbos@xxxxxxxxxxxxxx> wrote:

Peter;

I believe shard counts should be powers of two.

Also, resharding makes the bucket unavailable, but it completes very quickly. For that reason a manual reshard runs in the foreground rather than in the background.

Notice the statement: "reshard of bucket <bucket_name> from <original_object> to <new_object> completed successfully." It's done.

The warning notice won't go away until a scrub is completed to determine that a large OMAP object no longer exists.
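If you'd rather not wait for the next scheduled scrub, you can trigger one manually against the PG that reported the large object. A sketch, using the PG id (43.d) from the `ceph pg` output quoted further down this thread:

```shell
# Trigger an immediate deep scrub of the PG that held the large OMAP
# object; the LARGE_OMAP_OBJECTS warning clears once the deep scrub
# confirms the object no longer exceeds the thresholds.
ceph pg deep-scrub 43.d

# Then watch for the warning to drop out of the health output:
ceph health detail
```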

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx

From: Peter Eisch [mailto:peter.eisch@xxxxxxxxxxxxxxx]
Sent: Thursday, October 22, 2020 8:04 AM
To: Dominic Hilsbos; ceph-users@xxxxxxx
Subject: Re: Large map object found

Thank you! This was helpful.

I opted for a manual reshard:

[root@cephmon-s03 ~]# radosgw-admin bucket reshard --bucket=d2ff913f5b6542cda307c9cd6a95a214/NAME_segments --num-shards=3
tenant: d2ff913f5b6542cda307c9cd6a95a214
bucket name: backups_sql_dswhseloadrepl_segments
old bucket instance id: 80bdfc66-d1fd-418d-b87d-5c8518a0b707.340850308.51
new bucket instance id: 80bdfc66-d1fd-418d-b87d-5c8518a0b707.948621036.1
total entries: 1000 2000 3000 3228
2020-10-22 08:40:26.353 7fb197fc66c0 1 execute INFO: reshard of bucket "backups_sql_dswhseloadrepl_segments" from "d2ff913f5b6542cda307c9cd6a95a214/backups_sql_dswhseloadrepl_segments:80bdfc66-d1fd-418d-b87d-5c8518a0b707.340850308.51" to "d2ff913f5b6542cda307c9cd6a95a214/backups_sql_dswhseloadrepl_segments:80bdfc66-d1fd-418d-b87d-5c8518a0b707.948621036.1" completed successfully

[root@cephmon-s03 ~]# radosgw-admin buckets reshard list
[]
[root@cephmon-s03 ~]# radosgw-admin buckets reshard status --bucket=d2ff913f5b6542cda307c9cd6a95a214/NAME_segments
[
{
"reshard_status": "not-resharding",
"new_bucket_instance_id": "",
"num_shards": -1
},
{
"reshard_status": "not-resharding",
"new_bucket_instance_id": "",
"num_shards": -1
},
{
"reshard_status": "not-resharding",
"new_bucket_instance_id": "",
"num_shards": -1
}
]
[root@cephmon-s03 ~]#

This kicked off an autoscale event. Would the reshard presumably start after the autoscaling is complete?

peter




On 10/21/20, 3:19 PM, "DHilsbos@xxxxxxxxxxxxxx" <DHilsbos@xxxxxxxxxxxxxx> wrote:

This email originates outside Virgin Pulse.


Peter;

Look into bucket sharding.
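To size up the bucket before resharding, something like the following should work (bucket name is a placeholder; on older releases the shard count may only appear in `radosgw-admin metadata get bucket.instance:...` rather than in `bucket stats`):

```shell
# Show the bucket's object count and, on recent releases, its current
# shard count (num_shards)
radosgw-admin bucket stats --bucket=<tenant>/<bucket>

# List any buckets whose index is over (or nearing) the per-shard
# object limit -- a quick way to find reshard candidates
radosgw-admin bucket limit check
```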

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx

From: Peter Eisch [mailto:peter.eisch@xxxxxxxxxxxxxxx]
Sent: Wednesday, October 21, 2020 12:39 PM
To: ceph-users@xxxxxxx
Subject: Large map object found

Hi,

My rgw.buckets.index pool has the cluster in HEALTH_WARN. Either I'm not understanding the real issue, or I'm making it worse, or both.

OMAP_BYTES: 70461524
OMAP_KEYS: 250874

I thought I'd head this off by deleting rgw objects which would normally get deleted in the near future but this only seemed to make the values grow. Before I deleted lots of objects the values were:

OMAP_BYTES: 65450132
OMAP_KEYS: 209843

I've read the default threshold is 200k keys, but I haven't found the proper way to manage this situation. What reading should I dive into? I could probably craft a command to raise the threshold and clear the warning, but I'm guessing that's not great long-term.
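For reference, the warning is driven by OSD config options; raising them is possible but, as suspected above, it only hides the oversized index rather than fixing it. A sketch using the Nautilus-era option names:

```shell
# Current thresholds (defaults: 200000 keys, and a byte-sum threshold
# for total omap data per object)
ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
ceph config get osd osd_deep_scrub_large_omap_object_value_sum_threshold

# Raising the key threshold silences the warning without shrinking the
# index -- resharding the bucket is the real fix:
# ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 300000
```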

Other errata which might matter:
Size: 3
Pool: nvme
CLASS SIZE AVAIL USED RAW USED %RAW USED
nvme 256 TiB 165 TiB 91 TiB 91 TiB 35.53

Errata: the complete statements:

PG OBJECTS DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG STATE SINCE VERSION REPORTED UP ACTING SCRUB_STAMP DEEP_SCRUB_STAMP
43.d 2 0 0 0 0 70461524 250874 3070 active+clean 36m 185904'456870 185904:1357091 [99,90,48]p99 [99,90,48]p99 2020-10-21 13:53:42.102363 2020-10-21 13:53:42.102363

Thanks!

peter





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
