Yeah, regarding data distribution, increasing the pg_num of the data
pool is recommended. But could you also share the output of:
ceph pg ls-by-pool default.rgw.meta
That's where the large omap was reported; maybe you'll need to
increase the pg_num for that pool as well. Personally, I always
disable the autoscaler.
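If you go that route, disabling the autoscaler and raising pg_num per
pool would look something like this (the target value of 128 is only
an example, size it for your OSD count):

ceph osd pool set default.rgw.meta pg_autoscale_mode off
ceph osd pool set default.rgw.meta pg_num 128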
Zitat von Mark Johnson <markj@xxxxxxxxx>:
Thanks Bailey,
With regards to the PG count, we've been relying on PG autoscale, and
it is currently enabled. I figure I'd need to disable autoscale and
manually increase the PG count on the default.rgw.buckets.data pool,
correct? We're coming from our existing clusters running Jewel to
this new Quincy cluster and have no prior experience with autoscale,
so we were just assuming autoscale would manage PG counts better
than we would manually. As you can probably guess, we don't have
much experience with Ceph.
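(I'm assuming that would be something along the lines of:

ceph osd pool set default.rgw.buckets.data pg_autoscale_mode off
ceph osd pool set default.rgw.buckets.data pg_num <new value>

but please correct me if I've got that wrong.)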
Regards,
Mark Johnson
On Mon, 2023-07-31 at 21:54 -0300, Bailey Allison wrote:
Hi,
It appears you have quite a low PG count on your cluster (approx. 20
PGs per OSD).
It is usually recommended to have about 100-150 per OSD. With a
lower PG count you can have issues balancing data, which can cause
errors such as large OMAP objects.
It might not be the fix in this case, but either way I would still
recommend increasing the PG count on your pools.
If you look at the OMAP column in your ceph osd df output, you can
see that some OSDs hold 2 GB while others hold 500 MB. Even for data,
some drives are holding 900 GB while others hold 2 TB.
Once the PG count is increased, you will also have to issue a
deep scrub on the PGs to get updated OMAP data.
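For example, something like this for each PG in the pool (you can
list the PG IDs with "ceph pg ls-by-pool <pool>"):

ceph pg deep-scrub 5.16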
Regards,
Bailey
-----Original Message-----
From: Mark Johnson <markj@xxxxxxxxx>
Sent: July 31, 2023 9:01 PM
To: eblock@xxxxxx; ceph-users@xxxxxxx
Subject: Re: 1 Large omap object found
Sure thing. Thanks for the reply.
ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd    291 TiB  244 TiB  47 TiB    47 TiB      16.02
TOTAL  291 TiB  244 TiB  47 TiB    47 TiB      16.02

--- POOLS ---
POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr                        1    1  459 MiB      116  1.3 GiB      0     65 TiB
.rgw.root                   2   32  1.3 KiB        4   48 KiB      0     65 TiB
default.rgw.log             3   32  5.3 KiB      209  468 KiB      0     65 TiB
default.rgw.control         4   32      0 B        8      0 B      0     65 TiB
default.rgw.meta            5   32  452 MiB  828.75k   10 GiB      0     65 TiB
default.rgw.buckets.index   6   32   17 GiB    4.56M   51 GiB   0.03     65 TiB
default.rgw.buckets.data    7  128   15 TiB   54.51M   46 TiB  19.24     65 TiB
cephfs_metadata             8   16  258 MiB       98  775 MiB      0     65 TiB
cephfs_data                 9   32  1.9 GiB      998  5.6 GiB      0     65 TiB
ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP      META     AVAIL    %USE   VAR   PGS  STATUS
 0  hdd    7.27739   1.00000  7.3 TiB  1.6 TiB  1.6 TiB   550 MiB   12 GiB  5.7 TiB  21.70  1.35   21  up
 1  hdd    7.27739   1.00000  7.3 TiB  995 GiB  986 GiB   1.1 GiB  7.6 GiB  6.3 TiB  13.35  0.83   28  up
 2  hdd    7.27739   1.00000  7.3 TiB  996 GiB  986 GiB   2.1 GiB  7.9 GiB  6.3 TiB  13.37  0.83   22  up
 3  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   513 MiB   10 GiB  5.9 TiB  18.35  1.15   28  up
 4  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   527 MiB  8.3 GiB  6.2 TiB  15.02  0.94   22  up
 5  hdd    7.27739   1.00000  7.3 TiB  1.8 TiB  1.8 TiB   1.5 GiB   14 GiB  5.5 TiB  25.01  1.56   28  up
 6  hdd    7.27739   1.00000  7.3 TiB  746 GiB  739 GiB   1.0 GiB  5.8 GiB  6.5 TiB  10.01  0.63   20  up
 7  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   1.1 GiB  8.8 GiB  6.2 TiB  15.04  0.94   20  up
 8  hdd    7.27739   1.00000  7.3 TiB  871 GiB  864 GiB   544 MiB  6.7 GiB  6.4 TiB  11.69  0.73   27  up
 9  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   3.3 GiB   11 GiB  5.9 TiB  18.37  1.15   28  up
30  hdd    7.27739   1.00000  7.3 TiB  1.8 TiB  1.8 TiB   1.6 GiB   14 GiB  5.5 TiB  25.01  1.56   35  up
31  hdd    7.27739   1.00000  7.3 TiB  747 GiB  739 GiB   2.2 GiB  6.2 GiB  6.5 TiB  10.03  0.63   20  up
32  hdd    7.27739   1.00000  7.3 TiB  996 GiB  987 GiB   1.5 GiB  7.9 GiB  6.3 TiB  13.37  0.83   26  up
33  hdd    7.27739   1.00000  7.3 TiB  995 GiB  985 GiB   1.5 GiB  7.7 GiB  6.3 TiB  13.35  0.83   25  up
34  hdd    7.27739   1.00000  7.3 TiB  750 GiB  742 GiB   2.1 GiB  5.7 GiB  6.5 TiB  10.07  0.63   25  up
35  hdd    7.27739   1.00000  7.3 TiB  2.1 TiB  2.0 TiB   571 MiB   15 GiB  5.2 TiB  28.36  1.77   34  up
36  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   1.5 GiB   10 GiB  5.9 TiB  18.37  1.15   31  up
37  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   524 MiB  8.2 GiB  6.2 TiB  14.99  0.94   26  up
38  hdd    7.27739   1.00000  7.3 TiB  1.6 TiB  1.6 TiB   1.6 GiB   12 GiB  5.7 TiB  21.70  1.35   28  up
39  hdd    7.27739   1.00000  7.3 TiB  1.5 TiB  1.4 TiB   2.4 GiB   11 GiB  5.8 TiB  20.04  1.25   30  up
10  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   1.6 GiB   10 GiB  5.9 TiB  18.34  1.14   26  up
12  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB     1 KiB  9.9 GiB  5.9 TiB  18.37  1.15   25  up
14  hdd    7.27739   1.00000  7.3 TiB  1.5 TiB  1.4 TiB   593 MiB   10 GiB  5.8 TiB  19.98  1.25   22  up
16  hdd    7.27739   1.00000  7.3 TiB  997 GiB  987 GiB   2.2 GiB  7.5 GiB  6.3 TiB  13.38  0.84   19  up
18  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   1.1 GiB  8.7 GiB  6.2 TiB  15.02  0.94   26  up
20  hdd    7.27739   1.00000  7.3 TiB  1.6 TiB  1.6 TiB   1.1 GiB   12 GiB  5.7 TiB  21.68  1.35   26  up
22  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   559 MiB   10 GiB  5.9 TiB  18.34  1.14   22  up
24  hdd    7.27739   1.00000  7.3 TiB  872 GiB  864 GiB  1020 MiB  6.8 GiB  6.4 TiB  11.70  0.73   23  up
26  hdd    7.27739   1.00000  7.3 TiB  749 GiB  741 GiB   1.8 GiB  6.3 GiB  6.5 TiB  10.05  0.63   25  up
28  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   1.5 GiB   10 GiB  5.9 TiB  18.36  1.15   32  up
11  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   2.6 GiB  8.5 GiB  6.2 TiB  15.02  0.94   23  up
13  hdd    7.27739   1.00000  7.3 TiB  1.3 TiB  1.3 TiB   2.2 GiB   10 GiB  5.9 TiB  18.38  1.15   36  up
15  hdd    7.27739   1.00000  7.3 TiB  995 GiB  986 GiB   1.1 GiB  7.7 GiB  6.3 TiB  13.35  0.83   25  up
17  hdd    7.27739   1.00000  7.3 TiB  623 GiB  618 GiB   419 KiB  5.0 GiB  6.7 TiB   8.35  0.52   23  up
19  hdd    7.27739   1.00000  7.3 TiB  870 GiB  863 GiB   513 MiB  6.6 GiB  6.4 TiB  11.67  0.73   21  up
21  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   1.5 GiB  8.6 GiB  6.2 TiB  15.02  0.94   25  up
23  hdd    7.27739   1.00000  7.3 TiB  746 GiB  739 GiB   564 MiB  5.8 GiB  6.5 TiB  10.01  0.62   22  up
25  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   2.1 GiB  8.4 GiB  6.2 TiB  15.03  0.94   24  up
27  hdd    7.27739   1.00000  7.3 TiB  1.2 TiB  1.2 TiB   532 MiB  9.1 GiB  6.1 TiB  16.68  1.04   23  up
29  hdd    7.27739   1.00000  7.3 TiB  1.1 TiB  1.1 TiB   1.1 GiB  8.4 GiB  6.2 TiB  14.99  0.94   19  up
                     TOTAL    291 TiB   47 TiB   46 TiB    51 GiB  359 GiB  244 TiB  16.02
MIN/MAX VAR: 0.52/1.77  STDDEV: 4.56
On Mon, 2023-07-31 at 09:22 +0000, Eugen Block wrote:
Hi,
can you share some more details like 'ceph df' and 'ceph osd df'? I
don't have too much advice yet, but to see all entries in your meta
pool you need to add the --all flag, because those objects are stored
in namespaces:
rados -p default.rgw.meta ls --all
That pool contains user and bucket information (example):
# rados -p default.rgw.meta ls --all
users.uid    admin.buckets
users.keys   c0fba3ea7d9c4321b5205752c85baa85
users.uid    admin
users.keys   JBWPRAPP1AQG471AMGC4
users.uid    e434b82737cf4138b899c0785b49112d.buckets
users.uid    e434b82737cf4138b899c0785b49112d
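You could also check the key count of the reported object directly.
Based on the object name from your warning (adjust if yours differs),
something like:

rados -p default.rgw.meta -N users.uid listomapkeys 'callrecordings$callrecordings_rw.buckets' | wc -l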
Zitat von Mark Johnson <markj@xxxxxxxxx>:
I've been going round and round in circles trying to work this one
out, but I'm getting nowhere. We're running a 4-node Quincy cluster
(17.2.6) which recently reported the following:
ceph.log-20230729.gz:2023-07-28T08:31:42.390003+0000 osd.26 (osd.26) 13834 : cluster [WRN] Large omap object found. Object: 5:6c65dd84:users.uid::callrecordings$callrecordings_rw.buckets:head PG: 5.21bba636 (5.16) Key count: 378454 Size (bytes): 75565579
This happened a week or so ago (though the key count was only just
over the 200,000 threshold on that occasion) and, after much
searching around, I found an article suggesting that a deep scrub on
the PG would likely resolve the issue, so I forced a deep scrub and,
shortly after, the warning cleared. I came into the office today to
discover the above. It's on the same PG as before, which is in the
default.rgw.meta pool. This time, after forcing a deep scrub on
that PG, nothing changed. I did it a second time just to be sure
but got the same result.
I keep finding a SUSE article that simply suggests increasing the
threshold to the previous default of 2,000,000, but other articles
I've read say it was lowered for a reason and that by the time the
count hits that figure it's too late, so I don't want to just mask it.
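(For reference, I believe the setting in question is
osd_deep_scrub_large_omap_object_key_threshold, i.e. something like:

ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

but as I said, I'd rather understand the problem than mask it.)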
The problem is that I don't really understand it. I found a thread
here from a bit over two years ago, but their issue was in the
default.rgw.buckets.index pool. A step in their solution was to list
out the problematic object ID and check the objects per shard;
however, if I issue the command "rados -p default.rgw.meta ls" it
returns nothing. I get a big list from "rados -p
default.rgw.buckets.index ls", just nothing from the first pool. I
think it may be because the meta pool isn't indexed, based on
something I read, but I really don't know what I'm talking about,
to be honest.
I don't know if this is helpful, but if I list out all the PGs for
that pool, there are 32 PGs; 5.16 shows 80186950 bytes and 401505
keys, and PG 5.c has 75298 bytes and 384 keys. The remaining 30 PGs
show zero bytes and zero keys. I'm really not sure how to
troubleshoot and resolve from here. For the record, dynamic
resharding is enabled, in that no options have been set in the
config and that is the default setting.
Based on the SUSE article I mentioned, which also references the
default.rgw.meta pool, I'm gathering our issue is that we have so
many buckets all owned by the one user, and the solution is one of:
* delete unused buckets
* create multiple users and spread buckets evenly across all users
(not something we can do)
* increase the threshold to stop the warning
The problem is that I'm having trouble verifying this is the issue.
I've tried dumping out bucket stats to a file (radosgw-admin bucket
stats > bucket_stats.txt), but after three hours this is still
running with no output.
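(If there's a quicker way to just count the buckets owned by that
user, I'd be glad to hear it. I'm guessing something like the
following might work, assuming the user from the warning is
callrecordings$callrecordings_rw, but I haven't verified it:

radosgw-admin bucket list --uid='callrecordings$callrecordings_rw' | wc -l)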
Thanks for your time,
Mark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx