Re: cephfs 1 large omap objects

Hi Paul, Nigel,

I'm also seeing "HEALTH_WARN 6 large omap objects" warnings with cephfs
after upgrading to 14.2.4:

The affected OSDs are used (only) by the metadata pool:

POOL    ID STORED OBJECTS USED   %USED  MAX AVAIL
mds_ssd  1 64 GiB 1.74M   65 GiB 4.47   466 GiB
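
For what it's worth, the objects can be mapped back to their PG and
acting OSDs with "ceph osd map" (pool and object names below are just
the ones from our cluster):

  # show which PG and OSDs hold a given metadata pool object
  ceph osd map mds_ssd 10007b4b304.02400000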

See below for more log details.

While I'm glad we can silence the warning, should I be worried about the
values reported in the log causing real problems?

many thanks

Jake

[root@ceph1 ~]# zgrep "Large omap object found" /var/log/ceph/ceph.log*

/var/log/ceph/ceph.log-20191022.gz:2019-10-21 15:43:45.800608 osd.2 (osd.2)
262 : cluster [WRN] Large omap object found. Object:
1:e5134dd5:::10007b4b304.02400000:head Key count: 524005 Size (bytes):
242090310
/var/log/ceph/ceph.log-20191022.gz:2019-10-21 15:43:48.440425 osd.2
(osd.2) 263 : cluster [WRN] Large omap object found. Object:
1:e5347802:::1000861ecf6.00000000:head Key count: 395404 Size (bytes):
182676204
/var/log/ceph/ceph.log-20191025.gz:2019-10-24 23:53:25.348227 osd.2
(osd.2) 58 : cluster [WRN] Large omap object found. Object:
1:2f12e2d8:::10007b4b304.01800000:head Key count: 1041988 Size (bytes):
481398012
/var/log/ceph/ceph.log-20191026.gz:2019-10-25 10:54:57.478636 osd.2
(osd.2) 69 : cluster [WRN] Large omap object found. Object:
1:effe741b:::1000763dfe6.00000000:head Key count: 640788 Size (bytes):
296043612
/var/log/ceph/ceph.log-20191026.gz:2019-10-25 19:57:11.894099 osd.3
(osd.3) 326 : cluster [WRN] Large omap object found. Object:
1:4b4f7436:::10007b4b304.02000000:head Key count: 522689 Size (bytes):
241482318
/var/log/ceph/ceph.log-20191027.gz:2019-10-27 02:30:10.648346 osd.3
(osd.3) 351 : cluster [WRN] Large omap object found. Object:
1:a47c6896:::1000894a736.00000000:head Key count: 768126 Size (bytes):
354873768
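
If I understand the object naming correctly, the part before the dot is
the directory inode number in hex, so something like this should find
the directory in question (assuming the filesystem is mounted at
/cephfs, which is just an example path):

  # convert the hex inode from the object name to decimal and
  # search the mounted filesystem for the matching directory
  find /cephfs -xdev -inum $(printf '%d' 0x10007b4b304) -type d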
On 10/8/19 10:27 AM, Paul Emmerich wrote:
> Hi,
> 
> the default for this warning changed recently (see other similar
> threads on the mailing list); it was 2 million before 14.2.3.
> 
> I don't think the new default of 200k is a good choice, so increasing
> it is a reasonable work-around.
> 
> Paul
> 
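
For the archives: the setting Paul refers to is, as far as I can tell,
osd_deep_scrub_large_omap_object_key_threshold, so restoring the old
pre-14.2.3 value would be something like:

  # raise the per-object omap key warning threshold back to 2 million
  ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000

Note the warning should only clear once the affected PGs have been
deep-scrubbed again.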


-- 
Jake Grimmett
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


