Re: cephfs_metadata: Large omap object found


The warning threshold recently changed; I'd just increase it in this
particular case. It just means you have a lot of open files.
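
Something like this should do it on Nautilus (untested here, and 300000 is
just an example value comfortably above the ~206k keys your scrub reported,
not a recommendation):

  # raise the omap key-count threshold that deep scrub warns on
  ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 300000

  # the warning only clears once the PG holding the object (8.2a in your log)
  # has been deep-scrubbed again
  ceph pg deep-scrub 8.2a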

I think there's some work going on to split the openfiles object into
multiple objects, so that problem should eventually be fixed.
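
If you want to double-check the size of that object yourself, counting its
omap keys directly should roughly match the key count from the scrub warning
(pool and object name taken from your log):

  # count omap keys on the rank-3 MDS open file table object
  rados -p cephfs_metadata listomapkeys mds3_openfiles.0 | wc -l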


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Feb 3, 2020 at 5:39 PM Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
>
> Hello,
>
> I'm seeing the message below on my new Ceph cluster running Nautilus, which has a CephFS with a ~100 TB copy in progress.
>
> > /var/log/ceph/artemis.log:2020-02-03 16:22:49.970437 osd.66 (osd.66) 1137 : cluster [WRN] Large omap object found. Object: 8:579bf162:::mds3_openfiles.0:head PG: 8.468fd9ea (8.2a) Key count: 206548 Size (bytes): 6691941
>
> > /var/log/ceph/artemis-osd.66.log:2020-02-03 16:22:49.966 7fe77af62700  0 log_channel(cluster) log [WRN] : Large omap object found. Object: 8:579bf162:::mds3_openfiles.0:head PG: 8.468fd9ea (8.2a) Key count: 206548 Size (bytes): 6691941
>
> I found this thread about a similar issue in the archives of the list
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/JUFYDCQ2AHFA23NFJQY743ELJHG2N5DI/
>
> But I'm not sure what I can do in my situation: can I increase osd_deep_scrub_large_omap_object_key_threshold, or is that a bad idea?
>
> Thanks for your help.
>
> Here is some useful (I guess) information:
>
> > Filesystem                          Size  Used Avail Use% Mounted on
> > 10.90.37.4,10.90.37.6,10.90.37.8:/  329T   32T  297T  10% /artemis
>
> > artemis@icitsrv5:~$ ceph -s
> >   cluster:
> >     id:     815ea021-7839-4a63-9dc1-14f8c5feecc6
> >     health: HEALTH_WARN
> >             1 large omap objects
> >
> >   services:
> >     mon: 3 daemons, quorum iccluster003,iccluster005,iccluster007 (age 2w)
> >     mgr: iccluster021(active, since 7h), standbys: iccluster009, iccluster023
> >     mds: cephfs:5 5 up:active
> >     osd: 120 osds: 120 up (since 5d), 120 in (since 5d)
> >     rgw: 8 daemons active (iccluster003.rgw0, iccluster005.rgw0, iccluster007.rgw0, iccluster013.rgw0, iccluster015.rgw0, iccluster019.rgw0, iccluster021.rgw0, iccluster023.rgw0)
> >
> >   data:
> >     pools:   10 pools, 2161 pgs
> >     objects: 72.02M objects, 125 TiB
> >     usage:   188 TiB used, 475 TiB / 662 TiB avail
> >     pgs:     2157 active+clean
> >              4    active+clean+scrubbing+deep
> >
> >   io:
> >     client:   31 KiB/s rd, 803 KiB/s wr, 31 op/s rd, 184 op/s wr
>
> > artemis@icitsrv5:~$ ceph health detail
> > HEALTH_WARN 1 large omap objects
> > LARGE_OMAP_OBJECTS 1 large omap objects
> >     1 large objects found in pool 'cephfs_metadata'
> >     Search the cluster log for 'Large omap object found' for more details.
>
>
> > artemis@icitsrv5:~$ ceph fs status
> > cephfs - 3 clients
> > ======
> > +------+--------+--------------+---------------+-------+-------+
> > | Rank | State  |     MDS      |    Activity   |  dns  |  inos |
> > +------+--------+--------------+---------------+-------+-------+
> > |  0   | active | iccluster015 | Reqs:    0 /s |  251k |  251k |
> > |  1   | active | iccluster001 | Reqs:    3 /s | 20.2k | 19.1k |
> > |  2   | active | iccluster017 | Reqs:    1 /s |  116k |  112k |
> > |  3   | active | iccluster019 | Reqs:    0 /s |  263k |  263k |
> > |  4   | active | iccluster013 | Reqs:  123 /s | 16.3k | 16.3k |
> > +------+--------+--------------+---------------+-------+-------+
> > +-----------------+----------+-------+-------+
> > |       Pool      |   type   |  used | avail |
> > +-----------------+----------+-------+-------+
> > | cephfs_metadata | metadata | 13.9G |  135T |
> > |   cephfs_data   |   data   | 51.3T |  296T |
> > +-----------------+----------+-------+-------+
> > +-------------+
> > | Standby MDS |
> > +-------------+
> > +-------------+
> > MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
>
> > root@iccluster019:~# ceph --cluster artemis daemon osd.13 config show | grep large_omap
> >     "osd_deep_scrub_large_omap_object_key_threshold": "200000",
> >     "osd_deep_scrub_large_omap_object_value_sum_threshold": "1073741824",
>
> > artemis@icitsrv5:~$ rados -p cephfs_metadata listxattr mds3_openfiles.0
> > artemis@icitsrv5:~$ rados -p cephfs_metadata getomapheader mds3_openfiles.0
> > header (42 bytes) :
> > 00000000  13 00 00 00 63 65 70 68  20 66 73 20 76 6f 6c 75  |....ceph fs volu|
> > 00000010  6d 65 20 76 30 31 31 01  01 0d 00 00 00 14 63 00  |me v011.......c.|
> > 00000020  00 00 00 00 00 01 00 00  00 00                    |..........|
> > 0000002a
>
> Best regards,
>
> --
> Yoann Moulin
> EPFL IC-IT
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx