Re: Large OMAP Objects in default.rgw.log pool

That can happen if you have a lot of objects with Swift object expiry (TTL) enabled. You can run 'listomapkeys' on these log pool objects and check for the objects that have been registered for TTL as omap entries. I know this is the case with at least the Jewel version.
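
For example, something like this (just a rough sketch; the obj_delete_at_hint.* names are the ones that show up further down in this thread, adjust to your cluster):

    # list the expiry-hint objects in the log pool
    rados -p default.rgw.log ls | grep obj_delete_at_hint
    # dump the omap keys (i.e. the registered TTL entries) for one of them
    rados -p default.rgw.log listomapkeys obj_delete_at_hint.0000000078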

Thanks,
-Pavan.

On 3/7/19, 10:09 PM, "ceph-users on behalf of Brad Hubbard" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of bhubbard@xxxxxxxxxx> wrote:

    On Fri, Mar 8, 2019 at 4:46 AM Samuel Taylor Liston <sam.liston@xxxxxxxx> wrote:
    >
    > Hello All,
    >         I have recently had 32 large omap objects appear in my default.rgw.log pool. Running luminous 12.2.8.
    >
    >         Not sure what to think about these. From what I’ve read, these warnings normally occur when a bucket needs resharding, but it doesn’t look like my default.rgw.log pool has anything in it, let alone buckets. Here’s some info on the system:
    >
    > [root@elm-rgw01 ~]# ceph versions
    > {
    >     "mon": {
    >         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 5
    >     },
    >     "mgr": {
    >         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 1
    >     },
    >     "osd": {
    >         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 192
    >     },
    >     "mds": {},
    >     "rgw": {
    >         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 1
    >     },
    >     "overall": {
    >         "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0) luminous (stable)": 199
    >     }
    > }
    > [root@elm-rgw01 ~]# ceph osd pool ls
    > .rgw.root
    > default.rgw.control
    > default.rgw.meta
    > default.rgw.log
    > default.rgw.buckets.index
    > default.rgw.buckets.non-ec
    > default.rgw.buckets.data
    > [root@elm-rgw01 ~]# ceph health detail
    > HEALTH_WARN 32 large omap objects
    > LARGE_OMAP_OBJECTS 32 large omap objects
    >     32 large objects found in pool 'default.rgw.log'
    >     Search the cluster log for 'Large omap object found' for more details.
    >
    > Looking closer at these objects, they are all of size 0. Also, that pool shows a capacity usage of 0:
    
    The size here relates to data size. Object map (omap) data is metadata,
    so an object of size 0 can have considerable omap data associated with
    it (the omap data is stored separately from the object in a key/value
    database). The large omap warning in the health detail output should tell
    you "Search the cluster log for 'Large omap object found' for more
    details." If you do that you should get the names of the specific
    objects involved. You can then use the rados commands listomapkeys and
    listomapvals to see the specifics of the omap data. Someone more
    familiar with rgw can then probably help you out on what purpose they
    serve.
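    
    For instance, a rough sketch of the sort of thing I mean (the object name
    is borrowed from your listing below; use whatever names the cluster log
    actually reports, and note the log path may differ on your mons):
    
        # find the flagged objects in the cluster log on a mon host
        grep 'Large omap object found' /var/log/ceph/ceph.log
        # count the omap entries on one of them
        rados -p default.rgw.log listomapkeys obj_delete_at_hint.0000000078 | wc -l
        # inspect a few of the keys/values themselves
        rados -p default.rgw.log listomapvals obj_delete_at_hint.0000000078 | head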
    
    HTH.
    
    >
    > (just a sampling of the 236 objects at size 0)
    >
    > [root@elm-mon01 ceph]# for i in `rados ls -p default.rgw.log`; do echo ${i}; rados stat -p default.rgw.log ${i};done
    > obj_delete_at_hint.0000000078
    > default.rgw.log/obj_delete_at_hint.0000000078 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000070
    > default.rgw.log/obj_delete_at_hint.0000000070 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000104
    > default.rgw.log/obj_delete_at_hint.0000000104 mtime 2019-03-07 11:39:20.000000, size 0
    > obj_delete_at_hint.0000000026
    > default.rgw.log/obj_delete_at_hint.0000000026 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000028
    > default.rgw.log/obj_delete_at_hint.0000000028 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000040
    > default.rgw.log/obj_delete_at_hint.0000000040 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000015
    > default.rgw.log/obj_delete_at_hint.0000000015 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000069
    > default.rgw.log/obj_delete_at_hint.0000000069 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000095
    > default.rgw.log/obj_delete_at_hint.0000000095 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000003
    > default.rgw.log/obj_delete_at_hint.0000000003 mtime 2019-03-07 11:39:19.000000, size 0
    > obj_delete_at_hint.0000000047
    > default.rgw.log/obj_delete_at_hint.0000000047 mtime 2019-03-07 11:39:19.000000, size 0
    >
    >
    > [root@elm-mon01 ceph]# rados df
    > POOL_NAME                  USED    OBJECTS   CLONES COPIES     MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS    RD      WR_OPS    WR
    > .rgw.root                  1.09KiB         4      0         12                  0       0        0     14853 9.67MiB         0     0B
    > default.rgw.buckets.data    444TiB 166829939      0 1000979634                  0       0        0 362357590  859TiB 909188749 703TiB
    > default.rgw.buckets.index       0B       358      0       1074                  0       0        0 729694496 1.04TiB 522654976     0B
    > default.rgw.buckets.non-ec      0B       182      0        546                  0       0        0 194204616  148GiB  97962607     0B
    > default.rgw.control             0B         8      0         24                  0       0        0         0      0B         0     0B
    > default.rgw.log                 0B       236      0        708                  0       0        0  33268863 3.01TiB  18415356     0B
    > default.rgw.meta           16.2KiB        67      0        201                  0       0        0 466663427  371GiB     27647 146KiB
    >
    > total_objects    166830794
    > total_used       668TiB
    > total_avail      729TiB
    > total_space      1.36PiB
    >
    >
    > Does anyone know the importance of these objects or if they can safely be deleted?
    > Thank you,
    >
    > Sam Liston (sam.liston@xxxxxxxx)
    > ========================================
    > Center for High Performance Computing
    > 155 S. 1452 E. Rm 405
    > Salt Lake City, Utah 84112 (801)232-6932
    > ========================================
    >
    >
    >
    >
    
    
    
    -- 
    Cheers,
    Brad
    

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



