Disproportionate Metadata Size

Hi

On one of our Ceph clusters, some OSDs have been marked as full. This is strange, since it is a staging cluster that holds very little data.

Looking at the full OSDs with “ceph osd df”, I found that the space is mostly consumed by metadata:

    SIZE: 122 GiB
    USE: 118 GiB
    DATA: 2.4 GiB
    META: 116 GiB
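
For reference, this is roughly how I picked out the affected OSDs. It is just a quick sketch that assumes the JSON output of “ceph osd df” exposes per-OSD kb_used_data and kb_used_meta fields; the exact field names may differ between releases:

    # List OSDs whose metadata usage dwarfs their data usage (the 10x threshold is arbitrary).
    # Field names (kb_used_data, kb_used_meta) are an assumption and may vary by release.
    ceph osd df -f json | jq -r '
      .nodes[]
      | select(.kb_used_meta > 10 * .kb_used_data)
      | "\(.name): data \(.kb_used_data) KiB, meta \(.kb_used_meta) KiB"'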

We run Mimic, and the affected OSDs use a DB device (NVMe) in addition to the primary device (HDD).

In the logs we see the following errors:

    2020-05-12 17:10:26.089 7f183f604700  1 bluefs _allocate failed to allocate 0x400000 on bdev 1, free 0x0; fallback to bdev 2
    2020-05-12 17:10:27.113 7f183f604700  1 bluestore(/var/lib/ceph/osd/ceph-8) _balance_bluefs_freespace gifting 0x180a000000~400000 to bluefs
    2020-05-12 17:10:27.153 7f183f604700  1 bluefs add_block_extent bdev 2 0x180a000000~400000
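
To check whether the DB has spilled over from the NVMe onto the slow device, I have been looking at the bluefs perf counters over the admin socket. A rough sketch; the counter names (db_used_bytes, slow_used_bytes, …) are what I see on our OSDs and may differ on other releases:

    # Compare DB usage on the fast device against any spillover onto the slow (hdd) device.
    # Counter names are an assumption based on our OSDs; adjust if your release differs.
    ceph daemon osd.8 perf dump | jq '.bluefs
      | {db_total_bytes, db_used_bytes, slow_total_bytes, slow_used_bytes}'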

We assume it is an issue with RocksDB, as the following call quickly fixes the problem:

    ceph daemon osd.8 compact
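
So far I run this by hand per OSD. A simple way to compact every OSD on a host would be to loop over the admin sockets, something like the sketch below (it assumes the default /var/run/ceph socket path):

    # Compact every OSD that has an admin socket on this host.
    # Assumes the default admin socket location; adjust the glob for your setup.
    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo "compacting ${sock}"
        ceph daemon "${sock}" compact
    done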

The question is: why is this happening? I would have thought that “compact” is something that runs automatically from time to time, but I’m not sure.

Is it on us to run this regularly?

Any pointers are welcome. I’m quite new to Ceph :)

Cheers,

Denis



