Re: CephFS metadata pool suddenly full (100%)!

Hi,

I've never encountered this before.
To troubleshoot, you could try to identify whether this was caused by
the MDS writing to the metadata pool (e.g. maybe the MDS log?), or by
some operation inside the OSDs which consumed too much space (e.g.
something like compaction?).
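
A quick way to narrow that down is to compare what the pool reports
as STORED with the raw usage on the metadata OSDs. A rough sketch
(osd.<id> is a placeholder for one of your NVMe OSDs):

# ceph df detail
# ceph osd df tree
# ceph daemon osd.<id> perf dump bluefs

If the raw usage on the NVMe OSDs is far above the pool's STORED
value, the space is probably being consumed inside the OSDs (e.g.
omap/RocksDB via BlueFS) rather than by new metadata objects.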

Can you find any unusual messages in ceph.log, or in the MDS or OSD
logs, during the hours when the OSDs were growing?
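
For example, something like this might turn up clues (assuming the
default /var/log/ceph log locations; <id> and <name> are
placeholders):

# grep -iE 'warn|err|full' /var/log/ceph/ceph.log
# grep -iE 'compact|slow' /var/log/ceph/ceph-osd.<id>.log
# ceph daemon mds.<name> perf dump mds_log

The last command should show the current number of MDS journal
events and segments; if those have exploded, the MDS log itself may
be what is filling the pool.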

Cheers, Dan

On Tue, Jun 1, 2021 at 12:24 PM Hervé Ballans
<herve.ballans@xxxxxxxxxxxxx> wrote:
>
> Hi all,
>
> Ceph Nautilus 14.2.16.
>
> We have encountered a strange and critical problem since this morning.
>
> Our CephFS metadata pool suddenly grew from 2.7% to 100%! (in less than
> 5 hours), while there was no significant activity on the data OSDs!
>
> Here are some numbers:
>
> # ceph df
> RAW STORAGE:
>      CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
>      hdd       205 TiB     103 TiB     102 TiB     102 TiB          49.68
>      nvme      4.4 TiB     2.2 TiB     2.1 TiB     2.2 TiB          49.63
>      TOTAL     210 TiB     105 TiB     104 TiB     104 TiB          49.68
>
> POOLS:
>      POOL                     ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
>      cephfs_data_home          7     512      11 TiB      22.58M      11 TiB      18.31        17 TiB
>      cephfs_metadata_home      8     128     724 GiB       2.32M     724 GiB     100.00           0 B
>      rbd_backup_vms            9    1024      19 TiB       5.00M      19 TiB      37.08        11 TiB
>
>
> The cephfs_data pool uses less than half of the storage space, and
> there was no significant increase during (or before) the period when
> the metadata pool became full.
>
> Has anyone encountered this before?
>
> Currently, I have no idea how to solve this problem. Restarting the
> associated OSDs and MDS services has not helped.
>
> Let me know if you want more information or logs.
>
> Thank you for your help.
>
> Regards,
> Hervé
>
>