Re: CephFS metadata pool size

Hi,

could you check whether the thread [1] applies to your situation? You don't have multi-active MDS enabled, but it could still be a journal trimming issue, or perhaps misbehaving clients. Your first post mentioned health warnings about cache pressure and cache size; have those been resolved?

[1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/7U27L27FHHPDYGA6VNNVWGLTXCGP7X23/#VOOV235D4TP5TEOJUWHF4AVXIOTHYQQE
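If it helps, this is roughly what I would check first. The MDS name and metadata pool name below are placeholders, adjust them to your setup:

  ceph health detail                 # current warnings (cache pressure, nearfull, ...)
  ceph fs status                     # MDS state and metadata pool usage
  ceph daemon mds.<name> session ls  # look for clients holding a very large num_caps
  ceph daemon mds.<name> perf dump   # check the mds_log section to see if journal trimming keeps up
  ceph osd df tree                   # per-OSD utilization of the metadata OSDs
  ceph pg ls-by-pool <metadata_pool> # check whether the pool's PGs are spread evenly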

Quoting Lars Köppel <lars.koeppel@xxxxxxxxxx>:

Hello everyone,

A short update on this problem.
The zapped OSD has been rebuilt and now holds 1.9 TiB (the expected size, ~50%).
The other two OSDs are now at 2.8 and 3.2 TiB, respectively. They have jumped up and
down a lot, but the larger one has now also reached 'nearfull' status. How
is this possible? What is going on?

Does anyone have a suggestion for how to fix this without zapping the OSD?

Best regards,
Lars


Lars Köppel
Developer
Email: lars.koeppel@xxxxxxxxxx
Phone: +49 6221 5993580
ariadne.ai (Germany) GmbH
Häusserstraße 3, 69115 Heidelberg
Amtsgericht Mannheim, HRB 744040
Geschäftsführer: Dr. Fabian Svara
https://ariadne.ai
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx