OSD process threads stack up after OSD node failure

Hi cephers,
after one OSD node crashed (6 OSDs in total), we observed an increase
of approximately 230-260 threads on every other OSD node. We have 26
OSD nodes with 6 OSDs per node, so this works out to roughly 40 extra
threads per OSD. The crashed node rejoined the cluster after 15-20
minutes.
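
In case it helps reproduce the numbers: a minimal sketch (not the
exact commands we ran) of counting threads per ceph-osd process via
/proc on Linux. The "ceph-osd" comm name is the stock daemon name;
adjust it if your deployment differs.

#!/usr/bin/env python3
# Sketch: count threads per ceph-osd process via /proc (Linux only).
import os

def osd_thread_counts():
    counts = {}
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % pid) as f:
                comm = f.read().strip()
            if comm != "ceph-osd":
                continue
            # Each entry under /proc/<pid>/task is one thread of the process.
            counts[int(pid)] = len(os.listdir("/proc/%s/task" % pid))
        except OSError:
            # Process exited while we were scanning; skip it.
            continue
    return counts

if __name__ == "__main__":
    for pid, n in sorted(osd_thread_counts().items()):
        print("ceph-osd pid %d: %d threads" % (pid, n))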

The only workaround I have found so far is to restart all the OSDs in
the cluster, but that is quite a heavy operation. Could you help me
understand whether the behaviour described above is expected, and what
might cause it? Does Ceph clean up OSD process threads appropriately?

Extra info: all threads are currently in the sleeping state, and
context switches have stabilized at pre-crash levels.
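
The sleeping-state observation can be checked with a similar /proc
sketch: field 3 of /proc/<pid>/task/<tid>/stat (after the
parenthesised comm) is the thread state letter, "S" meaning
interruptible sleep. Pass the OSD's pid as the first argument.

#!/usr/bin/env python3
# Sketch: tally thread states for one ceph-osd process via /proc (Linux only).
import os
import sys
from collections import Counter

def thread_states(pid):
    states = Counter()
    for tid in os.listdir("/proc/%d/task" % pid):
        try:
            with open("/proc/%d/task/%s/stat" % (pid, tid)) as f:
                # State letter is the first field after the ")" closing comm.
                states[f.read().rsplit(")", 1)[1].split()[0]] += 1
        except OSError:
            continue  # thread exited mid-scan
    return states

if __name__ == "__main__":
    print(dict(thread_states(int(sys.argv[1]))))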

Regards,
Kostis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


