Re: ceph mgr memory leak


We are seeing the same mgr memory leak. I suspect it is related to the PID, which Ceph uses as the nonce to distinguish peer addresses; inside a container the daemon always runs as PID 1, which is where the "/1" suffix in the addresses in your log comes from.
You could try setting 'PidMode' to 'host' in your deployment so the container shares the host PID namespace.
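
For a plain Docker deployment that would look something like the sketch below ('PidMode' is the field name in the Docker Engine API; on the CLI it is '--pid'). The image name, mounts, and daemon argument here are placeholders for whatever your deployment tooling already uses:

```
# Run the mgr container in the host PID namespace so the daemon
# gets a real, unique host PID instead of always being PID 1.
docker run -d --name ceph-mgr \
    --pid=host \
    --net=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    ceph/daemon:latest-nautilus mgr
```

If you deploy through an orchestrator instead of raw Docker, set the equivalent option there (e.g. 'hostPID: true' in a Kubernetes pod spec).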

> On Jul 28, 2020, at 2:44 AM, Frank Ritchie <frankaritchie@xxxxxxxxx> wrote:
> 
> Hi all,
> 
> When running containerized Ceph (Nautilus) is anyone else seeing a
> constant memory leak in the ceph-mgr pod with constant ms_handle_reset
> errors in the logs for the backup mgr instance?
> 
> ---
> 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> 0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
> ---
> 
> I see a couple of related reports with no activity:
> 
> 
> https://tracker.ceph.com/issues/36471
> https://tracker.ceph.com/issues/40260
> 
> and one related merge that doesn't seem to have corrected the issue:
> 
> https://github.com/ceph/ceph/pull/24233
> 
> thx
> Frank
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



