Is there any way for ceph-osd to control the max fds?

In one environment, which is deployed through containers, I found that ceph-osd
keeps committing suicide due to "error (24) Too many open files".

I then increased LimitNOFILE for the container from 65k to 655k, which made the
error go away. But the FD count keeps growing; it is now around 155k, and I am
afraid it will grow forever.
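
For reference, this is roughly how I raised the limit; a minimal sketch assuming
the OSD container is run through a systemd unit (the unit name and drop-in path
below are just examples from my setup, not from any official package):

    # /etc/systemd/system/ceph-osd-container.service.d/nofile.conf
    # drop-in override for a hypothetical unit wrapping the OSD container
    [Service]
    LimitNOFILE=655360

    # reload and restart so the new limit takes effect
    systemctl daemon-reload
    systemctl restart ceph-osd-container.service

    # or, when launching the container by hand with docker, the
    # equivalent is the --ulimit flag:
    docker run --ulimit nofile=655360:655360 <osd-image>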

I also found that there is an option, `max_open_files`, but it seems to be used
only by the upstart scripts at the OS level, and its default value is now
16384 [2]. If you are using systemd instead, the `max_open_files` option is never
loaded, and the limit is fixed at 1048576 by default [1]. So I guess that if
ceph-osd lives long enough, it will still hit the OS-level limit and commit
suicide in the end.
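
To make the two paths concrete, this is my understanding of them; the exact file
contents vary between releases, so treat it as a sketch:

    # ceph.conf -- only the upstart/sysvinit scripts read this option
    # and use it to raise the ulimit before starting the daemon
    [global]
    max open files = 16384

    # ceph-osd@.service as shipped for systemd -- the limit is
    # hard-coded in the unit file and max_open_files is ignored
    [Service]
    LimitNOFILE=1048576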

So here are my questions:
1. Since almost all OS distros have already moved to systemd, is `max_open_files`
   effectively useless now?
2. Is there any mechanism by which ceph-osd could release some FDs? (See the
   commands below for how I am measuring this.)
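
For context, this is how I am watching the FD usage; it is plain /proc
inspection, nothing Ceph-specific:

    # count open file descriptors for every running ceph-osd process
    for pid in $(pgrep -x ceph-osd); do
        echo "osd pid $pid: $(ls /proc/$pid/fd | wc -l) fds"
    done

    # and show the limit actually in effect for the oldest one
    grep 'open files' /proc/$(pgrep -xo ceph-osd)/limits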



 

--
Regards,
Jeffrey Zhang
