Re: CephFS mds cache pressure


 



xiaoxi chen <superdebugger@...> writes:

> 
> Hmm, I asked about this on the ML a few days ago. :) You likely hit the
> kernel bug fixed by commit 5e804ac482 "ceph: don't invalidate page cache
> when inode is no longer used". That fix is in 4.4 but not in 4.2. I
> haven't had a chance to try 4.4 yet, so it would be great if you could
> give it a try.
> 
> For the MDS OOM issue, we ran a scaling test of MDS RSS vs. inode count;
> the result showed roughly 4 MB per 1,000 inodes, so your MDS can likely
> hold up to 2~3 million inodes. But yes, even with the fix, if a client
> misbehaves (opens and holds a lot of inodes and doesn't respond to cache
> pressure messages), the MDS can exceed its throttling limits and then be
> killed by the OOM killer.
> 
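The 4 MB per 1,000 inodes figure quoted above can be turned into a quick back-of-the-envelope estimate. This is only a sketch assuming the ratio scales linearly; the constant is an observation from that one test, not a guarantee:

```python
# Rough MDS memory estimate from the ~4 MB per 1,000 inodes figure
# observed in the scaling test quoted above.
MB_PER_1000_INODES = 4

def max_inodes(mds_ram_mb: int) -> int:
    """Approximate number of inodes an MDS can cache in the given RAM (MB)."""
    return mds_ram_mb // MB_PER_1000_INODES * 1000

# An MDS with roughly 10 GB of RAM available for its cache:
print(max_inodes(10 * 1024))  # 2560000, i.e. ~2.5 million inodes
```

That lines up with the 2~3 million inode estimate for an MDS box of that size.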

Hello!
I will install a newer kernel version and increase the RAM a bit just to
see how it handles it.
Thanks!

> > To: ceph-users <at> lists.ceph.com
> > From: castrofjoao-Re5JQEeQqe8AvxtiuMwx3w@xxxxxxxxxxxxxxxx
> > Date: Tue, 28 Jun 2016 21:34:03 +0000
> > Subject: Re: CephFS mds cache pressure
> >
> > Hey John,
> >
> > ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> > 4.2.0-36-generic
> >
> > Thanks!
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@xxxxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



