Re: cephfs kernel driver - failing to respond to cache pressure


 



Adding that all of my Ceph components are version 10.2.2-0ubuntu0.16.04.2.

OpenStack is Mitaka on Ubuntu 16.04.x, and the Manila file share package is 1:2.0.0-0ubuntu1.

My scenario is that I have a 3-node Ceph cluster running OpenStack Mitaka. Each node has 256 GB of RAM and a 14 TB RAID 5 array. I have 30 VMs running in OpenStack, all of which mount the Manila file share using the native CephFS kernel client. Each VM user has put 10-20 GB of files on the share, but most of this is backup data, so the I/O requirement is very low. I initially tried ceph-fuse, but its latency was poor; switching to the kernel client for mounting the share improved performance greatly. However, I am now getting the "failing to respond to cache pressure" warning.

Can someone help me with the math to properly size the MDS cache? How can I tell whether the cache is simply too small (I think very few files are in use at any given time) or whether the clients are misbehaving and not releasing their capabilities when asked?
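
Here is a rough diagnostic sketch of the kind of check I mean (Python; it assumes the Jewel-era names mds_cache_size, inodes, inodes_with_caps and num_caps, and the MDS daemon name below is just a placeholder). It compares the MDS's current in-memory inode count against mds_cache_size and lists how many caps each client session holds, which is how I understand one tells an undersized cache apart from clients that never release caps:

#!/usr/bin/env python
# Rough sketch; run on the host where the MDS runs (uses the admin socket
# via "ceph daemon"). Assumes Jewel (10.2.x) counter/option names:
# mds_cache_size, inodes, inodes_with_caps, num_caps.
import json
import subprocess

MDS_NAME = "mds.mymds"  # placeholder: replace with your MDS daemon name

def admin_socket(*args):
    out = subprocess.check_output(["ceph", "daemon", MDS_NAME] + list(args))
    return json.loads(out)

mds_perf = admin_socket("perf", "dump")["mds"]
cache_size = int(admin_socket("config", "get", "mds_cache_size")["mds_cache_size"])

print("inodes in MDS cache: %d (mds_cache_size = %d)" % (mds_perf["inodes"], cache_size))
print("inodes pinned by client caps: %d" % mds_perf["inodes_with_caps"])

# A client that is not responding to cache pressure typically shows a large,
# non-shrinking num_caps here even while the VM is idle.
for session in admin_socket("session", "ls"):
    print("client %s holds %d caps" % (session.get("id"), session.get("num_caps", 0)))

The same numbers are visible directly with "ceph daemon mds.<id> perf dump" and "ceph daemon mds.<id> session ls"; the script only does the comparison.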

Thank you!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


