CephFS - Couple of questions

Hello,

 

Hoping to pick the brains of any users with production CephFS deployments, as we're preparing to deploy CephFS to replace Gluster for our container-based storage needs.

 

(Target OS is CentOS 7 for both servers and clients, running the latest Jewel release.)

 

o) Based on our performance testing, the kernel client far outperforms the FUSE client. Older mailing list posts from 2014 suggest this is expected; is the recommendation still to use the kernel client?
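For context, these are roughly the two mount paths we're comparing (monitor address, mount point, and secret file are placeholders for our setup):

    # CephFS kernel client mount
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # CephFS FUSE client mount
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs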

 

o) Ref: http://docs.ceph.com/docs/master/cephfs/experimental-features/ lists multiple MDS as experimental. I'm assuming this refers to multiple active MDS daemons, and that running one active with X standbys is a valid/stable configuration? (We haven't noticed any issues during testing; just wanting to be sure.)
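In case it helps frame the question, this is roughly how we're checking the topology on our test cluster (the filesystem name "cephfs" is just our label):

    # Expect one rank up:active and the remaining MDS daemons up:standby
    ceph mds stat

    # Keep a single active rank (the Jewel default); a standby takes over on failure
    ceph fs set cephfs max_mds 1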

 

Cheers,

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
