On Wed, Nov 16, 2016 at 8:55 AM, James Wilkins <James.Wilkins@xxxxxxxxxxxxx> wrote:
> Hello,
>
> Hoping to pick other users' brains in relation to production CephFS
> deployments, as we're preparing to deploy CephFS to replace Gluster for
> our container-based storage needs.
>
> (Target OS is CentOS 7 for both servers/clients & latest Jewel release)
>
> o) Based on our performance testing we're seeing that the kernel client
> by far outperforms the fuse client – older mailing list posts from 2014
> suggest this is expected. Is the recommendation still to use the kernel
> client?

The kernel client does usually beat the fuse client in benchmarks, but the
practical difference depends on how data/metadata-heavy your workload is,
and on whether your workload concentrates through a single client or is
spread across multiple less-loaded clients. Many everyday workloads would
not notice the difference.

In general I recommend using the fuse client unless its performance becomes
an issue for you. At that point, work out whether you are comfortable
running a kernel recent enough to have the latest CephFS fixes, or switch
to a distro that backports those fixes into its stable kernel. (A minimal
sketch of both mount invocations follows at the end of this message.)

John

> o) Ref: http://docs.ceph.com/docs/master/cephfs/experimental-features/
> lists multiple MDS as experimental – I'm assuming this refers to multiple
> active MDS daemons, and that having one active / X standby is a
> valid/stable configuration? (We haven't noticed any issues during testing
> – just wanting to be sure.)
>
> Cheers,
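
For reference, here is a minimal sketch of mounting CephFS with each
client on a Jewel-era CentOS 7 host. The monitor address, mount point,
and secret-file path are placeholder assumptions, not values from this
thread:

    # Kernel client: requires a kernel with CephFS support and a cephx
    # secret extracted from the keyring into a file.
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client: picks up /etc/ceph/ceph.conf and the keyring by default.
    ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs

Both commands mount the root of the filesystem. Note that the kernel
client's fixes ship with your kernel version, which is why the advice
above hinges on how recent a kernel you are comfortable running.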
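
On the standby question, a hedged sketch of bringing up a one-active /
one-standby pair and verifying the layout. The daemon names "a" and "b"
and the sample status output are illustrative only:

    # With a single filesystem and max_mds at its default of 1, any
    # additional ceph-mds daemon automatically becomes a standby; no
    # special configuration is required.
    systemctl start ceph-mds@a    # first daemon becomes active
    systemctl start ceph-mds@b    # second daemon becomes a standby

    # Verify one active rank and one standby:
    ceph mds stat
    # e.g. "e5: 1/1/1 up {0=a=up:active}, 1 up:standby"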