Hi all,

I just finished setting up a new Ceph cluster (Luminous 12.2.7, 3x MON nodes and 6x OSD nodes, BlueStore OSDs on SATA HDDs with WAL/DB on separate NVMe devices, 2x10 Gb/s network per node, 3 replicas per pool). I created a CephFS filesystem: the data pool uses the HDD OSDs and the metadata pool uses dedicated NVMe OSDs. I deployed 3 MDS daemons (2 active + 1 standby). The cluster is in 'HEALTH_OK' state and, for now, everything seems to be working perfectly.
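(For context, the HDD/NVMe split can be expressed with the Luminous device-class CRUSH rules, roughly along these lines — the rule names, pool names, PG counts and fs name below are just placeholders, not my exact commands:)

# ceph osd crush rule create-replicated replicated_hdd default host hdd
# ceph osd crush rule create-replicated replicated_nvme default host nvme
# ceph osd pool create cephfs_data 512 512 replicated replicated_hdd
# ceph osd pool create cephfs_metadata 64 64 replicated replicated_nvme
# ceph fs new newhome cephfs_metadata cephfs_data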
My question is about the CephFS client, and especially the huge performance gap between the FUSE client and the kernel one. Here is an example from a client (connected at 10 Gb/s on the same LAN):

CephFS FUSE client:
# ceph-fuse -m FIRST_MON_NODE_IP:6789 /mnt/ceph_newhome/

CephFS kernel driver:
# umount /mnt/ceph_newhome
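(The corresponding kernel mount would be something along these lines — assuming cephx authentication with the admin user and a secretfile at /etc/ceph/admin.secret, with the MON address and mount point as above:)

# mount -t ceph FIRST_MON_NODE_IP:6789:/ /mnt/ceph_newhome -o name=admin,secretfile=/etc/ceph/admin.secret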
I'm impressed by the write speed with the kernel driver, and as I will be able to use the kernel driver on my client systems, I'm satisfied... but I would like to know whether such a difference is normal, or whether there are options/optimizations that improve IO speed with the FUSE client? (I'm thinking in particular of the recovery scenario where the kernel mount is no longer available after a system update/upgrade and I have to fall back on the FUSE client as a temporary replacement... see the P.S. below.)

Thanks for your suggestions,
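P.S. For reference, these are the kind of [client] settings I was planning to experiment with for ceph-fuse — the values below are untested guesses on my part, not recommendations:

[client]
    # ceph-fuse object cacher size in bytes (Luminous default is ~200 MB)
    client oc size = 536870912
    # how much dirty data the object cacher may hold before forcing writeback
    client oc max dirty = 268435456
    # readahead tuning for the userspace client
    client readahead max bytes = 4194304
    client readahead max periods = 8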