CephFS : fuse client vs kernel driver

Hi all,

I just finished setting up a new Ceph cluster (Luminous 12.2.7, 3x MON nodes and 6x OSD nodes, BlueStore OSDs on SATA HDDs with WAL/DB on separate NVMe devices, 2x 10 Gb/s network per node, 3 replicas per pool).

I created a CephFS filesystem: the data pool uses the HDD OSDs and the metadata pool uses dedicated NVMe OSDs. I deployed 3 MDS daemons (2 active + 1 standby).
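
(For context, a layout like that can be built with device-class CRUSH rules roughly as follows; the rule/pool names and PG counts here are only placeholders, not necessarily what I ran:)

# ceph osd crush rule create-replicated replicated_hdd default host hdd
# ceph osd crush rule create-replicated replicated_nvme default host nvme
# ceph osd pool create cephfs_data 512 512 replicated replicated_hdd
# ceph osd pool create cephfs_metadata 64 64 replicated replicated_nvme
# ceph fs new cephfs cephfs_metadata cephfs_data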

My Ceph cluster is in 'HEALTH_OK' state, for now everything seems to be working perfectly.

My question concerns the CephFS clients, and in particular the huge performance gap between the FUSE client and the kernel one.
On the same write test, run one right after the other, I see a factor of about 55 between the two!

Here is an example from a client (connected at 10 Gb/s on the same LAN):

CephFS FUSE client:

# ceph-fuse -m FIRST_MON_NODE_IP:6789 /mnt/ceph_newhome/
# time sh -c "dd if=/dev/zero of=/mnt/ceph_newhome/test_io_fuse_mount.tmp bs=4k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB, 7.6 GiB) copied, 305.57 s, 26.8 MB/s

real    5m5.607s
user    0m1.784s
sys     0m28.584s
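
(If part of this gap is just the cost of pushing 4 KiB writes through FUSE one request at a time, a retest with a larger block size along these lines should show it; same mount as above, the file name is only an example:)

# time sh -c "dd if=/dev/zero of=/mnt/ceph_newhome/test_io_fuse_bigbs.tmp bs=4M count=2000 && sync"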

CephFS kernel driver:

# umount /mnt/ceph_newhome
# mount -t ceph FIRST_MON_NODE_IP:6789:/ /mnt/ceph_newhome -o name=admin,secret=`ceph-authtool -p /etc/ceph/ceph.client.admin.keyring`
# time sh -c "dd if=/dev/zero of=/mnt/ceph_newhome/test_io_kernel_mount.tmp bs=4k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
8192000000 bytes (8.2 GB, 7.6 GiB) copied, 5.47228 s, 1.5 GB/s

real    0m15.161s
user    0m0.444s
sys     0m5.024s
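
(One caveat about my own numbers: with the kernel client the 1.5 GB/s reported by dd mostly reflects the page cache, and the flush happens in the trailing sync, hence ~15 s of real time for ~8.2 GB, i.e. roughly 540 MB/s sustained. A variant like the following should make the two measurements directly comparable by including the flush in the reported rate; the file name is only an example:)

# dd if=/dev/zero of=/mnt/ceph_newhome/test_io_kernel_sync.tmp bs=4k count=2000000 conv=fdatasync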

I'm impressed by the write speed with the kernel driver, and since I need to be able to use the kernel driver on my client systems anyway, I'm satisfied... but I would like to know whether such a difference is normal, or whether there are options/optimizations that improve I/O speed with the FUSE client. (I'm thinking in particular of a recovery scenario where the kernel client can no longer be mounted after a system update/upgrade and I have to fall back to the FUSE client temporarily...)
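
For completeness, on the clients I intend to persist the kernel mount via fstab, roughly like this (the secretfile path is only an example):

# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret

and in /etc/fstab:

FIRST_MON_NODE_IP:6789:/  /mnt/ceph_newhome  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0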

Thanks for your suggestions,
Hervé



