Odd CephFS Performance

We have been benchmarking CephFS and comparing it to RADOS to see the performance difference and how much overhead CephFS adds. However, when using more than one OSD server (each OSD server has a single disk) we get odd results with CephFS, while with RADOS everything appears normal. These tests were run on the same Ceph cluster.

OSD servers    CephFS (16 threads)    RADOS (16 threads)
1              289                    316
2              139                    546
3              143                    728
4              142                    844

CephFS is being benchmarked using: fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=4M --numjobs=16 --size=1G --group_reporting
RADOS is being benchmarked using: rados bench -p cephfs_data 10 write -t 16
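
For completeness, the fio job runs against the CephFS mount point; written out with the mount path made explicit (the /mnt/cephfs path below is only a placeholder for the actual mount), the job is:

fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=4M \
    --numjobs=16 --size=1G --directory=/mnt/cephfs --group_reporting

rados bench writes 4 MiB objects by default, so both tests issue 4 MiB writes.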

If you could provide any help or insight into why this is happening, or how to resolve it, that would be much appreciated.

Kind regards,

Gabryel