Re: cephfs, low performances

On 18 December 2015 at 15:48, Don Waterloo <don.waterloo@xxxxxxxxx> wrote:


On 17 December 2015 at 21:36, Francois Lafont <flafdivers@xxxxxxx> wrote:
Hi,

I have a Ceph cluster that is currently unused, and I'm seeing (to my mind) very low performance.
I'm not an expert at benchmarks; here is an example of a quick bench:

---------------------------------------------------------------
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=readwrite --filename=rw.data --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50
readwrite: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.1.3

 ...

I am seeing the same sort of issue.
If I run your 'fio' command sequence on my CephFS mount, I see ~120 IOPS.
If I run it on one of the underlying OSDs (e.g. in /var... on the mount point of the XFS filesystem), I get ~20k IOPS.
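For anyone wanting to reproduce the comparison, I point the identical fio job at both targets. The paths below are placeholders for my setup (a CephFS mount at /mnt/cephfs and an OSD data dir at /var/lib/ceph/osd/ceph-0 are assumptions; substitute your own):

# against the CephFS mount (mount point is an assumption)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=readwrite --filename=/mnt/cephfs/rw.data --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50

# directly against one OSD's XFS data directory, bypassing Ceph (path is an assumption)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=readwrite --filename=/var/lib/ceph/osd/ceph-0/rw.data --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50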


If I run:
rbd -p mypool create speed-test-image --size 1000
rbd -p mypool bench-write speed-test-image

I get:

bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern seq
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     79053  79070.82  323874082.50
    2    144340  72178.81  295644410.60
    3    221975  73997.57  303094057.34
elapsed:    10  ops:   262144  ops/sec: 26129.32  bytes/sec: 107025708.32

which is *much* faster than the CephFS result.
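One caveat: bench-write above runs sequential 4k writes (pattern seq) with 16 threads, while the fio job is random read/write at iodepth 64, so the two numbers aren't strictly comparable. A rough sketch of an apples-to-apples test would be to map the image with the kernel client and run the identical fio job against the block device (the /dev/rbd0 name is an assumption; use whatever 'rbd map' prints):

# map the image via the kernel rbd client
sudo rbd map mypool/speed-test-image
# run the same random-rw fio job against the mapped device (assumed /dev/rbd0)
sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=readwrite --filename=/dev/rbd0 --bs=4k --iodepth=64 --size=300MB --readwrite=randrw --rwmixread=50
# clean up
sudo rbd unmap /dev/rbd0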
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
