Re: cephfs, low performance

On 20 December 2015 at 08:35, Francois Lafont <flafdivers@xxxxxxx> wrote:
Hello,

On 18/12/2015 23:26, Don Waterloo wrote:

> rbd -p mypool create speed-test-image --size 1000
> rbd -p mypool bench-write speed-test-image
>
> I get
>
> bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern seq
>   SEC       OPS   OPS/SEC   BYTES/SEC
>     1     79053  79070.82  323874082.50
>     2    144340  72178.81  295644410.60
>     3    221975  73997.57  303094057.34
> elapsed:    10  ops:   262144  ops/sec: 26129.32  bytes/sec: 107025708.32
>
> which is *much* faster than cephfs.

Me too: I get better performance with rbd (~1400 iops with the fio command
from my first message, versus ~575 iops with the same fio command on cephfs).
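
(The exact fio command from that first message isn't quoted in this thread;
purely as an illustrative sketch, a small-block random-write test of this kind
might look like the following, where /mnt/cephfs and the job name are
placeholders:

  # hypothetical 4k random-write test against a cephfs mount
  fio --name=cephfs-test --directory=/mnt/cephfs \
      --rw=randwrite --bs=4k --iodepth=16 \
      --size=1G --runtime=60 --direct=1 --ioengine=libaio

The 4k block size and queue depth of 16 mirror the io_size 4096 / io_threads 16
defaults in the rbd bench-write output above, which keeps the two numbers
roughly comparable.)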

I did a bit more work on this.

On cephfs-fuse, I get ~700 iops.
On the cephfs kernel client, I get ~120 iops.
Both were on the 4.3 kernel.
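
(For anyone reproducing this, the two clients are typically mounted along
these lines; the monitor address, mount point, and secret file below are
placeholders, not details from this thread:

  # FUSE client
  ceph-fuse -m mon1:6789 /mnt/cephfs

  # kernel client
  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

Same mount point, same pool behind it; only the client implementation differs.)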

So I rolled back to the 3.16 kernel on the client, and observed the same results.

So ~20K iops w/ rbd, ~120 iops w/ cephfs (kernel client).
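
(One way to check whether the gap sits in the filesystem layer rather than in
the pool itself, as an illustrative next step rather than something run in this
thread, is a raw 4k write bench against the same pool:

  # 10-second write benchmark, 4k objects, 16 concurrent ops, straight to the pool
  rados bench -p mypool 10 write -b 4096 -t 16

If that lands near the rbd figure, the bottleneck is likely above RADOS, e.g.
in the MDS or client path, rather than in the OSDs.)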


