Re: cephfs, low performances

Hello,

On 18/12/2015 23:26, Don Waterloo wrote:

> rbd -p mypool create speed-test-image --size 1000
> rbd -p mypool bench-write speed-test-image
> 
> I get
> 
> bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern seq
>   SEC       OPS   OPS/SEC   BYTES/SEC
>     1     79053  79070.82  323874082.50
>     2    144340  72178.81  295644410.60
>     3    221975  73997.57  303094057.34
> elapsed:    10  ops:   262144  ops/sec: 26129.32  bytes/sec: 107025708.32
> 
> which is *much* faster than the cephfs.

Me too, I get better performance with rbd (~1400 iops with the fio command from
my first message, versus ~575 iops with the same fio command on cephfs).

The question is: are ~575 iops normal for cephfs with my config? I imagine that
rbd is expected to perform better than cephfs, so maybe my iops value is normal
after all. I don't know...

I have tried to edit the crushmap to put the cephfs metadata pool on the 5 SSDs
only. It seems to improve performance slightly: with the fio command from my first
message, I now get ~650 iops, but that still seems poor to me, no?
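
For reference, this is roughly the workflow I followed to repoint the metadata pool
(the pool name and the ruleset id below are just examples from my setup, and the
SSD-only root/ruleset itself is added by hand in the decompiled map):

  # dump and decompile the current crushmap
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # ... manually add an SSD-only root and ruleset in crushmap.txt ...
  # recompile and inject the new map
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin
  # point the metadata pool at the SSD ruleset (id 1 here, just an example)
  ceph osd pool set cephfs_metadata crush_ruleset 1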

Currently I'm looking for any option in ceph.conf, or any mount option, that could
improve performance with cephfs via ceph-fuse. In the "ceph-users" archives, I have
seen the options "client cache size" and "client oc size", which are supposedly
used by ceph-fuse.

Is it correct?

I don't see anything in the documentation. Where should I put these parameters?
In the ceph.conf of the client which mounts the cephfs via fuse? In the [global]
section? I have tried that, but the parameters seem to be ignored. Indeed, I put
these parameters in the [global] section of ceph.conf (on the client node) and set
very, very small values like this:

[global]
  client cache size = 1024
  client oc size    = 1024

and I expected this to decrease performance dramatically, but there is absolutely
no effect: I get the same result (i.e. ~650 iops), so I think the parameters are
just ignored. Is this the right place to put them?
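
If it matters, my next test will be to move them into a [client] section of the
client node's ceph.conf and to point ceph-fuse explicitly at that file (this is
only a guess on my part, I don't know whether ceph-fuse actually reads them from
there):

  [client]
    # deliberately tiny values, only to check whether the options are read at all
    client cache size = 1024
    client oc size    = 1024

and then remount, for instance (paths here are just examples):

  ceph-fuse -c /etc/ceph/ceph.conf /mnt/cephfs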

Furthermore, do you know of mount options that can improve performance (for a
cephfs mount via ceph-fuse)?

It seems to me that a noacl mount option existed, but ceph-fuse does not recognize
it (I have no need for ACLs). I haven't found a list of the mount options on the
web; I can only display a short list with the command "ceph-fuse -h". I have tried
to change the max_* options, but without effect.
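
For the record, here is the kind of invocation I have been experimenting with; the
monitor address, the mountpoint and the FUSE options are just examples, and I am
not even sure ceph-fuse really passes these -o options down to libfuse:

  # mount cephfs via ceph-fuse, trying generic libfuse options
  ceph-fuse -m 10.0.0.1:6789 -o big_writes,atomic_o_trunc /mnt/cephfs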

Thanks in advance for your help.

-- 
François Lafont