Re: performance exporting RBD over NFS

On Mon, Jun 18, 2018 at 8:54 AM, Marc Boisis <marc.boisis@xxxxxxxxxx> wrote:
> Hi,
>
> I want to export rbd over nfs on a 10Gb network. Server and client are Dell R620s with 10Gb NICs.
> rbd cache is disabled on the server.
>
> NFS server write bandwidth on its rbd is 1196MB/s.
>
> NFS client write bandwidth on the rbd export is only 233MB/s.
> NFS client write bandwidth on a "local-server-disk" export is 839MB/s.
>
> my bench is: fio --time_based --name=benchmark --size=20G --runtime=30 --filename=/video1/fiobench --ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=write --blocksize=256k --group_reporting
> my export: /video1 X.X.X.X(rw,sync,no_root_squash)
> mount : type nfs (rw,noatime,nodiratime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=X.X.X.X,mountvers=3,mountport=20048,mountproto=tcp,local_lock=none,addr=X.X.X.X)
> rbd: rbd image 'video1':
>         size 5120 GB in 1310720 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.1c9dc674b0dc51
>         format: 2
>         features: layering
>         flags:
>
>
> My conclusion:
>         - rbd write performance is good
>         - nfs write performance is good
>         - nfs write on rbd performance is bad
>
> Do you encounter the same problem?
>
> Marc
>


Hi Marc,

We do this quite a bit.  A lot depends on how the client writes the
data.  For example, VMware's small IO is pretty hard on RBD devices,
on top of the overhead of the filesystem that serves NFS.  Once you
account for the number of streams (Ceph is great with multiple
streams, but single-stream performance takes a good deal of tuning)
and the IO size, the results tend to match expectations.  A quick way
to compare is sketched below.
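
Just a sketch reusing your own fio flags (so /video1/fiobench is your
existing test file over the NFS mount): run one stream, then several,
and compare:

# single stream
fio --time_based --name=single --size=20G --runtime=30 \
    --filename=/video1/fiobench --ioengine=libaio --iodepth=128 \
    --direct=1 --numjobs=1 --rw=write --blocksize=256k --group_reporting

# multiple streams (same flags, more jobs)
fio --time_based --name=multi --size=20G --runtime=30 \
    --filename=/video1/fiobench --ioengine=libaio --iodepth=128 \
    --direct=1 --numjobs=8 --rw=write --blocksize=256k --group_reporting

If the multi-stream run gets close to your local RBD number while the
single stream lags far behind, the limit is per-stream latency rather
than raw bandwidth.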

Good low-latency WAL/DB/journal devices also seem to help a lot with
real-life IO; a rough example follows.
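
For instance (a sketch assuming BlueStore OSDs; /dev/sdb and
/dev/nvme0n1p1 are placeholder device names, adjust for your hosts),
the DB/WAL can be placed on a fast NVMe partition at OSD creation:

# data on the large device, RocksDB metadata + WAL on low-latency NVMe
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

Without a separate --block.wal, the WAL is colocated with block.db,
which is usually what you want when there is a single fast device.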

--
Alex Gorbachev
Storcium
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


