Re: cephfs, low performances

On Mon, Dec 28, 2015 at 1:24 PM, Francois Lafont <flafdivers@xxxxxxx> wrote:
> Hi,
>
> Sorry for my late answer.
>
> On 23/12/2015 03:49, Yan, Zheng wrote:
>
>>>> fio tests AIO performance in this case. cephfs does not handle AIO
>>>> properly; AIO is actually sync IO. That's why cephfs is so slow in
>>>> this case.
>>>
>>> Ah ok, thanks for this very interesting information.
>>>
>>> So, in fact, the question I ask myself is: how do I test my cephfs
>>> to know whether my performance is correct (or not) given my hardware
>>> configuration?
>>>
>>> Because currently, in fact, I'm unable to say whether I have correct
>>> performance (not incredible, but in line with my hardware configuration)
>>> or whether I have a problem. ;)
>>>
>>
>> It's hard to tell. Basically, data IO performance on cephfs should be
>> similar to data IO performance on rbd.
>
> Ok, so on a client node, I have mounted cephfs (via ceph-fuse) and a rados
> block device formatted with XFS. If I have understood correctly, cephfs uses
> sync IO (not async IO) and, with ceph-fuse, cephfs can't do O_DIRECT IO. So,
> I have tested this fio command on cephfs _and_ on rbd:
>
>     fio --randrepeat=1 --ioengine=sync --direct=0 --gtod_reduce=1 --name=readwrite \
>         --filename=rw.data --bs=4k --iodepth=1 --size=300MB --readwrite=randrw     \
>         --rwmixread=50
>
> And indeed, with cephfs _and_ rbd, I get approximately the same result:
> - cephfs => ~516 iops
> - rbd    => ~587 iops
>
> Is it consistent?
>
yes

> That being said, I'm unable to tell whether this is good performance given my
> hardware configuration. I'm curious to know the result on other clusters with
> the same fio command.

This fio command checks the performance of single-thread sync IO. If you
want to check overall throughput, you can try using buffered IO or
increasing the number of threads.
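
For example, something along these lines (just a sketch; adjust --numjobs to
your hardware) should give a rough idea of aggregate throughput rather than
single-thread latency:

    fio --randrepeat=1 --ioengine=sync --direct=0 --gtod_reduce=1 --name=throughput \
        --filename=rw.data --bs=4k --numjobs=8 --size=300MB --readwrite=randrw      \
        --rwmixread=50 --group_reporting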

FYI, I have written a patch to add AIO support to the cephfs kernel client:
https://github.com/ceph/ceph-client/commits/testing

>
> Another point: I have noticed something that seems very strange to me. It's about
> the rados block device and this fio command:
>
>     # In this case, I use libaio and (direct == 0)
>     fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=readwrite \
>         --filename=rw.data --bs=4k --iodepth=16 --size=300MB --readwrite=randrw      \
>         --rwmixread=50
>
> This command on the rados block device gives me ~570 iops. But the curious thing
> is that I get better iops if I just change "--direct=0" to "--direct=1" in the
> command above: in that case, I get ~1400 iops. I don't understand this difference.
> So, I get better performance with "--direct=1":
>
> * --direct=1 => ~1400 iops
> * --direct=0 => ~570 iops
>
> Why do I see this behavior? I thought it would be the opposite (better performance
> with --direct=0). Is this normal?
>
The Linux kernel only supports AIO for file descriptors opened in O_DIRECT
mode. When a file is not opened in O_DIRECT mode, AIO is actually sync IO.
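
For illustration (a rough sketch, using your same test file), you can see this
from userspace: strace the libaio run and compare how long io_submit() takes
with --direct=0 versus --direct=1. With buffered IO the submission itself
blocks for the whole request; with O_DIRECT it returns quickly and
io_getevents() does the waiting:

    # -T prints the time spent inside each syscall, which shows where the waiting happens
    strace -f -T -e trace=io_submit,io_getevents \
        fio --ioengine=libaio --direct=1 --name=aio-test --filename=rw.data \
            --bs=4k --iodepth=16 --size=32MB --readwrite=randread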

Regards
Yan, Zheng


> --
> François Lafont



