Re: cephfs, low performance

On Mon, Dec 21, 2015 at 11:46 PM, Don Waterloo <don.waterloo@xxxxxxxxx> wrote:
> On 20 December 2015 at 22:47, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>
>> >> ---------------------------------------------------------------
>> >>
>>
>>
>> In this case fio is testing AIO performance. cephfs does not handle
>> AIO properly: AIO requests are actually executed as sync IO. That is
>> why cephfs is so slow in this case.
>>
>> Regards
>> Yan, Zheng
>>
>
> OK, so I changed the fio engine to 'sync' for the comparison of a single
> underlying OSD vs cephfs.
>
> cephfs w/ sync gets ~115 IOPS / ~500 KB/s.

This is normal because you were doing single-threaded sync IO. If the
round-trip time for each OSD request is about 10 ms (network latency),
a single thread can only achieve about 100 IOPS (1 / 0.010 s).
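As a rough way to see that latency bound, compare a single sync job against
several running in parallel. This is only a sketch; the mount point, job
names, and sizes below are assumptions, not values from this thread:

  # single-threaded sync IO: bounded by per-request latency (~100 IOPS at 10 ms)
  fio --name=sync1 --ioengine=sync --rw=randwrite --bs=4k --size=1G \
      --numjobs=1 --directory=/mnt/cephfs/fiotest

  # 16 parallel jobs: concurrency hides the per-request latency
  fio --name=sync16 --ioengine=sync --rw=randwrite --bs=4k --size=1G \
      --numjobs=16 --group_reporting --directory=/mnt/cephfs/fiotest

With many requests in flight, the aggregate IOPS should scale well past the
single-thread figure.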

> the underlying OSD storage w/ sync is 6500 IOPS / 270 MB/s.
>
> I also don't think this explains why cephfs-fuse is faster (~5x faster, but
> still ~100x slower than it should be).
>

Your test case uses direct IO. ceph-fuse does not handle direct IO
correctly: the user-space cache is still used in the direct-IO case,
which is why ceph-fuse appears faster than the kernel client here.
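For reference, a direct-IO test of this kind is usually requested with
something like the fio invocation below; the exact job file was not quoted
in this thread, so these flags and paths are assumptions:

  # direct=1 requests O_DIRECT; the kernel client honours it, but ceph-fuse
  # still services the IO through its user-space cache
  fio --name=directwrite --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --size=1G --directory=/mnt/cephfs/fiotest

So part of what the ceph-fuse numbers show is cache performance, not the
cluster.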

Regards
Yan, Zheng


> If I get rid of fio and use tried-and-true dd:
> time dd if=/dev/zero of=rw.data bs=256k count=10000
> on the underlying OSD storage it shows 426 MB/s;
> on cephfs it gets 694 MB/s.
>
> hmm.
>
> so I guess my 'lag' issue of slow requests is unrelated and is my real
> problem.
>
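One caveat on that dd comparison: without a sync or direct flag, dd largely
measures the client page cache rather than the storage underneath, which can
make cephfs look faster than a single local disk. A variant along these lines
(flags assumed; the path is taken from the quoted command) forces the data
out before reporting throughput:

  # flush data to stable storage before dd reports its rate
  time dd if=/dev/zero of=rw.data bs=256k count=10000 conv=fdatasync

  # or bypass the cache entirely on each write
  time dd if=/dev/zero of=rw.data bs=256k count=10000 oflag=direct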
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


