Re: improve single job sequential read performance.

On Wed, Mar 7, 2018 at 8:37 PM, Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, Mar 7, 2018 at 9:43 AM, Cassiano Pilipavicius
> <cassiano@xxxxxxxxxxx> wrote:
>> Hi all, this issue has already been discussed in older threads and I've
>> already tried most of the solutions proposed there.
>>
>>
>> I have a small and old Ceph cluster (started on Hammer and upgraded up to
>> Luminous 12.2.2), connected through a single shared 1GbE link (I know this
>> is not optimal, but for my workload it handles the load reasonably well). I
>> use RBD for small VMs under libvirt/qemu.
>>
>> My problem is: if I need to copy a large file (cp, dd, tar), the read
>> speed is very low (15MB/s). I've tested the single-job write speed with dd
>> from /dev/zero to a file using direct I/O, and that speed is good enough
>> for my environment (80MB/s).
>>
>> If I run parallel jobs, I can saturate the network connection; the speed
>> scales with the number of jobs. I've tried setting read-ahead in ceph.conf
>> and in the guest OS (a sketch of both follows this quoted message).
>>
>> I've never heard any report of a cluster using a single 1GbE link, so
>> maybe this speed is what I should expect? Next week I will be upgrading
>> the network to 2 x 10GbE (private and public), but I would like to know
>> whether there is an issue I need to address before then, as the problem
>> could be masked by the network upgrade.
>>
>> If anyone can shed some light, point me in a direction, or tell me "this
>> is what you should expect", I'd really appreciate it. If anyone needs more
>> info, please let me know.
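[Editor's note: for reference, a minimal sketch of the kind of single-job dd
test and read-ahead knobs mentioned above. Device names, mount points and
values are illustrative assumptions, not taken from this thread.]

    # Inside the guest: single-job sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/test/bigfile bs=4M count=1024 oflag=direct

    # Drop the guest page cache, then repeat as a sequential read
    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/test/bigfile of=/dev/null bs=4M iflag=direct

    # Guest-side block device read-ahead (16384 x 512-byte sectors = 8 MB)
    blockdev --setra 16384 /dev/vda

    # librbd read-ahead options for the [client] section of ceph.conf
    # (the option names exist in Luminous; these values are only examples)
    #   rbd readahead trigger requests = 10
    #   rbd readahead max bytes = 4194304
    #   rbd readahead disable after bytes = 0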
>
> Workarounds I have heard of or used:
>
> 1. Use fancy striping and parallelize that way (sketched after this list)
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017744.html
>
> 2. Use LVM and set up a striped volume over multiple RBDs (also sketched below)
>
> 3. Weird, but we have seen improvements in sequential speeds with a larger
> object size (16 MB) in the past
>
> 4. Caching solutions may help smooth out peaks and valleys of IO -
> bcache, flashcache; we have successfully used EnhanceIO in
> writethrough mode
>
> 5. Better SSD journals help if using filestore
>
> 6. Caching controllers, e.g. Areca
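
[Editor's note: for items 1 and 3 above, a minimal sketch of creating RBD
images with a larger object size and with "fancy" striping. The pool and
image names, sizes and stripe parameters are made-up examples; check
"rbd help create" on your release for the exact flags.]

    # Item 3: larger objects only (16 MB instead of the default 4 MB)
    rbd create rbd/bigobj-img --size 100G --object-size 16M

    # Item 1: stripe each object in 64 KB units across 16 objects, so a
    # single sequential reader keeps more OSDs busy at once
    rbd create rbd/striped-img --size 100G --stripe-unit 64K --stripe-count 16

[Editor's note: for item 2, a sketch of an LVM stripe across several mapped
RBD images. Image and volume-group names are assumptions, and older LVM
versions may need rbd devices whitelisted via the "types" setting in
lvm.conf.]

    # Map a few images with krbd (they appear as /dev/rbd0 .. /dev/rbd3)
    rbd map rbd/lv-part0
    rbd map rbd/lv-part1
    rbd map rbd/lv-part2
    rbd map rbd/lv-part3

    # Build a 4-way striped logical volume over the mapped devices
    # (-i = number of stripes, -I = stripe size in KB)
    pvcreate /dev/rbd0 /dev/rbd1 /dev/rbd2 /dev/rbd3
    vgcreate vg_rbd /dev/rbd0 /dev/rbd1 /dev/rbd2 /dev/rbd3
    lvcreate -n lv_stripe -i 4 -I 64 -l 100%FREE vg_rbd
    mkfs.xfs /dev/vg_rbd/lv_stripe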
>
> --
> Alex Gorbachev
> Storcium
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


