Re: How to improve single thread sequential reads?

Have you tried setting read_ahead_kb to a bigger number on both the client and OSD side if you are using krbd?
In case of librbd, try the different config options for the RBD cache.
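
For example, on the client side with krbd (assuming the image maps as /dev/rbd0; the device name will vary):

    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb

And for librbd, something like this in the [client] section of ceph.conf (illustrative values, not tuned recommendations):

    [client]
    rbd cache = true
    rbd cache size = 67108864          # 64 MB cache (example value)
    rbd readahead max bytes = 4194304  # 4 MB readahead (example value)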

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Alex Gorbachev
Sent: Sunday, August 16, 2015 7:07 PM
To: Nick Fisk
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  How to improve single thread sequential reads?

Hi Nick,

On Thu, Aug 13, 2015 at 4:37 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
>> Of Nick Fisk
>> Sent: 13 August 2015 18:04
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject:  How to improve single thread sequential reads?
>>
>> Hi,
>>
>> I'm trying to use an RBD as a staging area for some data before
>> pushing it down to some LTO6 tapes. As I cannot use striping with the
>> kernel client, I tend to max out at around 80MB/s on reads when
>> testing with dd. Has anyone got any clever suggestions for giving
>> this a bit of a boost? I think I need to get it up to around 200MB/s
>> to make sure there is always a steady flow of data to the tape drive.
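>>
>> For reference, the dd test is roughly this (device path and sizes are
>> illustrative):
>>
>>     dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct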
>
> I've just tried the testing kernel with the blk-mq fixes in it for
> full-size IOs; this, combined with bumping readahead up to 4MB, is now
> getting me 150MB/s to 200MB/s on average, so this might suffice.
>
> Out of personal interest, I would still like to know if anyone has
> ideas on how to push much higher bandwidth through an RBD.

Some settings in our ceph.conf that may help:

osd_op_threads = 20
osd_mount_options_xfs = rw,noatime,inode64,logbsize=256k
filestore_queue_max_ops = 90000
filestore_flusher = false
filestore_max_sync_interval = 10
filestore_sync_flush = false
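
Most of these can also be injected into a running cluster for testing,
for example:

    ceph tell osd.* injectargs '--filestore_max_sync_interval 10'

though some options only take effect after an OSD restart, and values
like these should be validated on your own hardware.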

Regards,
Alex

>
>>
>> rbd-fuse seems to top out at 12MB/s, so there goes that option.
>>
>> I'm thinking that mapping multiple RBDs and then combining them into
>> an mdadm RAID0 stripe might work, but it seems a bit messy.
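>>
>> Roughly something like this (hypothetical image names and device
>> paths):
>>
>>     rbd map rbd/stripe1
>>     rbd map rbd/stripe2
>>     mdadm --create /dev/md0 --level=0 --raid-devices=2 \
>>         --chunk=4096 /dev/rbd0 /dev/rbd1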
>>
>> Any suggestions?
>>
>> Thanks,
>> Nick
>>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


