Re: Ceph performance pattern


 



I am using O_DIRECT=1

-----Original Message-----
From: Mark Nelson [mailto:mnelson@xxxxxxxxxx] 
Sent: Wednesday, July 27, 2016 8:33 AM
To: EP Komarla <Ep.Komarla@xxxxxxxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Ceph performance pattern

Ok.  Are you using O_DIRECT?  That will disable readahead on the client, but if you don't use O_DIRECT you won't get the benefit of iodepth=16. 
See fio's man page:

"Number of I/O units to keep in flight against the file. Note that increasing iodepth beyond 1 will not affect synchronous ioengines (except for small degress when verify_async is in use). Even async engines my impose OS restrictions causing the desired depth not to be achieved. This may happen on Linux when using libaio and not setting direct=1, since buffered IO is not async on that OS. Keep an eye on the IO depth distribution in the fio output to verify that the achieved depth is as expected. Default: 1."

I.e., how you are testing can really affect the ability to do client-side readahead and how much client-side concurrency you are actually getting.
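
For reference, a minimal async sequential-read job along those lines might look like the one below. This is only a sketch: the device path, block size and runtime are illustrative assumptions, not values from your test.

[seq-read]
# libaio only delivers real queue depth when combined with O_DIRECT (direct=1)
ioengine=libaio
direct=1
rw=read
bs=128k
iodepth=16
runtime=600
time_based
# a krbd-mapped image; purely illustrative, adjust to your environment
filename=/dev/rbd0

With direct=1 in place, the "IO depths" histogram in fio's output should show most IO completing at depth 16 rather than 1.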

Mark

On 07/27/2016 10:14 AM, EP Komarla wrote:
> I am using aio engine in fio.
>
> Fio is working on rbd images
>
> - epk
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf 
> Of Mark Nelson
> Sent: Tuesday, July 26, 2016 6:27 PM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Ceph performance pattern
>
> Hi epk,
>
> Which ioengine are you using?  If it's librbd, you might try playing with librbd readahead as well:
>
> # don't disable readahead after a certain number of bytes
> rbd readahead disable after bytes = 0
>
> # Set the librbd readahead to whatever:
> rbd readahead max bytes = 4194304
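>
> As a sketch of where those settings would live, assuming a plain ceph.conf
> on the client node (the section layout below is an assumption, not taken
> from your cluster):
>
> [client]
> # librbd readahead is served out of the librbd cache, so keep caching on
> rbd cache = true
> rbd readahead disable after bytes = 0
> rbd readahead max bytes = 4194304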
>
> If it's with kvm+guests, you may be better off playing with the guest readahead, but you can try the librbd readahead if you want.
>
> Another thing to watch out for is fragmentation.  btrfs OSDs for example will fragment terribly after small random writes to RBD images due to how copy-on-write works.  That can cause havoc with RBD sequential reads in general.
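>
> A rough way to spot-check that on an OSD host (the path below is the default
> filestore data directory and is only an assumption about your layout):
>
>   find /var/lib/ceph/osd/ceph-0/current -type f -exec filefrag {} + 2>/dev/null | sort -t: -k2 -n | tail
>
> Object files that consistently show hundreds of extents are a good hint that
> sequential reads are turning into random IO on the disks.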
>
> Mark
>
>
> On 07/26/2016 06:38 PM, EP Komarla wrote:
>> Hi,
>>
>>
>>
>> I am showing below fio results for Sequential Read on my Ceph cluster.
>> I am trying to understand this pattern:
>>
>>
>>
>> - Why is there a dip in performance for block sizes 32k-256k?
>>
>> - Is this an expected performance graph?
>>
>> - Have you seen this kind of pattern before?
>>
>>
>>
>>
>>
>> My cluster details:
>>
>> Ceph: Hammer release
>>
>> Cluster: 6 nodes (dual Intel sockets), each with 20 OSDs and 4 SSDs (5 OSD journals per SSD)
>>
>> Client network: 10Gbps
>>
>> Cluster network: 10Gbps
>>
>> FIO test:
>>
>> - 2 Client servers
>>
>> - Sequential Read
>>
>> - Run time of 600 seconds
>>
>> - Filesize = 1TB
>>
>> - 10 rbd images per client
>>
>> - Queue depth=16
>>
>>
>>
>> Any ideas on tuning this cluster?  Where should I look first?
>>
>>
>>
>> Thanks,
>>
>>
>>
>> - epk
>>
>>
>>
>>

Legal Disclaimer:
The information contained in this message may be privileged and confidential. It is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and delete or destroy any copy of this message!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


