Re: Strange krbd behaviour with queue depths

Hi Mark,

Sorry if I'm showing my ignorance here, but is there some sort of flag or tool that generates this from fio?

Nick

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Mark Nelson
Sent: 06 March 2015 15:06
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Strange krbd behaviour with queue depths

Interesting.  We've seen things like this on the librbd side in the past, but I don't think I've seen this kind of behavior in the kernel client.  What does the latency histogram look like when going from 1->2?
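
For example, dropping --gtod_reduce (it disables fio's latency measurement) makes fio print a "clat percentiles" table, and --write_lat_log captures per-IO latencies. A sketch based on the command from the original mail (the log prefix is made up):

fio --randrepeat=1 --ioengine=libaio --direct=1 \
    --name=test --filename=/dev/rbd/cache1/test2 --bs=4k \
    --readwrite=randread --iodepth=2 --runtime=10 --size=1g \
    --write_lat_log=test-qd2

This writes test-qd2_clat.log with per-IO completion latencies (exact log-file naming varies by fio version).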

Mark

On 03/06/2015 08:10 AM, Nick Fisk wrote:
> Just tried cfq, deadline, and noop; all show more or less identical
> results.
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf 
> Of Alexandre DERUMIER
> Sent: 06 March 2015 11:59
> To: Nick Fisk
> Cc: ceph-users
> Subject: Re:  Strange krbd behaviour with queue depths
>
> Hi, have you tried different I/O schedulers to compare?
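>
> As a sketch, the scheduler on a mapped krbd device can be inspected and switched at runtime via sysfs (assuming the image shows up as rbd0):
>
> cat /sys/block/rbd0/queue/scheduler              # current scheduler is shown in [brackets]
> echo deadline > /sys/block/rbd0/queue/scheduler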
>
>
> ----- Original Message -----
> From: "Nick Fisk" <nick@xxxxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Thursday, 5 March 2015 18:17:27
> Subject:  Strange krbd behaviour with queue depths
>
>
>
> I’m seeing strange queue-depth behaviour with a kernel-mapped RBD; librbd does not show this problem.
>
>
>
> The cluster comprises 4 nodes with 10GbE networking. I'm not listing the OSDs, as the test sample is small enough to fit in page cache.
>
>
>
> Running fio against a kernel-mapped RBD:
>
> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
>     --name=test --filename=/dev/rbd/cache1/test2 --bs=4k \
>     --readwrite=randread --iodepth=1 --runtime=10 --size=1g
>
> (--iodepth was varied from 1 to 128 between runs)
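>
> (The /dev/rbd/cache1/test2 path implies a prior kernel mapping roughly like the following; pool cache1 and image test2 are read off the device path:
>
> rbd map cache1/test2
>
> after which udev exposes the block device under /dev/rbd/<pool>/<image>.)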
>
>
>
> Queue Depth | IOPS
> ------------|-------
>           1 |   2021
>           2 |    288
>           4 |    376
>           8 |    601
>          16 |   1272
>          32 |   2467
>          64 |  16901
>         128 |  44060
>
>
>
> See how initially I get a very high number of IOPS at queue depth 1, but this drops dramatically as soon as I start increasing the queue depth; it's not until a depth of 32 that I get similar performance again. Incidentally, the oddity goes away when I change the read type from random to sequential.
>
>
>
> Running fio with the librbd engine and the same test options, I get the
> following:
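>
> (A sketch of the equivalent invocation with fio's rbd engine; the pool and image names follow the device path above, and the client name is an assumption:
>
> fio --randrepeat=1 --ioengine=rbd --clientname=admin \
>     --pool=cache1 --rbdname=test2 --name=test --bs=4k \
>     --readwrite=randread --iodepth=1 --runtime=10 --size=1g \
>     --gtod_reduce=1 )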
>
>
>
> Queue Depth | IOPS
> ------------|-------
>           1 |   1492
>           2 |   3232
>           4 |   7099
>           8 |  13875
>          16 |  18759
>          32 |  17998
>          64 |  18104
>         128 |  18589
>
>
>
>
>
> As you can see, performance scales up nicely, although the top end seems limited to around 18k IOPS. I don't know if this is due to kernel/userspace performance differences or whether librbd has a lower maximum queue depth.
>
>
>
> Both tests were run on a small sample size to force the OSD data into page cache and rule out any device latency.
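>
> (Cache residency on the OSD nodes can be sanity-checked with standard tools; a sketch:
>
> free -m                                   # "cached" grows after a warm-up read
> sync; echo 3 > /proc/sys/vm/drop_caches   # run as root to deliberately cold-start instead)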
>
>
>
> Does anyone know why kernel-mapped RBDs show this weird behaviour? I don't think it can be OSD/Ceph config related, as it only happens with krbd.
>
>
>
> Nick
>
>
>
>
> Nick Fisk
> Technical Support Engineer
>
> System Professional Ltd
> tel: 01825 830000
> mob: 07711377522
> fax: 01825 830001
> mail: Nick.Fisk@xxxxxxxxxxxxx
> web: www.sys-pro.co.uk
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com








