Re: Ceph Blog Articles

Hi Sascha,

Here is what I used:

[global]
ioengine=rbd
randrepeat=0
clientname=admin
pool=<poolname>
rbdname=test
invalidate=0    # mandatory
rw=write
bs=64k
direct=1
time_based=1
runtime=360
numjobs=1

[rbd_iodepth1]
iodepth=1
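For the 4k latency numbers you mention, a variant along these lines should do it; the job name and the write_lat_log prefix are just examples, not something from my original run:

```ini
[global]
ioengine=rbd
randrepeat=0
clientname=admin
pool=<poolname>
rbdname=test
invalidate=0    # mandatory
direct=1
time_based=1
runtime=360
numjobs=1

[rbd_4k_latency]
rw=randwrite
bs=4k
iodepth=1
write_lat_log=rbd_4k    # per-IO completion latency log, handy for spotting outliers
```

With iodepth=1 the latency log shows each IO's completion time, so a high stdev relative to the average usually points at cluster-side variance rather than the fio parameters.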

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Sascha Vogt
> Sent: 05 December 2016 14:08
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Ceph Blog Articles
> 
> Hi Nick,
> 
> thanks for sharing your results. Would you be able to share the fio args you used for benchmarking (especially the ones for the
> screenshot you shared in the write latency post)?
> 
> What I found is that when I do some 4k write benchmarks my lat stdev is much higher than the average (and the range between min and
> max is also wider). So I wondered if it's my parameters or the cluster.
> 
> Greetings
> -Sascha-
> 
> Am 11.11.2016 um 20:33 schrieb Nick Fisk:
> > Hi All,
> >
> > I've recently put together some articles around some of the performance testing I have been doing.
> >
> > The first explores the high level theory behind latency in a Ceph infrastructure and what we have managed to achieve.
> >
> > http://www.sys-pro.co.uk/ceph-write-latency/
> >
> > The second explores some of results we got from trying to work out how much CPU a Ceph IO uses.
> >
> > http://www.sys-pro.co.uk/how-many-mhz-does-a-ceph-io-need/
> >
> > I hope they are of interest to someone.
> >
> > I'm currently working on a couple more explaining the choices behind
> > the hardware that got us 700us write latency and what we finally built.
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
