Hi,

Yes, I used fio. Here is the fio job file I used for the latency test:

[global]
ioengine=rbd
randrepeat=0
clientname=admin
rbdname=test2
invalidate=0    # mandatory
rw=write
bs=4k
direct=1
time_based=1
runtime=360
numjobs=1

[rbd_iodepth1]
iodepth=1

> -----Original Message-----
> From: Fulvio Galeazzi [mailto:fulvio.galeazzi@xxxxxxx]
> Sent: 14 November 2016 09:25
> To: nick@xxxxxxxxxx; 'Ceph Users' <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: Ceph Blog Articles
>
> Hallo Nick, very interesting reading, thanks!
> What are you using for measuring performance? Plain "fio" or something
> else? Would you be willing to attach the relevant part of the benchmark
> tool configuration to the article?
> Thanks!
>
>     Fulvio
>
> -------- Original Message --------
> Subject: Ceph Blog Articles
> From: Nick Fisk <nick@xxxxxxxxxx>
> To: 'Ceph Users' <ceph-users@xxxxxxxxxxxxxx>
> Date: 11/11/2016 08:33 PM
>
> > Hi All,
> >
> > I've recently put together some articles around some of the performance testing I have been doing.
> >
> > The first explores the high-level theory behind latency in a Ceph infrastructure and what we have managed to achieve.
> >
> > http://www.sys-pro.co.uk/ceph-write-latency/
> >
> > The second explores some of the results we got from trying to work out how much CPU a Ceph IO uses.
> >
> > http://www.sys-pro.co.uk/how-many-mhz-does-a-ceph-io-need/
> >
> > I hope they are of interest to someone.
> >
> > I'm currently working on a couple more explaining the choices behind the hardware that got us 700us write latency and what we finally built.
> >
> > Nick
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
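
A minimal sketch of how a job file like the one above can be run, assuming it is saved as rbd-latency.fio (a placeholder name, not from the original post), that fio is built with librbd support, and that ceph.conf plus the client.admin keyring are readable by the invoking user; depending on the fio version an explicit pool= option in [global] may also be required:

    # Run the job file on a client node that can reach the cluster:
    fio rbd-latency.fio
    # With iodepth=1 and bs=4k this issues one synchronous 4k write at a time,
    # so the average completion latency reported in fio's "clat" line is the
    # per-write latency figure discussed in the articles.
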