Re: Ceph Blog Articles

Hi Sascha,

Have you got any write-back caching enabled? That time looks very fast, almost too fast to me. It looks like some of the writes
completed in around 70us, which is about the same as a single hop of 10G networking, whereas you would have at least 2 hops
(Client->OSD1->OSD2).
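
As a rough sanity check (assuming the 64k block size you used below): just serialising one such write onto a 10G link already costs

    64 KiB * 8 bit/byte / 10 Gbit/s = 524288 bit / 10^10 bit/s ~= 52 us

so a ~70us completion leaves almost no budget for two hops plus OSD processing, unless something is acknowledging the write from a
cache.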

What are your write cache settings for qemu?
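
For reference, the two places I would look first (names assuming a libvirt + librbd setup, adjust to your environment):

    # libvirt domain XML, per-disk driver element
    <driver name='qemu' type='raw' cache='writeback'/>

    # ceph.conf on the hypervisor, [client] section
    rbd cache = true
    # safety net: behaves as writethrough until the guest issues its first flush
    rbd cache writethrough until flush = true

With cache=writeback and rbd cache = true, librbd acknowledges small writes once they sit in the client-side cache, which would
easily explain sub-100us completions.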

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Sascha Vogt
> Sent: 06 December 2016 12:14
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Ceph Blog Articles
> 
> Hi Nick,
> 
> thanks for the parameters. As you were kind enough to share them, I thought I'd share my results. I tested within a virtual
> machine with the kvm rbd driver and used the following command line:
> 
> > fio --name=fio-test --randrepeat=0 --invalidate=0 --rw=write --bs=64k
> > --direct=1 --time_based=1 --runtime=360 --iodepth=1 --numjobs=1
> 
> And got the following results:
> 
> >   write: io=126641MB, bw=360224KB/s, iops=5628, runt=360001msec
> >     clat (usec): min=74, max=227697, avg=172.33, stdev=661.52
> >      lat (usec): min=75, max=227698, avg=174.31, stdev=661.55
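> 
> As a rough consistency check: with iodepth=1 the achievable IOPS is bounded by 1 / average latency, i.e. 1 / 174us ~= 5750, which
> matches the reported 5628 iops, and 5628 * 64 KiB ~= 360000 KB/s matches the reported bandwidth, so the numbers are internally
> consistent.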
> 
> I find it interesting that my stdev is so much higher than my average.
> Maybe it's due to the cluster setup. We have 2x10 GbE shared between OpenStack data, Ceph client and Ceph data (separated via
> VLANs), all on a single switch (so no additional hops). Also, the pool we are effectively writing to (a flash-based cache pool in
> front of an HDD pool, but big enough not to experience any flushes / evictions during the test) is a 30 OSD / 15 NVMe disk, size 2
> pool (journal and data are on the same NVMes; each NVMe has 4 partitions, so no file-based journal, but a raw partition).
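> 
> (For anyone wanting to double-check that the tier really cannot flush mid-test: compare the amount of data fio writes against the
> cache tier thresholds, roughly like this (the pool name is just a placeholder):
> 
>   ceph df detail
>   ceph osd pool get nvme-cache target_max_bytes
>   ceph osd pool get nvme-cache cache_target_dirty_ratio
> 
> Flushing should only start once the dirty data approaches target_max_bytes * cache_target_dirty_ratio.)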
> 
> Greetings
> -Sascha-
> 
> On 05.12.2016 at 17:16, Nick Fisk wrote:
> > Hi Sascha,
> >
> > Here is what I used
> >
> > [global]
> > ioengine=rbd
> > randrepeat=0
> > clientname=admin
> > pool=<poolname>
> > rbdname=test
> > invalidate=0    # mandatory
> > rw=write
> > bs=64k
> > direct=1
> > time_based=1
> > runtime=360
> > numjobs=1
> >
> > [rbd_iodepth1]
> > iodepth=1
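> >
> > (Saved as e.g. rbd_write.fio and started with "fio rbd_write.fio"; the file name is just an example. It needs an fio build with
> > rbd support, a readable ceph.conf and client.admin keyring on the benchmark host, and the image "test" has to exist in the pool
> > beforehand, as the rbd engine will not create it.)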
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Sascha Vogt
> >> Sent: 05 December 2016 14:08
> >> To: ceph-users@xxxxxxxxxxxxxx
> >> Subject: Re:  Ceph Blog Articles
> >>
> >> Hi Nick,
> >>
> >> thanks for sharing your results. Would you be able to share the fio
> >> args you used for benchmarking (especially the ones for the screenshot you shared in the write latency post)?
> >>
> >> What I found is that when I do some 4k write benchmarks my lat stdev
> >> is much higher than the average (also a wider range between min and max). So I wondered whether it's my parameters or the cluster.
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


