Re: What do the differences in osd benchmarks mean?

Hi Nathan,

Yes, the osd hosts are dual-socket machines. But does this make such a difference?

osd.0: bench: wrote 1 GiB in blocks of 4 MiB in 15.0133 sec at  68 MiB/sec 17 IOPS
osd.1: bench: wrote 1 GiB in blocks of 4 MiB in 6.98357 sec at 147 MiB/sec 36 IOPS

Double the IOPS?
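As a sanity check, the MiB/sec and IOPS figures follow directly from the elapsed time: 1 GiB written in 4 MiB blocks is 256 write ops. A quick sketch (plain Python, not part of ceph; the helper name is mine) to recompute them:

```python
# Recompute throughput and IOPS from the elapsed time reported by
# "ceph tell osd.* bench". With the defaults, 1 GiB is written in
# 4 MiB blocks, i.e. 256 write ops in total.
def bench_stats(seconds, total_mib=1024, block_mib=4):
    ops = total_mib // block_mib      # 256 ops for the default bench
    return total_mib / seconds, ops / seconds

for osd, secs in (("osd.0", 15.0133), ("osd.1", 6.98357)):
    mibs, iops = bench_stats(secs)
    # Values land near the numbers in the bench output; ceph's own
    # rounding may differ by +/-1 in the last digit.
    print(f"{osd}: {mibs:.0f} MiB/sec {iops:.0f} IOPS")
```

So the two rates are consistent with the elapsed times; the open question is only why the times themselves differ so much between identical osds.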

Thanks,
Lars

Thu, 27 Jun 2019 11:16:31 -0400
Nathan Fish <lordcirth@xxxxxxxxx> ==> Ceph Users <ceph-users@xxxxxxxxxxxxxx> :
> Are these dual-socket machines? Perhaps NUMA is involved?
> 
> On Thu., Jun. 27, 2019, 4:56 a.m. Lars Täuber, <taeuber@xxxxxxx> wrote:
> 
> > Hi!
> >
> > In our cluster I ran some benchmarks.
> > The results are always similar but strange to me.
> > I don't know what the results mean.
> > The cluster consists of 7 (nearly) identical hosts for osds. Two of them
> > have an additional hdd.
> > The hdds are of identical type. The ssds for the journal and wal are of
> > identical type. The configuration is identical (ssd-db-lv-size) for each
> > osd.
> > The hosts are connected the same way to the same switches.
> > This nautilus cluster was set up with ceph-ansible 4.0 on debian buster.
> >
> > These are the results of
> > # ceph --format plain tell osd.* bench
> >
> > osd.0:  bench: wrote 1 GiB in blocks of 4 MiB in 15.0133 sec at  68 MiB/sec 17 IOPS
> > osd.1:  bench: wrote 1 GiB in blocks of 4 MiB in 6.98357 sec at 147 MiB/sec 36 IOPS
> > osd.2:  bench: wrote 1 GiB in blocks of 4 MiB in 6.80336 sec at 151 MiB/sec 37 IOPS
> > osd.3:  bench: wrote 1 GiB in blocks of 4 MiB in 12.0813 sec at  85 MiB/sec 21 IOPS
> > osd.4:  bench: wrote 1 GiB in blocks of 4 MiB in 8.51311 sec at 120 MiB/sec 30 IOPS
> > osd.5:  bench: wrote 1 GiB in blocks of 4 MiB in 6.61376 sec at 155 MiB/sec 38 IOPS
> > osd.6:  bench: wrote 1 GiB in blocks of 4 MiB in 14.7478 sec at  69 MiB/sec 17 IOPS
> > osd.7:  bench: wrote 1 GiB in blocks of 4 MiB in 12.9266 sec at  79 MiB/sec 19 IOPS
> > osd.8:  bench: wrote 1 GiB in blocks of 4 MiB in 15.2513 sec at  67 MiB/sec 16 IOPS
> > osd.9:  bench: wrote 1 GiB in blocks of 4 MiB in 9.26225 sec at 111 MiB/sec 27 IOPS
> > osd.10: bench: wrote 1 GiB in blocks of 4 MiB in 13.6641 sec at  75 MiB/sec 18 IOPS
> > osd.11: bench: wrote 1 GiB in blocks of 4 MiB in 13.8943 sec at  74 MiB/sec 18 IOPS
> > osd.12: bench: wrote 1 GiB in blocks of 4 MiB in 13.235 sec at   77 MiB/sec 19 IOPS
> > osd.13: bench: wrote 1 GiB in blocks of 4 MiB in 10.4559 sec at  98 MiB/sec 24 IOPS
> > osd.14: bench: wrote 1 GiB in blocks of 4 MiB in 12.469 sec at   82 MiB/sec 20 IOPS
> > osd.15: bench: wrote 1 GiB in blocks of 4 MiB in 17.434 sec at   59 MiB/sec 14 IOPS
> > osd.16: bench: wrote 1 GiB in blocks of 4 MiB in 11.7184 sec at  87 MiB/sec 21 IOPS
> > osd.17: bench: wrote 1 GiB in blocks of 4 MiB in 12.8702 sec at  80 MiB/sec 19 IOPS
> > osd.18: bench: wrote 1 GiB in blocks of 4 MiB in 20.1894 sec at  51 MiB/sec 12 IOPS
> > osd.19: bench: wrote 1 GiB in blocks of 4 MiB in 9.60049 sec at 107 MiB/sec 26 IOPS
> > osd.20: bench: wrote 1 GiB in blocks of 4 MiB in 15.0613 sec at  68 MiB/sec 16 IOPS
> > osd.21: bench: wrote 1 GiB in blocks of 4 MiB in 17.6074 sec at  58 MiB/sec 14 IOPS
> > osd.22: bench: wrote 1 GiB in blocks of 4 MiB in 16.39 sec at    62 MiB/sec 15 IOPS
> > osd.23: bench: wrote 1 GiB in blocks of 4 MiB in 15.2747 sec at  67 MiB/sec 16 IOPS
> > osd.24: bench: wrote 1 GiB in blocks of 4 MiB in 10.2462 sec at 100 MiB/sec 24 IOPS
> > osd.25: bench: wrote 1 GiB in blocks of 4 MiB in 13.5297 sec at  76 MiB/sec 18 IOPS
> > osd.26: bench: wrote 1 GiB in blocks of 4 MiB in 7.46824 sec at 137 MiB/sec 34 IOPS
> > osd.27: bench: wrote 1 GiB in blocks of 4 MiB in 11.2216 sec at  91 MiB/sec 22 IOPS
> > osd.28: bench: wrote 1 GiB in blocks of 4 MiB in 16.6205 sec at  62 MiB/sec 15 IOPS
> > osd.29: bench: wrote 1 GiB in blocks of 4 MiB in 10.1477 sec at 101 MiB/sec 25 IOPS
> >
> >
> > Repeated runs differ by only ±1 IOPS.
> > Why are the osds 1, 2, 4, 5, 9, 19 and 26 faster than the others?
> >
> > Restarting an osd did change the result.
> >
> > Could someone give me a hint where to look further to find the reason?
> >
> > Thanks
> > Lars


-- 
                            Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstraße 22-23                      10117 Berlin
Tel.: +49 30 20370-352           http://www.bbaw.de
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com