Re: how to judge the results? - rados bench comparison

25 Gbit/s doesn't have a significant latency advantage over 10 Gbit/s.

For reference: a point-to-point 10 Gbit/s fiber link takes around 300 ns
of processing for rx+tx on standard Intel X520 NICs (I've measured it),
so there's not much to save there.
Then there's serialization latency, which drops from 0.8 ns/byte to
0.32 ns/byte, i.e., for a small 4 KiB IO that's an advantage of only
about 2 µs (4096 bytes * 0.48 ns/byte).
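
If you want to sanity-check that figure, here's the back-of-the-envelope
math as a quick Python sketch (nothing Ceph-specific, just the wire rates
from above):

    # Serialization time: at N Gbit/s, one bit takes 1/N ns on the wire,
    # so a payload of B bytes takes B * 8 / N nanoseconds.
    def serialization_ns(size_bytes, gbit_per_s):
        return size_bytes * 8 / gbit_per_s

    io = 4096  # small 4 KiB IO
    t10 = serialization_ns(io, 10)  # ~3277 ns (0.8 ns/byte)
    t25 = serialization_ns(io, 25)  # ~1311 ns (0.32 ns/byte)
    print(f"saved per 4 KiB IO: {t10 - t25:.0f} ns")  # ~1966 ns, i.e. ~2 µs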

That's not really significant unless you run all your storage on
NVDIMMs or in RAM or something.
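
The same quick math covers the throughput side of the thread quoted below;
a rough sketch (the 250 MB/s per large spinner is Stefan's estimate, an
assumption rather than a measured number):

    # Aggregate throughput of 6 spinners vs. link capacity.
    disks = 6
    mb_per_disk = 250                      # assumed MB/s per large spinner
    gbit = disks * mb_per_disk * 8 / 1000  # MB/s -> Gbit/s
    print(f"{gbit:.1f} Gbit/s")            # 12.0: saturates 10G, fine on 25G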


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Apr 17, 2019 at 10:52 AM Christian Balzer <chibi@xxxxxxx> wrote:
>
> On Wed, 17 Apr 2019 10:39:10 +0200 Lars Täuber wrote:
>
> > Wed, 17 Apr 2019 09:52:29 +0200
> > Stefan Kooman <stefan@xxxxxx> ==> Lars Täuber <taeuber@xxxxxxx> :
> > > Quoting Lars Täuber (taeuber@xxxxxxx):
> > > > > I'd probably use only the 25G network for both (public and backend)
> > > > > instead of running both networks. Splitting the network usually doesn't help.
> > > >
> > > > This is something I was told to do, because reconstruction of failed
> > > > OSDs/disks would have a heavy impact on the backend network.
> > >
> > > Opinions vary on running "public" only versus "public" / "backend".
> > > Having a separate "backend" network might lead to difficult-to-debug
> > > issues when the "public" network is working fine but the "backend" is
> > > having issues and OSDs can't peer with each other while the clients can
> > > still talk to all OSDs. You will get slow requests, OSDs marking each
> > > other down while they are still running, etc.
> >
> > This I was not aware of.
> >
> Split networks are usually more trouble than they're worth and, as stated,
> only help when your OSD speeds exceed the network bandwidth _and_ you
> can't do CLAG (MLAG) bonding across switches that support it, gaining both
> additional bandwidth and redundancy.
>
> >
> > > In your case, with only 6 spinners max per server, there is no way you
> > > will ever fill the capacity of a 25 Gb/s network: 6 * 250 MB/s
> > > (for large spinners) is about 12 Gb/s, just enough to saturate a
> > > 10 Gb/s link. A redundant 2 x 25 Gb/s link would provide 50 Gb/s of
> > > bandwidth, enough for both OSD replication traffic and client IO.
> >
> > The reason for choosing the 25 GBit network was a remark by someone that the latency of this Ethernet is way below that of 10 GBit. I never double-checked this.
> >
> Correct, 25 Gb/s is a split of 100 Gb/s and inherits the latency advantages
> from it.
> So if you do a lot of small IOPS, this will help.
>
> But that only holds completely if everything is in the same boat.
>
> So if your clients (or at least most of them) can be on 25 Gb/s as well,
> that would be the best situation, with a non-split network.
>
> Christian
>
> >
> > >
> > > My 2 cents,
> > >
> > > Gr. Stefan
> > >
> >
> > Cheers,
> > Lars
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



