Re: how to judge the results? - rados bench comparison

The standard argument that it helps prevent recovery traffic from
clogging the network and impacting client traffic is misleading:

* write client traffic relies on the backend network for replication
operations: your client (write) traffic is impacted anyway if the
backend network is full
* recovery is usually not limited by network speed (except on 1 Gbit
networks), and if it is, you probably want to throttle recovery anyway
rather than run into that limit; see the rough sketch below
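
A rough sketch of that reasoning (all numbers are illustrative
assumptions, not measurements; the actual throttles in Ceph are
options like osd_max_backfills and osd_recovery_sleep):

    # Rough model: recovery of one failed OSD, throttled to roughly the
    # sequential throughput of a single spinner. Assumed figures only.
    failed_osd_tb = 10            # data to re-replicate (assumption)
    recovery_mb_s = 250           # throttled recovery rate (assumption)

    recovery_gbit_s = recovery_mb_s * 8 / 1000   # ~2 Gb/s on the wire
    hours = failed_osd_tb * 1e6 / recovery_mb_s / 3600

    print(f"recovery stream ~{recovery_gbit_s:.0f} Gb/s: "
          "saturates 1 Gb/s, a small fraction of 25 Gb/s")
    print(f"~{hours:.0f} h to refill {failed_osd_tb} TB at that rate")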

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Apr 17, 2019 at 10:39 AM Lars Täuber <taeuber@xxxxxxx> wrote:
>
> Wed, 17 Apr 2019 09:52:29 +0200
> Stefan Kooman <stefan@xxxxxx> ==> Lars Täuber <taeuber@xxxxxxx> :
> > Quoting Lars Täuber (taeuber@xxxxxxx):
> > > > I'd probably only use the 25G network for both networks instead of
> > > > using both. Splitting the network usually doesn't help.
> > >
> > > This is something I was told to do, because reconstruction of failed
> > > OSDs/disks would have a heavy impact on the backend network.
> >
> > Opinions vary on running "public" only versus "public" / "backend".
> > Having a separate "backend" network can lead to difficult-to-debug
> > issues when the "public" network is working fine but the "backend" is
> > having trouble: OSDs can't peer with each other, while the clients can
> > talk to all OSDs. You will get slow requests, OSDs marking each other
> > down while they are still running, and so on.
>
> I was not aware of this.
>
>
> > In your case, with only 6 spinners max per server, there is no way you
> > will ever fill the capacity of a 25 Gb/s network: 6 * 250 MB/s
> > (for large spinners) is just about enough to fill a 10 Gb/s link. A
> > redundant 25 Gb/s link would provide 50 Gb/s of bandwidth, enough for
> > both OSD replication traffic and client IO.
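
Working that arithmetic out as a quick sanity check (the 250 MB/s
per-spinner figure is the assumption above):

    # Per-host disk throughput vs. link capacity, per the estimate above.
    osds_per_host = 6
    mb_s_per_osd = 250                       # large spinner, sequential
    total_gbit_s = osds_per_host * mb_s_per_osd * 8 / 1000   # = 12 Gb/s

    for link_gbit in (10, 25, 50):
        status = "saturated" if total_gbit_s > link_gbit else "headroom"
        print(f"{link_gbit:>2} Gb/s link vs "
              f"{total_gbit_s:.0f} Gb/s of disks: {status}")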
>
> The reason for choosing the 25 GBit network was a remark from someone
> that the latency of this Ethernet is far below that of 10 GBit. I never
> double-checked this.
>
>
> >
> > My 2 cents,
> >
> > Gr. Stefan
> >
>
> Cheers,
> Lars



