On Wed, 17 Apr 2019 11:22:08 +0200 Lars Täuber wrote:

> Wed, 17 Apr 2019 10:47:32 +0200
> Paul Emmerich <paul.emmerich@xxxxxxxx> ==> Lars Täuber <taeuber@xxxxxxx> :
> > The standard argument that it helps prevent recovery traffic from
> > clogging the network and impacting client traffic is misleading:
>
> What do you mean by "it"? I don't know the standard argument.
> Do you mean separating the networks or do you mean having both together
> in one switched network?
>
He means separated networks, obviously.

> >
> > * write client traffic relies on the backend network for replication
> > operations: your client (write) traffic is impacted anyways if the
> > backend network is full
>
> This I understand as an argument for separating the networks and the
> backend network being faster than the frontend network.
> So in case of reconstruction there should be some bandwidth left in the
> backend for the traffic that is used for the client IO.
>
You need to run the numbers and look at the big picture.
As mentioned already, this is all moot in your case.

6 HDDs at realistically 150 MB/s each, if they were all doing sequential
I/O, which they aren't. But for the sake of argument let's say that one of
your nodes can read (or write, not both at the same time) 900 MB/s.

That's still less than half of a single 25 Gb/s link. And that very
hypothetical data rate (it's not sequential, you will have concurrent
operations and thus seeks) is all your node can handle. If it is all going
into recovery/rebalancing, your clients are starved because of that, not
because of bandwidth exhaustion.

> > * you are usually not limited by network speed for recovery (except
> > for 1 Gbit networks), and if you are you probably want to reduce
> > recovery speed anyways if you would run into that limit
> >
> > Paul
>
> Lars

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Rakuten Communications
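
For anyone who wants to redo the back-of-the-envelope math above, here is a
minimal sketch, assuming the figures from this thread (6 HDDs at ~150 MB/s
each, one 25 Gb/s NIC) and treating the link as raw line rate with no
protocol overhead:

    # Back-of-the-envelope check of the node-vs-link bandwidth argument.
    # Assumptions (from the thread): 6 HDDs at ~150 MB/s sequential each,
    # a single 25 Gb/s NIC taken at raw line rate.

    NUM_HDDS = 6
    HDD_MB_PER_S = 150                       # optimistic sequential rate per disk
    LINK_GBIT = 25                           # single NIC

    node_mb_per_s = NUM_HDDS * HDD_MB_PER_S  # 900 MB/s, best case
    link_mb_per_s = LINK_GBIT * 1000 / 8     # ~3125 MB/s, ignoring overhead

    print(f"node best case : {node_mb_per_s} MB/s")
    print(f"25 Gb/s link   : {link_mb_per_s:.0f} MB/s")
    print(f"node uses {node_mb_per_s / link_mb_per_s:.0%} of one link")
    # -> roughly 29%, i.e. well under half of a single 25 Gb/s link,
    #    so the disks, not the network, are the bottleneck here.

Even with the generous sequential-I/O assumption, the node saturates its
disks long before it saturates the link, which is the point being made
above.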