On 31/10/2018 12:12, Adam Goryachev wrote:
> We won't have 10G ethernet here, just a single 1G ethernet. It is only our DR system, so crappy performance is not an issue for a few days or so while we source better/faster equipment to get back to a fully working/functional system.
Let me get this right. You're buying expensive SSDs to populate a DR system? Is that really a good idea?
I don't know the figures offhand, but how many rotating rust disks would you need in a raid 6 to read fast enough to saturate a 1G ethernet? Then look at how much of that workload is streaming new data to write, and how much is an existing dataset being actively modified.
I don't know what state it's in, but there's also the md journal work that was meant, among other things, to close the "raid 5 write hole". Part of the idea behind it was also to let you stick an SSD cache in front of a rotating-disk back end to speed up the array.
I'd check out that journal, and see if just sticking one SSD in front of your rotating rust gives you a decent performance boost. If that on its own is enough to saturate your 1G link, there's no point trying to speed up the disk subsystem any further. Actually, I've just looked up SATA on Wikipedia: even SATA v1 delivers about 150MB/s after encoding overhead, while a 1G link tops out around 125MB/s. If you could stream directly from disk to ethernet, you wouldn't even stress SATA v1 to flood a 1G connection!
And looking at the specs for a Seagate IronWolf, they're apparently capable of about 200MB/s sustained streaming, which would equally flood a 1G link.
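For anyone who wants to sanity-check the arithmetic, here's a back-of-the-envelope sketch in Python using the round figures above (1G line rate, SATA v1's usable ~150MB/s, the IronWolf spec-sheet number); the exact values will vary with protocol overhead, but the conclusion doesn't:

```python
# Back-of-the-envelope bandwidth check: is the 1G link or the disk
# subsystem the bottleneck? Round figures from the discussion above.

GBIT = 1_000_000_000  # bits per second

# 1G ethernet: raw line rate in MB/s, ignoring framing overhead.
ethernet_1g_mbs = GBIT / 8 / 1_000_000  # 125 MB/s

# SATA v1: 1.5 Gbit/s line rate; 8b/10b encoding leaves ~150 MB/s usable.
sata_v1_mbs = 150

# Seagate IronWolf: ~200 MB/s sustained sequential (spec-sheet figure).
ironwolf_mbs = 200

print(f"1G ethernet:  {ethernet_1g_mbs:.0f} MB/s")
print(f"SATA v1:      {sata_v1_mbs} MB/s")
print(f"IronWolf HDD: {ironwolf_mbs} MB/s")

# Even the oldest SATA generation, and a single modern HDD streaming
# sequentially, both exceed what the network can carry.
assert sata_v1_mbs > ethernet_1g_mbs
assert ironwolf_mbs > ethernet_1g_mbs
```

So even one disk streaming sequentially outruns the wire, never mind a whole raid 6.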
It looks to me like your ethernet is already the bottleneck, and worrying about the disks is addressing completely the wrong problem.
Cheers, Wol