Re: Large Linux RAID System (lots of drives)

On 5/11/18 8:20 am, Wol's lists wrote:
> On 31/10/2018 12:12, Adam Goryachev wrote:
>> We won't have 10G ethernet here, just a single 1G ethernet. It is only our DR system, so crappy performance is not an issue for a few days or so while we source better/faster equipment to get back to a fully working/functional system.

Thank you for your reply, you do a lot of great work on this list ;)

> Let me get this right. You're buying expensive SSDs to populate a DR system? Is that really a good idea?

Nope, sorry, that wasn't what I said (or at least, not what I meant).

We need additional capacity for our primary SAN, so we are replacing the SSDs in both the primary and secondary SANs (each has 8 x 800GB SSDs) with new 1.9TB SSDs (5 in each). Therefore, we will have 16 x 800GB SSDs spare from the two old servers.
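
As a quick sanity check on those numbers (a rough sketch only; the RAID level isn't mentioned above, so RAID5 on both the old and new sets is just an assumption here):

def raid5_usable(disks, size_tb):
    """Usable capacity of a RAID5 set: one disk's worth lost to parity."""
    return (disks - 1) * size_tb

old_per_san = raid5_usable(8, 0.8)   # old layout: 8 x 800GB per SAN (assumed RAID5)
new_per_san = raid5_usable(5, 1.9)   # new layout: 5 x 1.9TB per SAN (assumed RAID5)

print(f"old usable per SAN: {old_per_san:.1f} TB")            # ~5.6 TB
print(f"new usable per SAN: {new_per_san:.1f} TB")            # ~7.6 TB
print(f"800GB SSDs freed from the two old servers: {2 * 8}")  # 16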


> I don't know the figures, but how many rotating rust disks would you need in a raid 6 to be able to read fast enough to saturate a 1G ethernet? Then look at how much of that workload is streaming new data to write, and how much is a dataset being actively modified?

The problem is that the workload is small random IO (i.e. multiple Windows VMs doing SQL Server, terminal server, mail server, etc. tasks). We tried spinning rust on the live servers previously and rapidly upgraded; currently, even the primary servers with all SSDs can get behind at times.
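
To put some rough numbers on the question above (all per-disk figures are assumptions for a typical 7200rpm drive, not measurements from this setup):

import math

# Back-of-envelope: how many spinning disks does it take to keep a
# 1GbE link busy, for sequential reads vs. 4KiB random reads?
# All per-disk figures are assumptions, not measured values.

GBE_MB_PER_S = 125            # ~1 Gbit/s expressed in MB/s, ignoring overhead
SEQ_MB_PER_S_PER_DISK = 150   # assumed sequential read rate per disk
RAND_IOPS_PER_DISK = 100      # assumed random read IOPS per disk
IO_SIZE_KB = 4                # small random IO, as described above

seq_disks = math.ceil(GBE_MB_PER_S / SEQ_MB_PER_S_PER_DISK)

rand_mb_per_s_per_disk = RAND_IOPS_PER_DISK * IO_SIZE_KB / 1024
rand_disks = math.ceil(GBE_MB_PER_S / rand_mb_per_s_per_disk)

print(f"sequential reads: {seq_disks} disk(s) to saturate 1GbE")   # 1
print(f"4KiB random reads: ~{rand_disks} disks to saturate 1GbE")  # ~320

On those assumptions a single disk streams fast enough to fill the link, but at 4KiB random IO you would need hundreds of spindles, which is roughly why rust didn't survive here.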

> I don't know what state it's in, but there's also the journal work that was meant, among other things, to "close the raid 5 write hole", but part of the idea behind that was also to enable sticking an SSD cache in front of a rotating disk back end to speed up the array.

I'm not ready to deploy new technology like this yet... The point of the DR is to be 100% sure it will work when we need it (i.e. we are already in the middle of a disaster, potentially without me or anyone else sufficiently technical to work through whatever minor issues crop up when you least expect it).
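
(For anyone who does want to look at it, a minimal sketch of checking whether an existing md array already has a write journal configured, assuming a kernel recent enough to expose the consistency_policy attribute in sysfs:)

import glob
import os

# Sketch: print each md array's consistency policy. On kernels that
# support it, this reads "journal" when a write-journal device is
# configured (the feature meant to close the RAID5 write hole).
for md in sorted(glob.glob("/sys/block/md*")):
    policy_path = os.path.join(md, "md", "consistency_policy")
    try:
        with open(policy_path) as f:
            policy = f.read().strip()
    except OSError:
        policy = "unknown (attribute not exposed by this kernel)"
    print(f"{os.path.basename(md)}: consistency_policy = {policy}")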

> It looks to me like your ethernet is already the bottleneck, and worrying about the disks is addressing completely the wrong problem.

TBH, I'm not that concerned about performance for the DR, but since we have the SSDs available/spare, it would be a waste not to use them. The alternative would be to sell them, but I'm not sure we would get enough money back, so it's better to just re-task them instead.

Regards,
Adam


--
Adam Goryachev
Website Managers
www.websitemanagers.com.au
--
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this e-mail by anyone else
is unauthorised. If you are not the intended recipient, any disclosure,
copying, distribution or any action taken or omitted to be taken in reliance
on it, is prohibited and may be unlawful. If you have received this message
in error, please notify us immediately. Please also destroy and delete the
message from your computer. Viruses - Any loss/damage incurred by receiving
this email is not the sender's responsibility.


