RE: Correct RAID options

It is off topic for this list, but the one question I would ask is: what happens if you lose the new RAID5 setup? If you lose a single disk, the rebuild time on that amount of data is going to be enormous, and while the rebuild runs a second failure or an unrecoverable read error costs you the whole array. I hope you are at least using enterprise-level SATA drives. I would definitely be looking at RAID 6 for a solution like this.
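To put rough numbers on that risk (illustrative arithmetic only; the 100 MB/s rebuild rate and the 1-in-10^14 URE figure below are assumptions, not anything measured on your hardware):

# Why a 15 x 4TB RAID5 rebuild is scary, in rough numbers.
# Assumed figures: ~100 MB/s sustained rebuild rate, and the 1-in-1e14 bits
# unrecoverable read error (URE) rate commonly quoted for consumer SATA drives.
DRIVE_TB = 4
REBUILD_MBPS = 100           # assumed sustained rebuild throughput
URE_PER_BIT = 1e-14          # assumed unrecoverable read error rate
SURVIVING_DRIVES = 14        # all of them must be read fully to rebuild one disk

rebuild_hours = (DRIVE_TB * 1e12) / (REBUILD_MBPS * 1e6) / 3600
bits_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8
p_clean_rebuild = (1 - URE_PER_BIT) ** bits_read

print(f"rebuild of one 4TB drive: ~{rebuild_hours:.0f} hours at {REBUILD_MBPS} MB/s")
print(f"chance of reading {bits_read:.1e} bits with no URE: {p_clean_rebuild:.1%}")
# During that window RAID5 has no redundancy left; RAID6 still tolerates one
# more error, which is why I would go that way here.

On a busy array the real rebuild runs far slower than an idle-disk estimate, which only makes that window wider.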

As to more throughput, surely it would be better to expand outwards, i.e. add more servers to distribute the write workload?

You have not stated what the server specs are or how their processors are coping with the throughput requirements.

The other alternative, which sounds like it would be smarter in the long term, would be to look at a SAN solution. The Dell EqualLogic systems offer tiered storage for workloads exactly as you describe and can front-end the write process with SSD storage.

Craig

-----Original Message-----
From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Chris Knipe
Sent: Wednesday, 20 August 2014 4:39 AM
To: linux-raid@xxxxxxxxxxxxxxx
Subject: Correct RAID options

Hi All,

I'm sitting with a bit of a catch 22 and need some feedback / inputs please.
This isn't strictly md related as all servers have MegaRAID SAS controllers with BBUs and I am running hardware RAID, so my apologies for the off-topic posting, but the theory remains the same I presume. All the servers store millions of small (< 2 MB) files, in a structured directory layout to keep the number of files per directory in check.
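A simplified sketch of that kind of sharding (the two-level hash fan-out and the names below are purely illustrative, not our exact layout):

import hashlib
import os

def sharded_path(base_dir: str, file_name: str) -> str:
    # Hash the file name and fan out into two levels of subdirectories so no
    # single directory ends up holding millions of entries.
    digest = hashlib.sha1(file_name.encode("utf-8")).hexdigest()
    return os.path.join(base_dir, digest[:2], digest[2:4], file_name)

path = sharded_path("/srv/store", "example-file.bin")   # hypothetical names
os.makedirs(os.path.dirname(path), exist_ok=True)
print(path)   # something like /srv/store/<xx>/<yy>/example-file.bin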

Firstly, I have a bunch (3) of front-end servers, all configured in RAID10 and consisting of 8 x 4TB SATA III drives.  Up to now they have performed very well, with roughly 30% reads and 70% writes.  This is absolutely fine, as RAID10 does give much better write performance and we expect this.  I can't recall what the benchmarks said when I tested this many, many months ago, but it was good, and IO wait even under very heavy usage is minimal...

The problem now is that the servers are reaching their capacity and the arrays are starting to fill up.  Deleting files isn't really an option for me as I want to keep them as long as possible.  So, let's get a server to archive data on.

So, a new server, 15 x 4TB SATA III drives again, on a MegaRAID controller.
With the understanding that the "archives" will be read more than written to (we only write here once we move data off the RAID10 arrays), I opted for RAID5 instead; the higher spindle count surely should count for something.
Well.  The server was configured, the array initialised, and tests show more than 1 GB/s in write speed - faster than the RAID10 arrays.  I am pleased!

What's the problem?  Well, the front-end servers do an enormous amount of random reads/writes (30/70 split), 24x7.  Some 3 million files are added (written) per day, of which roughly 30% are read again.  So the majority of the IO activity is writing to disk.  With all the writing going on, there is effectively zero IO left for reading data: I can't read (or should we say "move") data off the server faster than it is being written.  The moment I start any significant amount of read requests, the IO wait jumps through the roof and the write speeds obviously also crawl to a halt.
I suspect this is due to the seek time on the spindles, which does make sense.  So there still isn't really any problem here that we don't already know about.
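To put rough numbers on the write load (illustrative arithmetic only; the IOs-per-create and burstiness figures are guesses, not measurements from these boxes):

FILES_PER_DAY = 3_000_000        # from above
READ_FRACTION = 0.30             # roughly 30% of files get read again
IOS_PER_CREATE = 6               # assumed: data block + inode + dir entry + journal
IOS_PER_READ = 2                 # assumed: lookup + data read
PEAK_TO_AVERAGE = 3              # assumed burstiness factor

avg_creates = FILES_PER_DAY / 86_400
write_iops = avg_creates * IOS_PER_CREATE * PEAK_TO_AVERAGE
read_iops = avg_creates * READ_FRACTION * IOS_PER_READ * PEAK_TO_AVERAGE
print(f"~{avg_creates:.0f} file creates/s on average")
print(f"peak demand: ~{write_iops:.0f} write IOPS + ~{read_iops:.0f} read IOPS")
# An 8-spindle array of 7.2k SATA drives only has a random-IO budget of a few
# hundred IOPS (see the sketch after the next paragraph), so a bulk read
# stream for archiving on top of this quickly becomes seek-bound.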

Now, I realise that this is a really, really open question in terms of interpretation, but which RAID levels with high spindle counts (say 8, 12 or 15 or so) will provide the best "overall", balanced read/write performance in terms of random IO?  I do not necessarily need blistering performance in terms of raw speed due to the small file sizes, but I do need blistering performance in terms of IOPS and random reads/writes...  All file systems are currently EXT4 and all RAID disks run with a 64K block size.
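For reference, the textbook write-penalty arithmetic I have been using to frame this (a sketch only; the ~75 random IOPS per 7.2k SATA spindle and the penalties of 2/4/6 for RAID10/5/6 are assumptions, and a controller with BBU cache will soften them):

SPINDLE_IOPS = 75            # assumed for a 7.2k RPM SATA drive
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}
READ_FRACTION, WRITE_FRACTION = 0.30, 0.70

def effective_iops(spindles: int, level: str) -> float:
    # Standard rule of thumb: raw spindle IOPS divided by the blended penalty
    # for the 30/70 read/write mix described above.
    raw = spindles * SPINDLE_IOPS
    return raw / (READ_FRACTION + WRITE_FRACTION * WRITE_PENALTY[level])

for spindles in (8, 12, 15):
    row = ", ".join(f"{lvl}: {effective_iops(spindles, lvl):4.0f}"
                    for lvl in WRITE_PENALTY)
    print(f"{spindles:2d} spindles -> {row}")
# The extra spindles in a RAID5/6 box do not make up for the read-modify-write
# penalty on small random writes, which matches what I am seeing.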

Many thanks, and once again my apologies for a theoretical rather than md-specific question.

--
Chris.









