Re: high throughput storage server?

On Fri, 18 Feb 2011, Joe Landman wrote:

> On 02/18/2011 08:49 AM, Mattias Wadenstein wrote:
>> [...]
2U machines with 12 3.5" or 16-24 2.5" hdd slots can be gotten pretty
cheaply. Add a quad-gige card if your load can get decent sequential
load, or look at fast/ssd 2.5" drives if you are mostly short random
reads. Then add as many as you need to sustain the analysis speed you
need. The advantage here is that this is really scalable, if you double
the number of servers you get at least twice the IO capacity.

>> Oh, yet another setup I've seen is adding some (2-4) fast disks to
>> each of the analysis machines and then running a distributed,
>> replicated filesystem like Hadoop over them.

> Ugh ... short-stroking drives or using SSDs? Quite cost-inefficient
> for this work. And given the HPC nature of the problem, it's probably
> a good idea to aim for something more cost-efficient.

Or just regular, fairly slow SATA drives. The advantage is that it is
really cheap to get to 100-200 spindles this way, so you might not need
very fast disks. It depends on your IO pattern, but for LHC data
analysis this has been shown to be surprisingly fast.
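
To put rough numbers on the scaling argument, here is a back-of-envelope
sketch in Python. The per-spindle and per-link figures are assumptions
(roughly 100 MB/s sustained sequential per 7200 rpm SATA drive, roughly
120 MB/s usable per GigE link), not measurements from this thread:

# Back-of-envelope throughput estimate for the setup discussed above.
# All figures are assumptions, not measurements:
#   ~100 MB/s sustained sequential per 7200 rpm SATA spindle
#   ~120 MB/s usable per GigE link (~480 MB/s for a quad-GigE card)

MB_PER_SPINDLE = 100   # assumed sequential MB/s per SATA drive
MB_PER_GIGE = 120      # assumed usable MB/s per GigE link

def server_throughput(spindles, gige_links):
    """Per-server sequential throughput: whichever of the disk
    array or the network links is slower is the bottleneck."""
    disk_bw = spindles * MB_PER_SPINDLE
    net_bw = gige_links * MB_PER_GIGE
    return min(disk_bw, net_bw)

# One 12-slot 2U box with a quad-GigE card: the disks can stream
# ~1200 MB/s, but the network caps it at ~480 MB/s.
print(server_throughput(12, 4))            # -> 480

# Scaling out: 13 such boxes gives ~156 spindles (in the 100-200
# range above), and since each box brings its own NICs, aggregate
# bandwidth grows linearly with server count.
servers = 13
print(servers * server_throughput(12, 4))  # -> 6240 MB/s aggregate

Even with the GigE cap per box, the aggregate grows linearly with the
number of servers, which is the scalability point made above.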

/Mattias Wadenstein

