Re: high throughput storage server?

Wow, I can't believe the number of responses I've received to this
question.  I've been trying to digest it all.  I'm going to throw out
some follow-up comments as time allows, starting here...

On Tue, Feb 15, 2011 at 3:43 AM, David Brown <david@xxxxxxxxxxxxxxx> wrote:
> If you are not too bothered about write performance, I'd put a fair amount
> of the budget into ram rather than just disk performance.  When you've got
> the ram space to make sure small reads are mostly cached, the main
> bottleneck will be sequential reads - and big hard disks handle sequential
> reads as fast as expensive SSDs.

I could be wrong, but I'm not so sure RAM would be beneficial for our
case.  Our workload is virtually all reads; however, these are huge
reads.  The analysis programs basically do a full read of data files
that are generally pretty big: roughly 100 MB to 5 GB in the worst
case.  Average file size is maybe 500 MB (rough estimate).  And there
are hundreds of these files, all of which need "immediate" access.  So
caching these in RAM seems like it would take an awful lot of RAM.
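
To put a rough number on that (purely back-of-the-envelope, and
assuming "hundreds" means something like 500 files at the ~500 MB
average size):

  500 files * 500 MB/file = ~250 GB

and that's before counting the 5 GB worst-case files.  That's a lot of
RAM to dedicate to caching, which is why I suspect it isn't practical
for us.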

