Re: RAID6 r-m-w, op-journaled fs, SSDs

On 4/30/2011 4:50 PM, Michael Monnerie wrote:
On Samstag, 30. April 2011 Stan Hoeppner wrote:
Poor cache management, I'd guess, is one reason why you see Areca
RAID cards with 1-4GB cache DRAM, whereas competing cards w/ similar
price/performance/features from LSI, Adaptec, and others sport
512MB.

On one server (XenServer, virtualized with ~14 VMs running Linux) which
suffered from slow I/O on RAID-6 under heavy load, I upgraded the
cache from 1GB to 4GB on an Areca ARC-1260 controller (somewhat
outdated now), and couldn't see any advantage. Maybe it would have been
measurable, but the damn thing was still pretty slow, so adding more
hard disks is still a better option than upgrading the cache.

Just for documentation, in case someone sees slow I/O on Areca: more spindles
rock. That server had 8x 10krpm WD Raptor 150GB drives at the time.

As with CPUs, more cache can only take you so far. The benefit of a given cache size, locality (on/off chip), and caching algorithm is often highly workload dependent, and the same holds for RAID controller cache.

Adding controller cache can benefit some workloads, depending on the controller make/model, but I agree with you that adding spindles, or swapping to faster spindles (say 7.2k rpm to 15k rpm, or SSD), will typically benefit all workloads; a rough back-of-envelope sketch follows below. However, given that DIMMs are so cheap compared to hot-swap disks, maxing out the controller cache on models that have DIMM slots is an inexpensive first step when faced with an I/O bottleneck.
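To put very rough numbers on the spindles-vs-cache trade-off, here's a minimal Python sketch. It is purely illustrative: the per-drive IOPS figure is an assumed round number for a 10k rpm drive, and the RAID6 small-write penalty of ~6 disk I/Os per write is the textbook read-modify-write cost, not a measurement of any particular controller.

# Illustrative only: random-read IOPS scale roughly linearly with
# spindle count, while small random writes on RAID6 pay the
# read-modify-write penalty (~6 disk I/Os per write: read the data
# chunk and both parity chunks, then write all three back).
def raid6_random_iops(spindles, iops_per_drive=140):   # 140 is an assumed figure
    reads = spindles * iops_per_drive
    writes = spindles * iops_per_drive / 6.0            # RAID6 r-m-w penalty
    return reads, writes

for n in (8, 16):
    r, w = raid6_random_iops(n)
    print("%d spindles: ~%d random read IOPS, ~%d random write IOPS" % (n, r, w))

Doubling the spindle count roughly doubles both numbers regardless of workload, which is why extra disks tend to help across the board, whereas extra cache only helps when the working set or write burst actually fits in it.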

Larger controller cache seemed to have more positive impact on the SCSI RAID controllers of the mid/late 90s than on modern controllers. The difference between 8MB and 64MB was substantial with many workloads back then. On many modern SAS/SATA controllers the difference between 512MB and 1GB isn't nearly as profound, if measurable at all. The shared SCSI bus forced serialized transfers to all 15 drives on the bus, which would tend to explain why more cache made a big difference: it masked the bus latencies. SAS/SATA allows concurrent access to all drives simultaneously (assuming no expanders) without those SCSI bus latencies, which may explain why a larger RAID cache on today's controllers doesn't yield the benefits it did on previous-generation SCSI RAID cards. A crude illustration of the difference follows below.
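The following toy calculation (Python, illustrative only; the seek and transfer times are assumed round figures, not benchmarks) shows the kind of latency gap involved when servicing one random I/O on each of 15 drives.

# Illustrative only: on a shared parallel-SCSI bus the seeks overlap,
# but every data transfer must take its turn on the single bus; with
# point-to-point SAS/SATA links the transfers overlap as well.
DRIVES = 15
SEEK_MS = 8.0    # assumed average seek + rotational latency per drive
XFER_MS = 2.0    # assumed time one request's data occupies the link

shared_bus_ms = SEEK_MS + DRIVES * XFER_MS   # transfers serialized on the bus
point_to_point_ms = SEEK_MS + XFER_MS        # transfers proceed concurrently

print("Shared SCSI bus:     ~%d ms for one I/O per drive" % shared_bus_ms)
print("Point-to-point SAS:  ~%d ms for one I/O per drive" % point_to_point_ms)

The gap between those two figures is the bus-contention latency a large cache had to hide on the old SCSI cards; on SAS/SATA there is far less of it to hide, so extra cache buys correspondingly less.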

--
Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

