At 12:23 PM 1/9/2006, peter royal wrote:
On Jan 8, 2006, at 4:35 PM, Ron wrote:
Areca ARC-1220 8-port PCI-E controller
Make sure you have 1GB or 2GB of cache. Get the battery backup and
set the cache for write back rather than write through.
The card we've got doesn't have a SODIMM socket, since it's only an
8-port card. My understanding was that the cache is used when writing?
Trade in your 8 port ARC-1220 that doesn't support 1-2GB of cache for
a 12, 16, or 24 port Areca one that does. It's that important.
Present generation SATA2 HDs should average ~50MBps raw ASTR (average
sustained transfer rate). The Intel IOP333 I/O processor on the ARCs
is limited to 800MBps, so that's your limit per card. That's 16 SATA2
HDs operating in parallel (16 HD RAID 0, 17 HD RAID 5, 32 HD RAID 10).
Next generation 2.5" form factor 10Krpm SAS HDs due to retail in
2006 are supposed to average ~90MBps raw ASTR. Eight such HDs in
parallel per ARC-12xx will be the limit.
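The drive-count arithmetic above can be sketched as follows (a rough
back-of-the-envelope, assuming the ~800MBps IOP333 ceiling and the
per-drive ASTR figures quoted; sequential streaming, no overhead):

```python
# Rough sketch of the per-card drive-count math above.
# Assumes the ~800 MBps IOP333 ceiling and the quoted per-drive ASTR figures.

CARD_LIMIT_MBPS = 800  # Intel IOP333 ceiling per ARC-12xx card


def drives_saturating_card(drive_astr_mbps: float,
                           limit_mbps: float = CARD_LIMIT_MBPS) -> int:
    """Number of drives streaming in parallel that saturate one card."""
    return int(limit_mbps // drive_astr_mbps)


print(drives_saturating_card(50))  # current-gen ~50MBps SATA2 HDs -> 16
print(drives_saturating_card(90))  # next-gen ~90MBps 10Krpm SAS HDs -> 8
```

The RAID-5 and RAID-10 counts in the text then follow by adding parity
or mirror drives on top of the 16 data drives.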
Side note: the PCI-E x8 bus on the 12xx cards is good for ~1.6GBps
RWPB, so I expect Areca will at some point upgrade this controller's
bandwidth to at least 2x, if not 4x (which would require replacing the
x8 bus with a x16 bus).
A PCI-E x16 bus is good for ~3.2GBps RWPB, so if you have the slots,
4 such populated ARC cards will max out a PCI-E x16 bus.
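The same kind of arithmetic gives the cards-per-bus figure (again a
sketch using the ~3.2GBps bus and ~800MBps per-card numbers quoted
above, ignoring protocol overhead):

```python
# How many fully loaded ARC-12xx cards one PCI-E x16 bus can feed,
# using the bandwidth figures quoted above.

BUS_X16_MBPS = 3200     # ~3.2 GBps PCI-E x16 real-world payload bandwidth
CARD_LIMIT_MBPS = 800   # IOP333 ceiling per card

cards_per_x16_bus = BUS_X16_MBPS // CARD_LIMIT_MBPS
print(cards_per_x16_bus)  # -> 4
```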
In your shoes, I'd recommend replacing your 8 port ARC-1220 with a
12 port ARC-1230 with 1-2GB of battery backed cache, and planning to
get more of them as need arises.
A 2.6.12 or later based Linux distro should have NO problems using
more than 4GB of RAM.
We upgraded the kernel to 2.6.15; then we were able to set the BIOS
option for the 'Memory Hole' to 'Software' and it saw all 4GB (under
2.6.11 we got a kernel panic with that set).
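A quick way to confirm how much RAM the kernel actually sees after a
change like that is to read /proc/meminfo. A minimal sketch of the
parsing (the MemTotal field and kB units are the standard Linux
format; the function name is mine):

```python
# Minimal sketch: report how much RAM the kernel sees by parsing
# /proc/meminfo contents (MemTotal is reported in kB on Linux).

def mem_total_gb(meminfo_text: str) -> float:
    """Return MemTotal from /proc/meminfo contents, in GiB."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            kb = int(line.split()[1])
            return kb / (1024 * 1024)
    raise ValueError("MemTotal not found")


# Usage on a live box:
# with open("/proc/meminfo") as f:
#     print(mem_total_gb(f.read()))
```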
There are some other kernel tuning params that should help memory and
physical IO performance. Talk to a Linux kernel guru to get the
correct advice specific to your installation and application.
It should be noted that there are indications of some major
inefficiencies in pg's IO layer that make it compute bound under some
circumstances before it becomes IO bound. These may or may not cause
trouble for you as you keep pushing the envelope for maximum IO performance.
With the kind of work you are doing and we are describing, I'm sure
you can have a _very_ zippy system.
Ron