Ross S. W. Walker wrote:
>> I've always wanted a dollars-to-dollars comparison instead of comparing
>> single components, and I've always thought that a bunch of RAM could
>> make up for slow disks in a lot of situations. Has anyone done any sort
>> of tests that would confirm whether a typical user would get better
>> performance from spending that several-hundred-dollar premium for SCSI
>> on additional RAM instead? Obviously this will depend to a certain
>> extent on the applications and how much having additional cache can
>> help them, but unless you are continuously writing new data, most
>> things can live in cache - especially for machines that run
>> continuously.
>
> RAM will never make up for it because users are always accessing files
> that are just outside of cache in size, especially if you have a lot
> of files open, and if the disks are slow then the cache will starve
> trying to keep up.

I'm not convinced 'never' is the right answer here, although you are of
course right that cache can't solve all problems. Most of the speed
issues I see on general-purpose machines are really from head contention,
where a hundred different applications and/or users each want the head
to be in a different place at the same time and end up waiting for each
other's seeks. If some large percentage of those requests can resolve
from cache, you speed up all the others. It's a hard thing to benchmark
in ways that match real-world use, though.
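
About the closest I can get is a quick sketch like the one below: point
a handful of threads at one big file doing random reads, roughly the
pattern that causes the head contention, and watch what happens as the
file size crosses what fits in RAM. The path, thread count and read
counts are all made-up knobs, not anything measured:

import os, time, random, threading

PATH = "/tmp/testfile"   # hypothetical: create a large file here first
THREADS = 8              # concurrent "users" all demanding seeks
READS = 1000             # random reads per thread
BLOCK = 4096             # one page per read

def worker(fd, size):
    for _ in range(READS):
        os.pread(fd, BLOCK, random.randrange(0, size - BLOCK))

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
threads = [threading.Thread(target=worker, args=(fd, size))
           for _ in range(THREADS)]
t0 = time.time()
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.time() - t0
print("%d random reads in %.2fs (%.0f reads/sec)"
      % (THREADS * READS, elapsed, THREADS * READS / elapsed))
os.close(fd)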

> Always strive to get the best quality for the dollar even if quality
> costs more, because poor performance always makes IT skills look bad.

This isn't really a quality issue; it's about the tradeoff between a
drive system that does somewhat better at actually handling lots of
concurrent seek requests vs. cache that avoids the need to do many of
those seeks at all. For the cases where the cache works, it will be
hundreds of times faster - where it doesn't, the slower drive might be
tens of times slower.
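
To put rough numbers on that, the back-of-the-envelope math looks like
this - the latencies are illustrative guesses, not benchmarks (~0.1 ms
for a page-cache hit, ~10 ms per seek on slow SATA, ~5 ms on faster
SCSI):

def effective_ms(hit_ratio, hit_ms, miss_ms):
    # average service time: hits come from cache, misses pay a seek
    return hit_ratio * hit_ms + (1 - hit_ratio) * miss_ms

print(effective_ms(0.9, 0.1, 10.0))  # big cache + SATA: 1.09 ms avg
print(effective_ms(0.5, 0.1, 5.0))   # small cache + SCSI: 2.55 ms avg

So a cache-heavy SATA box can come out ahead of a cache-starved SCSI
box once the hit ratio gets high enough - the whole question is what
hit ratio your workload actually sees.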

> Better to scale down a project and use quality components than to use
> lesser-quality components and end up with a solution that can't
> perform.

If you have an unlimited budget, you'd get a SCSI disk system with a
lot of independent heads _and_ load the box with RAM. If you don't,
you may have to choose one or the other.

> SATA is good for its size: data warehousing, document imaging, etc.
> SCSI/SAS is good for its performance: transactional systems, heavy
> multi-user file access, latency-sensitive data.

No argument there, but the biggest difference is in how well they deal
with concurrent seek requests. If you have to live with SATA due to
the cost difference, letting the OS have some RAM for its intelligent
cache mechanisms will sometimes help. I just wish there were some
benchmark test values that would help predict how much.
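
For a crude local data point, you can at least measure the hit/miss
gap on a given box: read a file cold and then warm and compare. A
sketch, assuming you drop the page cache between runs (on Linux,
echoing 3 into /proc/sys/vm/drop_caches as root):

import sys, time

def timed_read(path):
    t0 = time.time()
    with open(path, "rb") as f:
        while f.read(1 << 20):   # 1 MB chunks until EOF
            pass
    return time.time() - t0

print("%.3f seconds" % timed_read(sys.argv[1]))

Run it once right after dropping the cache (disk-bound), then again
immediately (cache-bound); the ratio is the best-case win the cache
can give you for that file.
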
--
Les Mikesell
lesmikesell@xxxxxxxxx