Matthew Wilcox wrote:
> On Mon, Apr 21, 2008 at 02:44:45PM -0400, Ric Wheeler wrote:
>> Turning the drive write cache off is the default case for most RAID
>> products (including our mid and high end arrays).
>> I have not seen an issue with drives wearing out with either setting
>> (cache disabled or enabled with barriers).
>> The theory does make some sense, but does not map into my experience ;-)
>
> To be fair though, the gigabytes of NVRAM on the array perform the job
> that the drive's cache would do on a lower-end system.
The population I deal with personally is a huge number of 1U Centera nodes, each
of which has 4 high capacity ATA or S-ATA drives (no NVRAM). We run with
barriers (and write cache) enabled and I have not seen anything that leads me to
think that this is an issue.
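
For anyone who wants to try the same kind of configuration, here is a minimal
sketch (not our actual setup - the device, mount point, and error handling are
placeholders) of mounting an ext3 filesystem with barriers enabled from C via
mount(2). "barrier=1" is the ext3 spelling of the option; other filesystems
name it differently, and the command-line equivalent is just
"mount -o barrier=1 <dev> <dir>":

	/* Minimal sketch: mount an ext3 filesystem with write barriers
	 * enabled.  Device and mount point are placeholders; must be run
	 * as root.  "barrier=1" is the ext3 option name - other
	 * filesystems spell it differently (reiserfs uses barrier=flush,
	 * for example).
	 */
	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		if (mount("/dev/sda1", "/mnt/data", "ext3", 0,
			  "barrier=1") != 0) {
			perror("mount");
			return 1;
		}
		printf("mounted /mnt/data with barrier=1\n");
		return 0;
	}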
One way to think about this is that even with barriers, relatively few
operations actually turn into cache flushes (fsyncs, journal commits, unmounts?).
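
To make that concrete, here is a minimal sketch of the kind of operation that
does end in a flush, assuming an ext3 filesystem mounted with barriers enabled.
The write() only dirties pages in memory; the fsync() is what forces the data
and a journal commit out, and the commit is what carries the barrier (drive
cache flush) down to the disk. The path is a placeholder:

	/* Minimal sketch: fsync() as one of the few operations that ends
	 * in a drive cache flush when barriers are enabled.  Path is a
	 * placeholder; error handling is minimal.
	 */
	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		const char buf[] = "important data\n";
		int fd = open("/mnt/data/important.log",
			      O_WRONLY | O_CREAT | O_APPEND, 0644);

		if (fd < 0)
			return 1;
		if (write(fd, buf, strlen(buf)) < 0)	/* dirties memory only */
			return 1;
		if (fsync(fd) < 0)	/* this is where the flush happens */
			return 1;
		close(fd);
		return 0;
	}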
Another thing to keep in mind is that drives are constantly writing and moving
heads anyway; disabling the write cache or doing a flush just adds an
incremental number of writes and head movements.
Using barriers or disabling the write cache matters only when you are running a
write-intensive load. Read-intensive loads are not impacted (and random,
cache-miss reads will move the heads often in any case).
I just don't see it being an issue for any normal user (laptop or desktop
user), since the write workload most people have is a small fraction of what we
run into in production data centers.
Running your drives in a moderate way will probably help them last longer, but I
am just not convinced that the write cache/barrier load makes much of a
difference...
ric