
Re: SSD reliability

On 05/05/11 03:31, David Boreham wrote:
On 5/4/2011 11:15 AM, Scott Ribe wrote:

Sigh... Step 2: paste link in ;-)

<http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html>

To be honest, like the article author, I'd be happy with 300+ days to
failure, IF the drives provide an accurate predictor of impending doom.
That is, if I can be notified "this drive will probably quit working in
30 days", then I'd arrange to cycle in a new drive.
The performance benefits vs rotating drives are for me worth this hassle.

OTOH if the drive says it is just fine and happy, then suddenly quits
working, that's bad.

Given the physical characteristics of the cell wear-out mechanism, I
think it should be possible to provide a reasonably accurate remaining-lifetime
estimate, but so far my attempts to read this information via
SMART have failed for the drives we have in use here.

In what way has the SMART read failed?
(I get the relevant values out successfully myself, and have Munin graph them.)
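For illustration, here is a minimal sketch of pulling the wear figure out of `smartctl -A` text output in Python. It assumes an Intel-style drive that exposes attribute 233 (Media_Wearout_Indicator); the attribute ID and name vary by vendor and firmware, so treat both as assumptions rather than a universal rule:

```python
import re

def parse_smart_attributes(smartctl_output):
    """Parse the attribute table from `smartctl -A` text output.

    Returns a dict mapping attribute ID -> (name, normalized value, raw value).
    """
    attrs = {}
    for line in smartctl_output.splitlines():
        # Attribute rows start with a numeric ID, then name, flag, value,
        # worst, threshold, type, updated, when-failed, and raw value.
        m = re.match(
            r'\s*(\d+)\s+(\S+)\s+\S+\s+(\d+)\s+\d+\s+\d+\s+\S+\s+\S+\s+\S+\s+(\S+)',
            line,
        )
        if m:
            attr_id, name, value, raw = m.groups()
            attrs[int(attr_id)] = (name, int(value), raw)
    return attrs

# Example against a captured smartctl line (Intel-style wear indicator,
# attribute 233); the normalized value counts down from ~100 toward 0:
sample = ("233 Media_Wearout_Indicator 0x0032   099   099   000"
          "    Old_age   Always       -       0")
wear = parse_smart_attributes(sample)
print(wear[233][1])  # -> 99
```

In practice you would feed it the output of `smartctl -A /dev/sdX` via subprocess, and a Munin plugin would just emit that normalized value each polling interval.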

FWIW I have a server with 481 days uptime, and 31 months operating that
has an el-cheapo SSD for its boot/OS drive.

Likewise, I have a server with a first-gen SSD (a 60 GB Kingston) that has been running constantly for over a year without any hiccups. It runs a few small websites and a few email lists, all of which interact with PostgreSQL databases. Lifetime writes to the disk are close to three-quarters of a terabyte, and despite its lack of TRIM support, the performance is still pretty good.
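A quick back-of-envelope check shows why that workload barely dents the drive. The endurance rating and write-amplification factor below are assumptions for that era's MLC flash, not figures from this thread:

```python
# Rough wear estimate for the 60 GB drive above. The rated P/E cycles
# (~10,000 for early MLC) and write amplification factor (2x) are
# hypothetical assumptions, not values reported by the drive.
capacity_gb = 60
lifetime_writes_gb = 750          # "close to three-quarters of a terabyte"
pe_cycles_rated = 10_000
write_amplification = 2.0

cycles_used = (lifetime_writes_gb * write_amplification) / capacity_gb
fraction_used = cycles_used / pe_cycles_rated
print(f"{cycles_used:.0f} average P/E cycles used, "
      f"{fraction_used:.2%} of rated endurance")
```

Under those assumptions the drive has consumed only about 25 average P/E cycles, a fraction of a percent of its endurance, which is consistent with it still running happily.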

I'm pretty happy!

I note that the comments on the blog post above include:

"I have shipped literally hundreds of Intel G1 and G2 SSDs to my customers and never had a single in the field failure (save for one drive in a laptop where the drive itself functioned fine but one of the contacts on the SATA connector was actually flaky, probably from vibrational damage from a lot of airplane flights, and one DOA drive). I think you just got unlucky there."

I do have to wonder if this Portman Wills guy was somehow Doing It Wrong to get a 100% failure rate over eight disks.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

