Re: SSD + RAID

Greg Smith wrote:
> Karl Denninger wrote:
>> If power is "unexpectedly" removed from the system, this is true.  But
>> the caches on the SSD controllers are BUFFERS.  An operating system
>> crash does not disrupt the data in them or cause corruption.  An
>> unexpected disconnection of the power source from the drive (due to
>> unplugging it or a power supply failure for whatever reason) is a
>> different matter.
>>   
> As standard operating procedure, I regularly get something writing
> heavy to the database on hardware I'm suspicious of and power the box
> off hard.  If at any time I suffer database corruption from this, the
> hardware is unsuitable for database use; that should never happen. 
> This is what I mean when I say something meets the mythical
> "enterprise" quality.  Companies whose data is worth something can't
> operate in a situation where money has been exchanged because a
> database commit was recorded, only to lose that commit just because
> somebody tripped over the power cord and it was in the buffer rather
> than on permanent disk.  That's just not acceptable, and the even
> bigger danger that the database might not come back up at all after
> such a tiny disaster is also very real with a volatile write cache.
Yep.  The "plug test" is part of my standard "is this stable enough for
something I care about" checkout.
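
For anyone wanting to reproduce the plug test, here is a rough sketch of
one way to drive it with pgbench against a throwaway database (the
database name, scale factor, client count and data directory path are
all illustrative, not a recommendation):

    # generate a steady stream of committed write transactions
    createdb plugtest
    pgbench -i -s 100 plugtest
    pgbench -c 16 -T 600 plugtest &

    # ...yank the power cord mid-run, power the box back on, then:
    pg_ctl -D /path/to/data start       # must replay WAL and start cleanly
    psql plugtest -c "SELECT count(*), sum(abalance) FROM pgbench_accounts;"
    psql plugtest -c "VACUUM VERBOSE;"  # should finish with no errors

If the cluster won't start, or the sanity queries turn up corruption or
missing commits, something in the write path is lying about fsync and
the hardware flunks.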
>> With the write cache off on these disks they still are huge wins for
>> very-heavy-read applications, which many are.
> Very read-heavy applications would do better to buy a ton of RAM
> instead and just make sure they populate from permanent media (say by
> reading everything in early at sequential rates to prime the cache). 
> There is an extremely narrow use-case where SSDs are the right
> technology, and it's only in a subset even of read-heavy apps where
> they make sense.
I don't know about that in the general case - I'd say "it depends."

250GB of SSD for read-nearly-always applications is a LOT cheaper than
250GB of ECC'd DRAM.  The write-performance hit you take from disabling
the drive's volatile cache can be handled by clever use of controller
technology as well (that is, turn off the drive's on-board "write cache"
and let the BBU-protected cache on the RAID adapter absorb the writes).

I have a couple of applications where two 250GB SSD disks in a RAID 1
array on a BBU'd controller, with the drives' own write caches off, come
in all-in at a fraction of the cost of sticking 250GB of volatile storage
in a server and reading the data set in (plus managing the occasional
updates) from "stable storage."  It is not as fast as stuffing 250GB of
RAM in a machine, but it's a hell of a lot faster than a big array of
small conventional drives in a setup designed for maximum IOPS.
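
If you want to put numbers to the read side on your own hardware, a
select-only pgbench run against a data set larger than RAM is a quick
way to compare the SSD mirror with a spindle array (scale factor and
client count here are just placeholders):

    pgbench -i -s 1000 readtest        # roughly 15GB of data at this scale
    pgbench -S -c 16 -T 300 readtest   # select-only workload, compare TPS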

One caution for those thinking of doing this - the incremental
improvement from this setup on PostgreSQL in a write-heavy environment
isn't NEARLY as impressive.  Indeed, the performance in THAT case for
many workloads may be only 20 or 30% faster than even "reasonably
pedestrian" rotating media in a high-performance (lots of spindles and
thus stripes) configuration, and it's more expensive (by a lot).  If you
step up to fast SAS drives on the rotating side there's little argument
for the SSD at all (again, assuming you don't intend to "cheat" and risk
data loss).
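
The flip side is easy to measure with the same tool: the default pgbench
transaction mix is update-heavy, so running it (rather than the
select-only mode) on both kinds of storage shows how much of the SSD
advantage survives once every transaction has to reach stable storage;
parameters again are placeholders:

    pgbench -c 16 -T 300 readtest      # default TPC-B-like mix, mostly updates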

Know your application and benchmark it.

-- Karl