At 06:07 PM 4/14/2011, Radosław Smogura wrote:
One thing you should watch out for is the so-called
write endurance - the number of writes one memory
region can sustain before it wears out. If your SSD's
controller does not do transparent allocation, you
may destroy the drive really fast, because every
write of a given "block" lands in the same memory
segment; clog/xlog could wear it out after 10k-100k writes.
But if your SSD has transparent allocation, the
internal controller counts the writes to each
memory cell, and when that cell's lifetime nears
its end, it "associates" the block with a different
cell. With transparent allocation you usually need
not worry whether the filesystem uses journaling,
or whether you store logs or other frequently
updated data there. You can estimate the lifetime of your SSD with:
WritesToDestroyCells = "write_endurance" * "disk_size"
AvgLifeTime = WritesToDestroyCells / writes_per_sec
Those are big numbers even for simple disks:
10,000 cycles * 60GB means you would need to send
600TB of data to one SSD (not entirely accurate,
since you can't write a single byte, only full
blocks). Of course, to extend the SSD's lifetime you
should use a filesystem cache, or an SSD with its own cache, and turn off FS journaling.
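As a rough illustration of the two formulas above, here is a small back-of-envelope sketch; the 10 MB/s write rate is a hypothetical workload figure, not a spec of any real drive:

```python
# Back-of-envelope SSD lifetime estimate from the formulas above.
# All inputs are illustrative assumptions, not specs of a real drive.

def ssd_lifetime_seconds(write_endurance, disk_size_bytes, write_rate_bytes_per_sec):
    """Total bytes the drive can absorb before cells wear out,
    divided by the sustained write rate."""
    writes_to_destroy_cells = write_endurance * disk_size_bytes
    return writes_to_destroy_cells / write_rate_bytes_per_sec

GB = 10**9  # decimal gigabytes, as in the 60GB example
# Example from the post: 10,000 write cycles on a 60GB drive = 600TB total.
total_bytes = 10_000 * 60 * GB
# Hypothetical sustained 10 MB/s of WAL/clog traffic.
seconds = ssd_lifetime_seconds(10_000, 60 * GB, 10 * 10**6)
years = seconds / (3600 * 24 * 365)
print(f"total writable: {total_bytes / 10**12:.0f} TB, lifetime ~ {years:.1f} years")
```

Even a modest 60GB drive at a steady 10 MB/s works out to nearly two years of continuous writing, which is why wear-out is mostly a concern for heavily written areas like the transaction log.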
I'm not an expert on SSDs, but I believe modern
SSDs are supposed to automatically spread writes
across the entire disk where possible - even to
the extent of moving already-written data.
So if the drives are full or near full, the
tradeoff is between lower performance (because
you have to keep moving stuff about) or lower
lifespan (one area gets overused).
If the drives are mostly empty the SSD's
controller has an easier job - it doesn't have to move as much data around.
Regards,
Link.
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general