----- Original Message -----
> From: Graeme B. Bell <grb@xxxxxxxxxxxxxxxxx>
> To: "pgsql-admin@xxxxxxxxxxxxxx" <pgsql-admin@xxxxxxxxxxxxxx>
> Cc: "tsimon@xxxxxxxxxxx" <tsimon@xxxxxxxxxxx>
> Sent: Friday, 22 May 2015, 13:27
> Subject: Re: Performances issues with SSD volume ?
>
>> No, I had read some megacli-related docs about SSDs, and the advice was
>> to put write-through on the disks (see
>> http://wiki.mikejung.biz/LSI#Configure_LSI_Card_for_SSD_RAID, last section).
>> The disks are already in "No Write Cache if Bad BBU" mode (shown on a
>> split line in my extract).
>
> ====
>
> The advice in that link is for maximum performance, e.g. for file servers
> where people are dumping documents, temporary working space, and so on.
>
> It is not advice for DB systems, which have unique demands in terms of the
> persistence of data writes.
>
> For Postgres, if you use write-through (WT) with SSDs that have not been
> tested as having data-in-flight protection via capacitors, YOU WILL GET A
> CORRUPTED DB the first time the power is cut. It is quite likely you will
> not be able to recover that DB, except from backups. Depending on which
> version of Postgres you're using, the corruption could potentially affect
> your slave as well.

If the cache on the SSD isn't safe, then nothing you do elsewhere will protect the data. The only thing you could try is disabling the cache on the SSD itself, which would carry severe performance and longevity penalties, since every write would then have to hit a full erase block.

Regardless, in this conversation there's no need for doom: Thomas has said he's using Intel S3500s, which have supercaps, and from personal experience they are safe; it's news to me if they're not.

--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
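For reference, the cache settings being argued about above can be inspected and changed with MegaCli (for the LSI controller) and hdparm (for a plain SATA disk). A sketch of the relevant commands, assuming adapter 0; exact flag spellings vary a little between MegaCli versions, so check `MegaCli -h` on your box first:

```shell
# Show the current cache policy of all logical drives on adapter 0
MegaCli -LDGetProp -Cache -LAll -a0

# Force write-through on the controller (what the linked guide recommends
# for raw SSD throughput)
MegaCli -LDSetProp WT -LAll -a0

# "No Write Cache if Bad BBU": use write-back, but fall back to
# write-through if the battery fails
MegaCli -LDSetProp NoCachedBadBBU -LAll -a0

# Disable the drives' own on-board write cache (the "last resort" discussed
# above; expect a large performance and endurance hit on SSDs that lack
# power-loss protection)
MegaCli -LDSetProp -DisDskCache -LAll -a0

# On a non-RAID SATA disk, the drive write cache is toggled with hdparm
hdparm -W0 /dev/sdX   # disable; -W1 re-enables, plain -W queries
```

These commands only control where writes are cached; they cannot tell you whether a given SSD honours flushes under power loss, which is the point being made about supercap-backed drives like the S3500.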