On Thu, May 8, 2014 at 1:11 AM, Johann Spies <johann.spies@xxxxxxxxx> wrote:
> So my questions:
>
> 1. Will the SSDs in this case be worth the cost?
> 2. What will be the best way to utilize them in the system?

The best way to utilize them would probably be to spend less on the CPU and RAM and more on the storage, and use SSD either for all of the storage or for specific items that have a high level of I/O (such as the indexes). Can't be more specific than that without a lot more information about the database, how it is utilized, and what's actually slow.
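If you do end up with a mix of SSD and spinning disk, one common approach is to put a tablespace on the SSD and move just the hot objects there. Roughly like this (the paths and object names are made up, adjust for your setup):

CREATE TABLESPACE ssd_space LOCATION '/mnt/ssd/pg_tblspc';

-- Move an existing hot index onto it (this physically copies the
-- files and takes an exclusive lock while it does so):
ALTER INDEX some_hot_index SET TABLESPACE ssd_space;

-- Or build new indexes there directly:
CREATE INDEX some_new_idx ON some_table (some_col) TABLESPACE ssd_space;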
> I understand your remark about the CPU in the light of my wrong assumption earlier, but I do not understand your remark about the RAM. The fact that temporary files of up to 250GB are created at times during complex queries is, to me, an indication of too little RAM.
Are these PostgreSQL temp files or other temp files? PostgreSQL doesn't suppress the use of temp files just because you have a lot of RAM. You would also have to set work_mem to a very large setting, probably inappropriately large, and even that might not work because there are other limits on how much memory PostgreSQL can use for any given operation (for example, you can't sort more than 2**32 (or 2**31?) tuples in memory, no matter how much memory you have, and in older versions even fewer than that).

But that doesn't mean the RAM is not useful. The OS can use the RAM to buffer the temp files, so they might never actually reach the disk, or might not have to be read back from disk because they are still in memory.
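If you're not sure where the temp files are coming from, PostgreSQL keeps per-database counters, and log_temp_files will report each one as it's written. A quick sketch (the last query is just a made-up example):

-- Cumulative temp file counts and bytes, per database (9.2+):
SELECT datname, temp_files, temp_bytes FROM pg_stat_database;

-- Log every temp file over 1GB; value is in kB. SET needs superuser,
-- otherwise put it in postgresql.conf:
SET log_temp_files = 1048576;

-- For one query, look for "Sort Method: external merge  Disk:" in the plan:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_big_table ORDER BY some_col;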
SSD is probably wasted on temp files, as they are designed to be accessed mostly sequentially, and sequential I/O is the pattern where spinning disks are least disadvantaged relative to SSDs.
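If anything, you could deliberately steer the temp files onto the cheaper spinning disks and save the SSD for indexes and hot tables. For example (again, made-up names and paths):

CREATE TABLESPACE hdd_temp LOCATION '/mnt/hdd/pg_temp';

-- In postgresql.conf, or per session:
SET temp_tablespaces = 'hdd_temp';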
Cheers,
Jeff