Hi,
In my current project we have unusual (at least for me) conditions for a relational DB, namely:
* a high write/read ratio (writes come from bulk data updates/inserts every couple of minutes or so)
* losing some recent data (the last hour, for example) is OK; it can easily be restored
The first version of the app used plain UPDATEs and became too slow, so it was replaced with version two, which uses partitions, TRUNCATE, COPY, and a daily cleanup of old data. It works reasonably fast with the current amount of data, but that amount will grow, so I'm looking for possible optimisations.
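For context, the current scheme looks roughly like this (table and partition names are invented just for illustration):

    -- Table range-partitioned by day, so old data can be dropped
    -- with a cheap TRUNCATE instead of DELETE + VACUUM.
    CREATE TABLE measurements (
        ts      timestamptz NOT NULL,
        item_id bigint      NOT NULL,
        value   double precision
    ) PARTITION BY RANGE (ts);

    CREATE TABLE measurements_2024_06_01
        PARTITION OF measurements
        FOR VALUES FROM ('2024-06-01') TO ('2024-06-02');

    -- Bulk batches arrive every couple of minutes via COPY:
    COPY measurements FROM '/path/to/batch.csv' WITH (FORMAT csv);

    -- Daily cleanup of an old partition:
    TRUNCATE measurements_2024_06_01;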
The main idea I have (apart from switching to some non-relational DB) is to tell Postgres to do more work in memory and to rely on fsync and other durability mechanisms less.
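Concretely, I mean something like the sketch below (the values are just examples). As I understand it, synchronous_commit = off only loses the most recent commits on a crash, while fsync = off risks corrupting the whole cluster, so the latter is only an option because our data can be rebuilt:

    # postgresql.conf -- trade durability for write throughput
    synchronous_commit = off   # commit returns before the WAL is flushed;
                               # a crash loses only the latest transactions
    checkpoint_timeout = 30min # fewer checkpoints, less write amplification
    max_wal_size = 8GB         # likewise, spaces checkpoints further apart
    # fsync = off              # much riskier: a crash can corrupt the whole
                               # cluster rather than just lose recent data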
For example, I have an idea to set up an in-memory tablespace and put the partition holding that data in it. The main problem here is that the total amount of data is big, and only a part of it is updated really frequently.
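Another variant I'm considering, which keeps everything inside Postgres, is to make only the hot partition UNLOGGED: its writes skip the WAL entirely, and after a crash Postgres simply truncates it, which matches our "recent data can be restored" condition. As far as I know, Postgres does not support tablespaces on volatile storage like tmpfs, so UNLOGGED seems the safer route. A sketch, reusing the hypothetical names above:

    -- Hot partition as UNLOGGED: writes bypass the WAL entirely;
    -- the table is emptied automatically after a crash.
    CREATE UNLOGGED TABLE measurements_2024_06_02
        PARTITION OF measurements
        FOR VALUES FROM ('2024-06-02') TO ('2024-06-03');

    -- Once the partition cools down, make it durable in one step
    -- (this rewrites the table and WAL-logs it):
    ALTER TABLE measurements_2024_06_02 SET LOGGED;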
Are there any ideas, best practices, or similar for such conditions?