On Thu, 2005-07-21 at 07:43 +0000, vinita bansal wrote:
> Hi,
>
> My application is database intensive. I am using 4 processes since I
> have 4 processors on my box. There are times when all 4 processes write
> to the database at the same time, and times when all of them read at
> once. The database is definitely not read-only. Out of the entire
> database, there are a few tables which are accessed most of the time,
> and they are the ones which seem to be the bottleneck. I am trying to
> get as much performance improvement as possible by putting some of
> these tables in RAM, so that they don't have to be read from/written to
> the hard disk, as they will be directly available in RAM. Here's where
> Slony comes into the picture, since we'll have to maintain a copy of
> the database somewhere before running our application (everything in
> RAM will be lost if there's a power failure or anything else goes
> wrong).
>
> My concern is how good Slony is. How much time does it take to
> replicate the database? If the time taken to replicate is much more
> than the performance improvement we are getting by putting tables in
> memory, then there's no point in going for such a solution. Do I have
> an alternative?
>
> Regards,
> Vinita Bansal

You see, if those frequently accessed tables are mostly read-only,
there's no need at all to use RAMFS. They are almost certainly already
cached, either in the PostgreSQL buffers or in the underlying OS page
cache. If you don't have enough RAM for that, add more. Using RAMFS only
makes things worse: the OS page cache knows which data is frequently
accessed, usually much better than you do.

If there are frequent writes on those tables, RAMFS is still not the
answer. Have a look at the documentation and disable sync on writes:
you'll get a similar effect (writes stay in RAM and are synced to disk
in blocks, much more efficiently).
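The two knobs I'm referring to look roughly like this in postgresql.conf
(a sketch only, with illustrative values, assuming a PostgreSQL 8.x-era
server; check the runtime-configuration docs for your version):

```
# postgresql.conf -- illustrative values, not a recommendation

# Give PostgreSQL its own buffer cache for the hot tables; the OS
# page cache transparently handles everything that doesn't fit here.
shared_buffers = 20000     # in 8 kB pages, i.e. roughly 160 MB

# With fsync off, a commit returns before the WAL reaches the disk
# platter: writes accumulate in RAM and are flushed in batches.
# You lose crash safety -- exactly the trade-off discussed below.
fsync = off
```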
Of course you lose data safety this way in case of a power failure, but
it's still much better than RAMFS: at least most of the data is on disk.

As for Slony, or other solutions, consider this: the _only_ way to have
data safety is to return 'OK' to the application only after you're
_sure_ the data is on some disk platter. So, even with a replica server,
the data has to be transferred over the net, committed remotely, the
commit notification has to come back over the net, and only _then_ can
the database say 'OK' to the application. This is going to be quite
slow, possibly slower than synchronous writes to the local disks (notice
that the protocol is the same: send the data to the disk, wait for a
write-completed notification, say 'OK' to the application).

My advice is: tune your PostgreSQL the best you can, _with_ sync writes
enabled. Review and optimize your SQL. Do not use RAMFS. Upgrade your
hardware if that's not enough. Consider distributing the load across
different servers (you'll need a multi-master solution for that, search
the archives), that is, upgrade your hardware in number, not in size.

I hope it helps,

.TM.
-- 
  ____/  ____/   /   /   /
 /      /       /		Marco Colombo
___/  ___    /   /		Technical Manager
   /     /     /		    ESI s.r.l.
_____/ _____/ _/		 Colombo@xxxxxx