On Fri, Apr 2, 2010 at 3:15 PM, Christiaan Willemsen
<cwillemsen@xxxxxxxxxxxxx> wrote:
> About a year ago we set up a machine with sixteen 15k disk spindles on
> Solaris using ZFS. Now that Oracle has taken over Sun and is closing up
> Solaris, we want to move away (we are more familiar with Linux anyway).
>
> So the plan is to move to Linux and put the data on a SAN using iSCSI
> (two or four network interfaces). This, however, leaves us with 16 very
> nice disks doing nothing, which sounds like a waste. If we were staying
> on Solaris, ZFS would have a solution: use them as L2ARC. But no Linux
> filesystem offers that feature (ZFS on FUSE is not really an option).
>
> So I was thinking: why not build a big array from 14 of the disks
> (RAID 1, 10, or 5) and use it as a large, fast swap device? Latency
> will be lower than the SAN can provide, throughput will also be better,
> and it will relieve the SAN of a lot of read IOPS.
>
> So I could create a 1TB swap disk and add it to the OS next to the 64GB
> of RAM. Then I can set Postgres to use more than the RAM size, so it
> will start swapping. It would appear to Postgres that the complete
> database fits into memory. The question is: will this do any good? And
> if so: what will happen?

I suspect it will result in lousy performance, because neither PG nor
the OS will understand that some of that "memory" is actually disk. But
if you end up testing it, post the results back here for posterity...

...Robert
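
For anyone following along, a minimal sketch of the setup Christiaan
describes might look like the following on Linux. The device names
(/dev/sdb through /dev/sdo, /dev/md0), the swap priority, and the
shared_buffers figure are illustrative assumptions, not details from
his post:

  # Build a RAID 10 array from the 14 spare 15k spindles
  # (device names assumed; adjust to match the actual hardware)
  mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

  # Turn the array into swap and give it a higher priority than any
  # existing swap device, so the kernel pages to the fast disks first
  mkswap /dev/md0
  swapon -p 10 /dev/md0

  # Make the kernel willing to swap aggressively (range 0-100,
  # default 60)
  sysctl -w vm.swappiness=100

and in postgresql.conf, deliberately oversize the buffer cache past the
64GB of physical RAM so it spills into the swap array (the kernel's
shmmax/shmall limits would also have to be raised to allow this):

  shared_buffers = 512GB

Of the three RAID levels he mentions, RAID 10 would be the natural
choice here: swap I/O is mostly random, and parity RAID (RAID 5)
handles random writes poorly.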
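
And if anyone does run the experiment, the interesting numbers are easy
to capture while the benchmark runs; the standard procps/sysstat tools
are enough (the 5-second interval is arbitrary):

  vmstat 5               # si/so columns: pages swapped in/out per second
  sar -W 5               # pswpin/s and pswpout/s, same story from sysstat
  iostat -x /dev/md0 5   # utilization and wait times on the swap array

Sustained high si/so during queries would confirm the database is being
served from the swap array rather than from RAM.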