Re: 1 TB of memory

On 3/17/06, Rodrigo Madera <rodrigo.madera@xxxxxxxxx> wrote:
> I don't know about you databasers that crunch in some selects, updates
> and deletes, but my personal developer workstation is planned to be a
> 4x 300GB SATA300 with a dedicated RAID striping controller (no
> checksums, just speedup) and 4x AMD64 CPUs... not to mention 2GB for
> each processor... all this in a nice server motherboard...

no doubt, that will handle quite a lot of data.  in fact, most
databases (contrary to popular opinion) are cpu bound, not i/o bound.
However, at some point a different set of rules comes into play.  That
point is constantly changing due to the relentless march of hardware,
but I'd suggest that at around 1TB you can no longer count on things
running quickly just because o/s file caching bails you out.  Or, you
may have a single table + indexes that's 50 GB and takes 6 hours to
vacuum, sucking up all your i/o.
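
btw, if a long vacuum is eating all your i/o, cost-based vacuum delay
(new in 8.0) can throttle it.  a rough sketch; the numbers here are
purely illustrative, tune them for your own hardware:

  # postgresql.conf
  vacuum_cost_delay = 50       # ms to nap each time the cost limit is hit
  vacuum_cost_limit = 200      # accumulated i/o cost that triggers the nap

the vacuum takes longer overall but stops saturating the disks while
it runs, which matters when that 50 GB table shares spindles with
production queries.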

another useful aspect of SSD is that the relative value of system
memory is much lower, so you can reduce swappiness and tune postgres
to rely more on the filesystem, giving more of your memory to work_mem
and such.
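
for example, something along these lines on linux (8.1-era syntax; the
numbers assume roughly 8GB of ram and are illustrative only, not a
recommendation):

  # /etc/sysctl.conf
  vm.swappiness = 10              # prefer dropping cache to swapping backends out

  # postgresql.conf
  shared_buffers = 50000          # 8kb pages, ~400MB: keep it modest
  effective_cache_size = 700000   # 8kb pages, ~5.5GB: tell the planner the o/s cache is big
  work_mem = 262144               # kb, 256MB per sort/hash: fine for a few sessions, not hundreds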

merlin

