Re: Large tables (was: RAID 0 not as fast as expected)

Mike,

> On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote:
> >If you have a table with 100 million records, each of which is
> >200 bytes long, that gives you roughly 20 gig of data (assuming it
> >was all written neatly and hasn't been updated much).
> 
I'll keep that in mind (minimizing updates during loads). My plan is
that updates will actually be implemented as inserts into a
summary/history table followed by deletion of the old records, roughly
like the sketch below. The OLTP part of this will be limited to a
particular set of tables that I anticipate will not be nearly as large.
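
Something along these lines; table and column names here are just
placeholders, not my real schema:

  -- Rough sketch of the "update = insert into history, then delete" idea.
  -- scan_history / scans and their columns are placeholder names.
  BEGIN;
  INSERT INTO scan_history (scan_id, payload, archived_at)
      SELECT scan_id, payload, now()
      FROM   scans
      WHERE  scan_id = 12345;
  DELETE FROM scans WHERE scan_id = 12345;
  COMMIT;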

> If you're in that range it doesn't even count as big or
> challenging--you can keep it memory resident for not all that much
> money.
> 
> Mike Stone
> 
I'm aware of that; however, *each* scan could be 100m records, and we
need to keep a minimum of 12, and possibly 50 or more. So sure, that
works if I only had 100m records total, but not at 500m, or 1b...
According to Alex's calculations, that'd be 100G for 500m records (just
that one table, not including indexes).
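
As a sanity check once the data is loaded, the actual on-disk footprint
can be read directly instead of estimated; something like this, assuming
8.1 or later for the size functions ('scans' is again a placeholder
table name):

  -- 500m rows * 200 bytes is roughly 100 GB before indexes; this reports
  -- the real size of the table plus its indexes and TOAST data.
  SELECT pg_size_pretty(pg_total_relation_size('scans'));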

