
coping with failing disks


 



Greetings,

we are setting up a new database server with a fair number of disks for our in-house Postgresql-based "data warehouse".

We are considering using separate sets of disks for the indices (an index tablespace on SSDs in this case) and a tablespace for tables which are used as temporary tables (but for some reasons are regular tables as far as Postgresql is concerned). The storage for those should be as fast as possible, possibly sacrificing reliability for speed.
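In other words, a layout roughly like the following (paths and names here are just placeholders to illustrate what we have in mind, not our actual setup):

```sql
-- Tablespaces on the fast, non-redundant volumes:
CREATE TABLESPACE fast_index LOCATION '/mnt/ssd_raid0/pg_index';
CREATE TABLESPACE fast_temp  LOCATION '/mnt/fast_raid0/pg_temp';

-- Indices go to the SSD tablespace:
CREATE INDEX orders_customer_idx ON orders (customer_id)
    TABLESPACE fast_index;

-- "Temporary" (but regular) tables go to the fast tablespace:
CREATE TABLE staging_orders (LIKE orders) TABLESPACE fast_temp;
```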

If we set up the SSDs for the indices as a non-redundant RAID0, it would be quite likely that this volume becomes faulty at some point. Theoretically, this shouldn't hurt us too much, as we would just have to rebuild the indices from the existing, unharmed data. But is it that simple in practice? Would the consistency of the database be affected if all indices are suddenly gone?
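What we are hoping for is that, after replacing the disks and recreating the tablespace directory, something like the following would be enough (table and database names are placeholders, and we assume here that the server will still start with the index tablespace missing, which is exactly what we are unsure about):

```sql
-- Rebuild the lost indices from the intact table data:
REINDEX TABLE orders;

-- Or, more coarsely, rebuild everything in one go:
REINDEX DATABASE warehouse;
```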

The same goes for the temporary tables. If the storage for those becomes unavailable, only the queries currently running against them should be affected. But how can we tell Postgresql to simply forget about those tables and consider the remaining database consistent?
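Again as a sketch of what we would hope works after losing that volume (placeholder names, and assuming Postgresql lets us drop objects whose underlying files are gone):

```sql
-- Discard the staging tables whose storage was lost:
DROP TABLE IF EXISTS staging_orders;

-- Once the tablespace is empty, remove it as well:
DROP TABLESPACE fast_temp;
```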

We can afford some down time, obviously.

 thanks, Joachim


--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

