Search Postgresql Archives

Getting "could not read block" error when creating an index on a function.

Hello,

I’m stuck on a problem and was hoping to get some help. I’m trying to create an index on the results of a function that touches two tables, and I get the following error:

CREATE INDEX my_idx ON mytable (first_time(id));
ERROR:  could not read block 0 in file "base/16386/42868": read only 0 of 8192 bytes
CONTEXT:  SQL function "first_time" during startup

Every time I run this, the last number in the file path increments by one, e.g.:

ERROR:  could not read block 0 in file "base/16386/42869": read only 0 of 8192 bytes
ERROR:  could not read block 0 in file "base/16386/42870": read only 0 of 8192 bytes
ERROR:  could not read block 0 in file "base/16386/42871": read only 0 of 8192 bytes

The database sits on two tablespaces (pg_default and ‘data2’). When I try to use the second, I get:

CREATE INDEX my_idx ON mytable (first_time(id)) TABLESPACE data2;
ERROR:  could not read block 0 in file "pg_tblspc/17007/PG_13_202007201/16386/42870": read only 0 of 8192 bytes
CONTEXT:  SQL function "first_time" during startup

with the last number similarly incrementing upon repeated attempts.

Relevant details:

* PostgreSQL version 13.1 running on Ubuntu 20.04.1 on an AWS instance using 2 x 8TB EBS storage.
* The database is ~15TB in size.
* I am not worried about data loss; the database can be considered read-only and I have all of the files needed to recreate any table.
* I am hoping not to recreate the whole database from scratch, since doing so and creating the required indices would take more than a week.
* I used these settings during the import to speed it up, since I was not worried about data loss (all were turned back on after the import):

autovacuum = off
synchronous_commit=off
fsync = off
full_page_writes = off

* I will not do the above again. :)
* The postmaster crashed at least twice during the import due to running out of disk space.
* Creating any number of new indices on bare columns is no problem.
* I DROPped and recreated the functions with no change.
* This statement returns no results (but maybe I am looking at the wrong thing):

select n.nspname AS schema, c.relname AS relation from pg_class c inner join pg_namespace n on (c.relnamespace = n.oid) where c.relfilenode = '16386';
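(If I'm reading the storage docs correctly, in a path like "base/16386/42868" the 16386 is the database OID and the last number is the relfilenode, so a lookup on relfilenode = '16386' would come up empty. Something like this, untested, might be the right lookup instead:)

```sql
-- In "base/16386/42868", 16386 is the database OID and 42868 is the
-- relfilenode; map the filenode back to a relation (0 = pg_default
-- tablespace, or pass the tablespace OID from the pg_tblspc path):
SELECT pg_filenode_relation(0, 42868);

-- Going the other way: the on-disk file behind a known relation.
SELECT pg_relation_filepath('mytable');
```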

From reading about this error (missing block files), it appears I have some database corruption, which is fine, as I can easily delete and recreate anything problematic. I’ve deleted the indices related to the function and recreated them, but the same error remains. Accessing the related tables seems OK and I don’t get any errors, but with that much data I can’t guarantee everything is intact.
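(For completeness, it looks like the amcheck contrib extension can verify an existing b-tree index against its table, which might help confirm whether the underlying data is sound. Untested sketch; the index name is just a placeholder:)

```sql
-- amcheck ships in contrib; bt_index_check is available in PG 13.
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Verify one of the plain-column indexes that did build successfully;
-- heapallindexed => true also cross-checks the table's heap tuples.
SELECT bt_index_check(index => 'mytable_col_idx'::regclass,
                      heapallindexed => true);
```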

Any help would be appreciated!

Cheers,
Demitri






