Re: DELETE ERROR: tuple concurrently updated

Ok, I did it.

1. Drop primary key CASCADE (foreign keys too).
2. Reindex table.
3. Delete duplicate rows.
4. Create primary key.
5. Create foreign keys.
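For reference, the steps above might be sketched in SQL roughly like this (the ctid-based duplicate delete is a standard Postgres idiom, not from the thread, and the foreign-key column name on ref_search_document_vid is an assumption; adapt to your schema before running):

```sql
BEGIN;
-- 1-2. drop the primary key (CASCADE drops dependent foreign keys), then reindex
ALTER TABLE document DROP CONSTRAINT pkey_document CASCADE;
REINDEX TABLE document;
-- 3. delete duplicate rows, keeping one physical row per numdoc
DELETE FROM document a
  USING document b
  WHERE a.numdoc = b.numdoc
    AND a.ctid < b.ctid;
-- 4. recreate the primary key
ALTER TABLE document ADD CONSTRAINT pkey_document PRIMARY KEY (numdoc);
-- 5. recreate each foreign key dropped by the CASCADE, e.g. (column name assumed):
ALTER TABLE ref_search_document_vid
  ADD CONSTRAINT fk_search_document_vid_numdoc
  FOREIGN KEY (numdoc) REFERENCES document (numdoc);
COMMIT;
```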

Thanks for the help.

On 29 December 2009 at 15:24, Михаил Кечинов <kechinoff@xxxxxxxxx> wrote:
Good. Now I have error:

docs=# REINDEX TABLE document;
ERROR:  could not create unique index "pkey_document"
DETAIL:  Table contains duplicated values.

So I have a primary key, but also some rows with duplicate "numdoc" values, even though "numdoc" is the primary key and must be unique.
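A standard way to see which values are duplicated (a plain GROUP BY/HAVING query, not from the thread) is:

```sql
SELECT numdoc, count(*)
FROM document
GROUP BY numdoc
HAVING count(*) > 1;
```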

I can't drop pkey because there are some tables with foreign keys:

docs=# alter table document drop constraint pkey_document;
NOTICE:  constraint fk_search_document_vid_numdoc on table ref_search_document_vid depends on index pkey_document
ERROR:  cannot drop constraint pkey_document on table document because other objects depend on it
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

So I can drop the pkey only together with the fkeys, which is bad. Is there any way to clean up the pkey? And when I try to delete one of the duplicate documents, I get that error: tuple concurrently updated
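One way to target a single physical copy of a duplicate is by its ctid (standard Postgres row addressing; the ctid value below is purely illustrative, and in this thread the delete itself keeps failing until the underlying storage problem is repaired):

```sql
-- list the physical rows sharing the duplicated key
SELECT ctid, numdoc FROM document WHERE numdoc = 901721617;
-- delete one specific physical row (ctid value is illustrative)
DELETE FROM document WHERE ctid = '(0,1)';
```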

...

2009/12/29 Greg Stark <gsstark@xxxxxxx>
On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов <kechinoff@xxxxxxxxx> wrote:
> When I try to delete one row from database (for example):
> delete from document where numdoc = 901721617
> I have this error:
> ERROR: tuple concurrently updated
> SQL state: XX000
> I know that no one else is deleting this row at the same time.
> What does this error mean?

So this error can only come from a normal SQL-level delete if there is
associated TOAST data which is being deleted as well. In which case
that TOAST data must be already marked deleted -- which shouldn't be
possible.

It sounds like you have a database where some writes from earlier
transactions reached the database and others didn't. That can happen
if you take an inconsistent backup (without using pg_start_backup())
or if the drive you're using confirmed writes before crashing but
didn't actually write them.

You might be able to get somewhat further by reindexing the TOAST
table for this table. To do so, run "REINDEX TABLE document". But note
that you could run into further errors from the missing TOAST data.
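If it helps to see which TOAST table backs the main table before reindexing, a standard catalog lookup (not from the thread) is:

```sql
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'document' AND relkind = 'r';
```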

--
greg



--
Михаил Кечинов
http://www.mkechinov.ru



