Re: Problem retrieving large records (bytea) data from a table


 



You could take a backup of this table, then search for the oversized
documents with UltraEdit and remove them.
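
Rather than searching a dump by hand, you could also locate the oversized
rows server-side with octet_length(); a sketch, assuming the doc_table /
doc_data names from the quoted message, with a 100 MB cutoff chosen only
for illustration:

```sql
-- List the largest documents first, without fetching the bytea itself.
SELECT id, file_n, octet_length(doc_data) AS doc_bytes
FROM doc_table
ORDER BY doc_bytes DESC
LIMIT 10;

-- Or flag only the rows above an arbitrary cutoff (100 MB here).
SELECT id
FROM doc_table
WHERE octet_length(doc_data) > 100 * 1024 * 1024;
```

Either query returns only ids and lengths, so it should not trigger the
same hang as a select * that pulls the full payloads.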

2011/7/5, jtkells@xxxxxxxxxxx <jtkells@xxxxxxxxxxx>:
> I am getting a hang every time I try to retrieve large (bytea)
> records from a table.
> The OS is Solaris snv_134 (i86pc) with 4 GB of memory, running
> PostgreSQL 8.4.3 with a stock postgresql.conf (nothing has been
> changed).
> I have the following table, doc_table:
>
>     Column    |              Type              |                              Modifiers                               | Storage
> --------------+--------------------------------+----------------------------------------------------------------------+----------
>  id           | numeric                        | not null                                                             | main
>  file_n       | character varying(4000)        |                                                                      | extended
>  create_date  | timestamp(6) without time zone | not null default (clock_timestamp())::timestamp(0) without time zone | plain
>  desc         | character varying(4000)        |                                                                      | extended
>  doc_cc       | character varying(120)         | not null                                                             | extended
>  by           | numeric                        | not null                                                             | main
>  doc_data     | bytea                          |                                                                      | extended
>  mime_type_id | character varying(16)          | not null                                                             | extended
>  doc_src      | text                           |                                                                      | extended
>  doc_stat     | character varying(512)         | not null default 'ACTIVE'::character varying                         | extended
> Indexes:
>    "documents_pk" PRIMARY KEY, btree (document_id)
>
>
> A while ago some developers inserted several records whose documents
> (stored in doc_data) were around 400-450 MB each. Now a select *
> (all) from this table hangs and the system becomes unresponsive.
> Prior to these inserts, a select * (all, no where clause) worked.
> I'm also told a select * from doc_table where id = xxx still works.
> I haven't seen any error messages in the postgresql log files.
> So I'm not sure how to find these bad records or why I am getting a
> hang. Since this postgresql is running with the default config files,
> could I be running out of a resource? If so, I'm not sure which
> resources to increase, or by how much, to fix this problem, since I
> have very little memory on this system. Does anyone have any ideas
> why I am getting a hang? Thanks
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>
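
Until the oversized rows are trimmed, one workaround sketch for the hang
is to avoid materializing the full bytea on the client: select the other
columns explicitly, or pull doc_data in slices with substring(). Column
names are taken from the quoted table; the id value and 10 MB slice size
are hypothetical:

```sql
-- Skip the payload entirely: name every column except doc_data.
SELECT id, file_n, create_date, doc_cc, mime_type_id, doc_stat
FROM doc_table;

-- Or fetch one document in 10 MB slices instead of a single 400+ MB
-- value (bytea offsets are 1-based; repeat with an increasing offset
-- until the result comes back empty).
SELECT substring(doc_data FROM 1 FOR 10 * 1024 * 1024)
FROM doc_table
WHERE id = 123;  -- hypothetical id
```

The first query keeps result sets small for routine browsing; the second
bounds how much of any one document is in memory at a time.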


-- 
------------
pasman


