RE: Ora2pg migration

You might find this section of the ora2pg documentation relevant. Note that the size of a bytea object is limited to just under 1GB.

Importing BLOB as large objects

By default Ora2Pg imports Oracle BLOBs as bytea, and the destination column is created using the bytea data type. If you want to use large objects instead of bytea, just add the --blob_to_lo option to the ora2pg command. It will create the destination column with the data type Oid and will save the BLOB as a large object using the lo_from_bytea() function. The Oid returned by the call to lo_from_bytea() is inserted in the destination column instead of a bytea. Because of the use of this function, the option can only be used with the actions SHOW_COLUMN, TABLE and INSERT. The COPY action is not allowed.
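For illustration, an export using this option might look like the following (a minimal sketch; the configuration file, output file and database names are assumptions):

    # Export the data as INSERT statements, storing BLOBs as large
    # objects (Oid columns) instead of bytea.
    ora2pg -c ora2pg.conf -t INSERT --blob_to_lo -o data.sql

    # Load the result into PostgreSQL.
    psql -d mydb -f data.sql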

If you want to use COPY, or have huge BLOBs (> 1GB) that cannot be imported using lo_from_bytea(), you can add the --lo_import option to the ora2pg command. This allows importing the data in two passes, as sketched after the two steps below.

1) Export the data using COPY or INSERT. This sets the Oid destination column for the BLOB to the value 0 and saves the BLOB value into a dedicated file. It also creates a shell script that imports the BLOB files into the database using the psql command \lo_import and updates the table's Oid column to the returned large object Oid. The script is named lo_import-TABLENAME.sh.

2) Execute all the lo_import-TABLENAME.sh scripts after setting the environment variable PGDATABASE, and optionally PGHOST, PGPORT, PGUSER, etc., if they do not correspond to the default libpq values.
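Put together, the two passes might look like this (a minimal sketch; the configuration file, database name and table name MYTABLE are assumptions):

    # Pass 1: export with COPY; Oid columns are written as 0 and each
    # BLOB is saved to a file, along with a lo_import-MYTABLE.sh script.
    ora2pg -c ora2pg.conf -t COPY --lo_import -o data.sql
    psql -d mydb -f data.sql

    # Pass 2: import the BLOB files as large objects and update the
    # Oid column via the generated script.
    export PGDATABASE=mydb
    sh lo_import-MYTABLE.sh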

You might also want to run a VACUUM FULL on the table manually to remove the bloat created by the table update.
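For instance (the database and table names are assumptions):

    psql -d mydb -c "VACUUM FULL mytable;"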

Limitation: the table must have a primary key; it is used in the WHERE clause to update the Oid column after the large object import. Importing BLOBs using this second method (--lo_import) is very slow, so it should be reserved for rows where the BLOB is larger than 1GB; for all other rows, use the --blob_to_lo option. To filter the rows you can use the WHERE configuration directive in ora2pg.conf, as in the sketch below.
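For example, to restrict an --lo_import run to the oversized rows, something like this in ora2pg.conf might work (the table name MYTABLE and column name DOC are assumptions; dbms_lob.getlength() is Oracle's function returning the LOB size in bytes):

    # Only export rows whose BLOB exceeds 1 GB in this pass; export
    # the remaining rows in a separate run using --blob_to_lo.
    WHERE MYTABLE[dbms_lob.getlength(DOC) > 1073741824]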

Also note:
LONGREADLEN

Use this directive to set the database handle's 'LongReadLen' attribute to a value larger than the expected size of the LOBs. The default is 1MB, which may not be enough to extract BLOBs or CLOBs. If the size of the LOB exceeds 'LongReadLen', DBD::Oracle will return an 'ORA-24345: A Truncation' error. Default: 1023*1024 bytes.
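In ora2pg.conf that could look like this (the 10 MB value is an assumption; size it above your largest expected LOB):

    # Let DBD::Oracle fetch LOBs up to ~10 MB without truncation.
    LONGREADLEN 10485760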

Take a look at this page to learn more: http://search.cpan.org/~pythian/DBD-Oracle-1.22/Oracle.pm#Data_Interface_for_Persistent_LOBs

Important note: if you increase the value of this directive, be aware that DATA_LIMIT will probably need to be reduced. Even if you only have 1MB BLOBs, trying to read 10000 of them (the default DATA_LIMIT) all at once will require 10GB of memory. You may extract the data from those tables separately and set DATA_LIMIT to 500 or lower, otherwise you may run out of memory.
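As a rough sketch of that arithmetic in ora2pg.conf (the values are assumptions; pick them so LONGREADLEN * DATA_LIMIT fits in available memory):

    # With LONGREADLEN at ~10 MB, the default 10000 rows per batch
    # would need ~100 GB; 500 rows keeps the peak around 5 GB.
    DATA_LIMIT 500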

 

Good luck!

 

Clay Jackson

Database Solutions Sales Engineer

clay.jackson@xxxxxxxxx

office  949-754-1203  mobile 425-802-9603

 

From: Narendran .j <jnarendran3@xxxxxxxxx>
Sent: Monday, March 25, 2024 6:47 AM
To: Laurenz Albe <laurenz.albe@xxxxxxxxxxx>
Cc: pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Re: Ora2pg migration

 


 

Hello,

It's almost 8GB. Could you explain to me how I can transfer it as a binary file? I'm new to this.

 

On Fri, 22 Mar 2024, 3:33 pm Laurenz Albe, <laurenz.albe@xxxxxxxxxxx> wrote:

On Fri, 2024-03-22 at 11:08 +0530, Narendran .j wrote:
> I have exported one table which has a BLOB data type with the ora2pg tool. While importing the INSERT file into Postgres, I'm getting this error:
> psql:ERROR: out of memory 
> DETAIL: Cannot enlarge string buffer containing 1073741808 bytes by 8191 more bytes.
>
> Please give me any suggestions. I adjusted shared_buffers, but it didn't work.

1GB is a hard limit for memory allocations.

How big are those BLOBs?  The theoretical limit in PostgreSQL is 1GB, but in
practice things start getting difficult around 500MB, unless you use binary
transfer.

Yours,
Laurenz Albe

