You might find this section of the ora2pg documentation relevant. Note that the size of a bytea value is limited to less than 1 GB:
Importing BLOB as large objects

By default Ora2Pg imports Oracle BLOB as bytea: the destination column is created using the bytea data type. If you want to use large objects instead of bytea, just add the --blob_to_lo option to the ora2pg command. It will create the destination column with data type oid and will save the BLOB as a large object using the lo_from_bytea() function. The oid returned by the call to lo_from_bytea() is inserted in the destination column instead of a bytea. Because of the use of this function, the option can only be used with the actions SHOW_COLUMN, TABLE and INSERT; the COPY action is not allowed.
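As a minimal sketch, a --blob_to_lo export could look like this (MYTABLE and the output file names are placeholders, and ora2pg.conf is assumed to already point at your Oracle source):

    # create the table with an oid column instead of bytea
    ora2pg -c ora2pg.conf -t TABLE --blob_to_lo -a MYTABLE -o mytable.sql

    # export the rows, storing each BLOB via lo_from_bytea()
    ora2pg -c ora2pg.conf -t INSERT --blob_to_lo -a MYTABLE -o mytable_data.sql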
If you want to use COPY, or have huge BLOBs (> 1 GB) that cannot be imported using lo_from_bytea(), you can add the --lo_import option to the ora2pg command. This allows importing the data in two passes:

1) Exporting the data using COPY or INSERT will set the oid destination column for the BLOB to the value 0 and save the BLOB value into a dedicated file. It will also create a shell script that imports the BLOB files into the database using the psql command \lo_import and updates the table's oid column to the returned large object oid. The script is named lo_import-TABLENAME.sh.

2) Execute all lo_import-TABLENAME.sh scripts after setting the environment variables PGDATABASE and, optionally, PGHOST, PGPORT, PGUSER, etc. if they do not correspond to the libpq defaults. You might also run a VACUUM FULL on the table manually to remove the bloat created by the table updates.

Limitation: the table must have a primary key; it is used in the WHERE clause to update the oid column after the large object import.
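As a rough sketch, the second pass might then look like this (database, host and table names are illustrative, not from the documentation):

    export PGDATABASE=mydb
    export PGHOST=db.example.com
    export PGUSER=postgres
    sh lo_import-MYTABLE.sh          # \lo_import each BLOB file, update the oid column
    psql -c "VACUUM FULL mytable"    # optional: remove bloat left by the updates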
Importing BLOB data with this second method (--lo_import) is very slow, so it should be reserved for rows where the BLOB is larger than 1 GB; for all other rows, use the --blob_to_lo option. To filter the rows, you can use the WHERE configuration directive in ora2pg.conf, as sketched below.
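For instance, assuming ora2pg's per-table WHERE syntax (the table and column names here are hypothetical), an export could be restricted to the oversized rows along these lines:

    # only export rows whose BLOB is larger than 1 GB
    WHERE  BIGTABLE[DBMS_LOB.GETLENGTH(DOC_BLOB) > 1073741824]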
Also note:
Use this directive (LONGREADLEN in ora2pg.conf) to set the database handle's 'LongReadLen' attribute to a value larger than the expected size of the LOBs. The default is 1 MB, which may not be enough to extract BLOBs or CLOBs. If the size of the LOB exceeds 'LongReadLen', DBD::Oracle will return an 'ORA-24345: A Truncation' error. Default: 1023*1024 bytes.
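As an illustration, raising the limit for LOBs of up to 100 MB might look like this in ora2pg.conf (the value is an assumption; size it to your largest LOB):

    # allow DBD::Oracle to fetch LOBs up to 100 MB without ORA-24345
    LONGREADLEN  104857600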
Take a look at this page to learn more:
http://search.cpan.org/~pythian/DBD-Oracle-1.22/Oracle.pm#Data_Interface_for_Persistent_LOBs
Important note: if you increase the value of this directive, take care that DATA_LIMIT will probably need to be reduced. Even if you only have 1 MB BLOBs, trying to read 10000 of them (the default DATA_LIMIT) all at once will require 10 GB of memory. You may extract data from those tables separately and set DATA_LIMIT to 500 or lower, otherwise you may run out of memory.
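A sketch of that trade-off in ora2pg.conf (the value is illustrative):

    # smaller batches keep LONGREADLEN x DATA_LIMIT worth of memory bounded
    DATA_LIMIT  500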
Good luck!

Clay Jackson
Database Solutions Sales Engineer
office 949-754-1203
mobile 425-802-9603

From: Narendran .j <jnarendran3@xxxxxxxxx>
Hello,

It's almost 8 GB. Could you explain how I can transfer it as a binary file? I'm new to this.

On Fri, 22 Mar 2024, 3:33 pm Laurenz Albe, <laurenz.albe@xxxxxxxxxxx> wrote: