Hi Donald,
Yes, the database shuts down cleanly without any errors before the backup starts, so no file is in use during the backup.
Thanks & Regards Girish
O'Neill, Donald (US - Deerfield) wrote:
There is a good possibility. I assume that you're shutting down Oracle
before you're performing the backup?
-----Original Message-----
From: redhat-list-bounces@xxxxxxxxxx [mailto:redhat-list-bounces@xxxxxxxxxx] On Behalf Of Girish N
Sent: Monday, December 20, 2004 8:18 AM
To: General Red Hat Linux discussion list
Subject: Re: problem with extraction of .tgz file on Redhat AS 3.0
Hi Donald,
Thanks for the reply. Does this mean that the .tgz will get corrupted if the
datafile is in use?
Thx & Rgds Girish
O'Neill, Donald (US - Deerfield) wrote:
Some Oracle process could still have some open files. I would do a
'fuser -am' on the partition just to make sure there are no open files
before tarring. You could also do a 'tar -cjvf', which uses bzip2
instead of gzip.
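The check described above can be sketched roughly as follows. This is a runnable illustration, not the poster's actual script: the datafile is a throwaway temp file, and on a real system you would point `fuser -am` at the datafile mount point itself (for example the Oracle data partition) before tarring.

```shell
#!/bin/sh
# Sketch: verify no process has the datafiles open, then archive with
# bzip2 (-j) instead of gzip (-z). Paths are throwaway placeholders.
set -e
DATADIR=$(mktemp -d)
echo "dummy datafile" > "$DATADIR/system01.dbf"
BACKUP="$DATADIR/backup.tar.bz2"

# fuser exits non-zero when no process has the named files open, so we
# only create the archive when the datafiles are quiet. On a live server
# this would be `fuser -am <mountpoint>` as suggested above.
if fuser "$DATADIR"/*.dbf 2>/dev/null; then
    echo "datafiles still open, skipping backup" >&2
else
    # -j selects bzip2 compression rather than gzip's -z
    tar -cjf "$BACKUP" -C "$DATADIR" system01.dbf
fi

tar -tjf "$BACKUP"   # list the archive contents as a sanity check
rm -rf "$DATADIR"
```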
-----Original Message-----
From: redhat-list-bounces@xxxxxxxxxx [mailto:redhat-list-bounces@xxxxxxxxxx] On Behalf Of Girish N
Sent: Monday, December 20, 2004 1:56 AM
To: General Red Hat Linux discussion list
Subject: Re: problem with extraction of .tgz file on Redhat AS 3.0
Hi Linus,
Thanks again for the reply. As said earlier, I will reschedule the .tgz dump to a local mount point and will check the same.
Thanks & Regards Girish
C. Linus Hicks wrote:
On Mon, 2004-12-20 at 11:07 +0530, Girish N wrote:
Hi Linus,
Thanks for the reply,
1. The datafile in question is only 1 GB.
2. This is a low-end server with 2 GB of memory, and the backup is scheduled
to run at 4 AM when there is no memory resource crunch.
The corruption seems to be very inconsistent: one day the .tgz is fine,
while the next day the .tgz file gets corrupted. I am planning to
reschedule the .tgz backup to one of the local mount points instead of
the SAN hard disk and then check the same.
You have commented "Not with the symptoms you have"; does that mean
that this may be one of the reasons for file corruption?
Having an inconsistent datafile will not cause the kind of corruption
you are getting in the tgz file. If you back up (by whatever means) a
datafile that is in an inconsistent state, then the result of restoring
that file will be a datafile in an inconsistent state, not a problem
with the restore. The reason tar complained was because of the gzip
error. When gzip took the error, it was unable to continue ungzipping
the file and sent EOF to tar.
This means the error will be corruption either during gzip or while
writing to disk. This suggests a hardware problem, perhaps in memory, or
with writing to the SAN. Trying a local disk rather than the SAN is a
good idea. You might also try running memtest on this machine. Having no
memory resource crunch at the time of the backup doesn't really mean
much, but I would expect other files to show the same symptom if memory
is the problem.
http://www.memtest86.com/
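Not part of the original thread, but a common way to catch this kind of corruption on the night it happens is to verify the archive immediately after the backup job writes it: `gzip -t` checks the compressed stream's CRC, and `tar -tzf` walks the tar records. A minimal sketch, using a small throwaway .tgz so it runs as-is (substitute the path your 4 AM job actually writes):

```shell
#!/bin/sh
# Sketch of a post-backup integrity check on a placeholder archive.
set -e
WORK=$(mktemp -d)
echo "data" > "$WORK/file"
tar -czf "$WORK/backup.tgz" -C "$WORK" file

# gzip -t decompresses to nowhere and fails on a CRC error; tar -tzf
# additionally confirms the tar structure is intact. Either failing
# right after the backup flags the bad archive before a restore is
# ever attempted.
if gzip -t "$WORK/backup.tgz" && tar -tzf "$WORK/backup.tgz" > /dev/null; then
    echo "archive OK"
else
    echo "archive corrupted" >&2
fi
rm -rf "$WORK"
```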
-- redhat-list mailing list unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe https://www.redhat.com/mailman/listinfo/redhat-list