Corrupt BKP Oracle
If you use RMAN to perform backups, the contents of this paper don’t apply to you. Being
an Oracle utility, RMAN knows the structure of Oracle blocks, and can therefore detect
corruption as it is backing things up. If any corruption is encountered, it will throw an
error stack the size of the Empire State Building, and proceed no further.
If you are using Operating System techniques to take your backups, though, then there is a
real risk that the resulting copies of Data Files and so forth might get corrupted as part of
the backup process –or that the Data Files themselves already contain corruption, which
the O/S just happily copies across, oblivious to its presence.
Fortunately, Oracle supplies a handy little utility called “dbverify” which can be used to
check the backed-up copies of the Data Files for corruption once the backup has finished.
This utility is run from the command line, by typing the command dbv. You simply tell it
which files to check, and supply some other run-time parameters if needed. You might
type something like this, therefore:
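A representative invocation (the file path and parameter values here are purely illustrative) would be:

```
dbv file=/u02/backup/users01.dbf blocksize=8192 logfile=verify.log
```
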
Note that you must tell dbverify, via the blocksize parameter, what size Oracle blocks to
expect when it scans the Data File copy. If you leave that parameter out, dbverify will
expect to find 2K blocks, and when it encounters anything else, will simply abort with an
error. Curiously, the error message will tell you precisely what block size it did encounter
–and you might well wonder why, if it can identify the correct block size for the sake of
error reporting, it doesn’t simply go on to use the discovered block size for its continued
work! One of life’s little mysteries, I guess –and we just have to live with it.
You’ll also notice a ‘logfile’ parameter in that command. It allows dbverify’s findings to be
written to a text file for later reading. If you leave it out, the results of the check are
simply displayed on screen, which is probably not exactly what you want.
There are some other parameters that can be supplied, too. For example, you can check
just part of a Data File copy by specifying its start and end block addresses. If you need
the complete list of available parameters, just type:
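The full list (its exact contents vary a little from version to version) is displayed with:

```
dbv help=y
```
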
When dbverify has completed its work, you’ll want to check the logfile output. That will
look a bit like this:
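The precise layout varies slightly between versions, but the output looks broadly like this (the figures below are illustrative, apart from the 83,200 total discussed next):

```
DBVERIFY - Verification complete

Total Pages Examined         : 83200
Total Pages Processed (Data) : 41280
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 20320
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 1600
Total Pages Empty            : 20000
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
```
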
Here, the word “page” means an Oracle block. So this report indicates that 83,200 blocks
were looked at –and the key bit is that none of them are reported as “failing” or “marked
corrupt”. That means this backup file can be considered clean, and there’d be no
problems using it during a recovery.
In case you’re wondering, the “Total Pages Influx” line is there because dbverify can be
used to check for corruption in online Data Files, not just backup copies of them. When
used for that purpose, it’s possible that DBWR is writing to a block just as dbverify wishes
to check it –at which point, dbverify gives up, and just declares that the block was being
changed as it was being investigated.
Now, the Oracle documentation states that dbverify can only be applied to ‘cache-managed
files’ –i.e., ones which are read into the Buffer Cache. That theoretically rules out running
it against the Control Files to see whether they are internally corrupt. In practice,
however, in Oracle versions 8.0, 8i and 9i, you can run the tool against Control Files. I
have not run it against an Oracle 7 database, but you might give it a try and see if
anything useful results. As proof that it works in the 8s and 9s, I offer the following log
file output:
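A clean run against a Control File copy produces output of broadly this shape (the file name and figures here are invented for illustration):

```
DBVERIFY - Verification starting : FILE = control01.bkp

DBVERIFY - Verification complete

Total Pages Examined         : 250
Total Pages Failing   (Data) : 0
Total Pages Failing   (Index): 0
Total Pages Empty            : 0
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0
```
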
Clearly, therefore, dbverify is a perfectly useful tool for ensuring that your Control File
backups are free of corruption… at least, demonstrably so in versions 8.0.5, 8.1.6, 8.1.7
and 9.0.1.1. For anything earlier than these versions, however, you’re really on your own.
Try it and see. Don’t forget, however, that the SQL command

ALTER DATABASE BACKUP CONTROLFILE TO 'filename';

…is guaranteed to result in a binary image of the Control File which is consistent and
corruption-free (though it can obviously only be issued when the database is at least in the
MOUNT stage).
So that’s Data Files and Control Files dealt with: how about the Redo Logs?
Online Redo Logs may be copied if you are taking cold backups, and Archived Redo Logs,
where they exist, will also be copied whatever type of backup you are doing. How can
either of these types of Logs be checked for corruption?
Copyright © Howard Rogers 2001 24/10/2001 Page 3 of 4
Checking backups for corruption Backup and Recovery Tips
Well, dbverify genuinely cannot be used to verify redo log files, whether of the Online or
Archived variety (if you try it, you’ll get a report indicating that the entire file is corrupt,
which simply means dbverify doesn’t understand their contents). The only really viable
tool is therefore Log Miner –and that was only introduced in Oracle 8.1.x (though it can be
used from there to examine 8.0.x-version logs).
If Log Miner encounters corruption during its analysis session (begun with the EXECUTE
DBMS_LOGMNR.START_LOGMNR(DICTFILENAME=>'HJRDICT.ORA') command), then it simply
bombs out with an error –at which point you know you have a problem.
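A minimal Log Miner session to check a single archived log might therefore look like this in SQL*Plus (the log and dictionary file names are only examples):

```
-- Register the log to be analysed
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME=>'/u03/arch/arch_0000123.arc', -
  OPTIONS=>DBMS_LOGMNR.NEW);

-- Start the analysis; a corrupt log makes this (or the query
-- below) fail with an error
EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME=>'HJRDICT.ORA');

-- Force a full read of the log's contents
SELECT COUNT(*) FROM V$LOGMNR_CONTENTS;

-- Tidy up
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```
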
If you need to check Oracle 7 logs, however, Log Miner can’t help you, and I know of no
other Oracle-supplied tool that can (there are third-party tools that can do the job, but
they cost an arm and a leg –yet another reason for upgrading to a more recent version).
I can offer, therefore, no better suggestion than that you perform regular practice
recoveries (which you should be doing anyway as a matter of ordinary precautionary
management). If the recoveries work, the Logs are clean. If not…