
Oracle ORA-00600 [4000] ORA-600 [4000] “trying to get dba of undo segment header block from usn”


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

Format: ORA-600 [4000] [a]

VERSIONS:
version 6.0 to 9.2

DESCRIPTION:
This has the potential to be a very serious error.

It means that Oracle has tried to find an undo segment number in the dictionary cache and failed.

ARGUMENTS:
Arg [a] Undo segment number

FUNCTIONALITY:
KERNEL TRANSACTION UNDO

IMPACT:
INSTANCE FAILURE – Instance will not restart
STATEMENT FAILURE

SUGGESTIONS:

As per Note 1371820.8, this can be seen when executing DML on tables residing in tablespaces transported from another database.

It is fixed in 8.1.7.4, 9.0.1.4 and 9.2.0.1. The workaround is to create more rollback segments in the target database until the highest rollback segment number (select max(US#) from sys.undo$;) is at least as high as the equivalent max(US#) in the source database.
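
A minimal SQL sketch of that comparison (the rollback segment and tablespace names in the CREATE statements are placeholders only):

-- Run on both the source and the target database and compare the results
SQL> select max(US#) from sys.undo$;

-- If the target value is lower, add rollback segments in the target until it catches up, e.g.:
SQL> create rollback segment rbs_extra01 tablespace rbs_ts;
SQL> create rollback segment rbs_extra02 tablespace rbs_ts;
SQL> alter rollback segment rbs_extra01 online;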

It has also been seen where memory has been corrupted so try shutting down and restarting the instance.

Known Bugs

NB Bug Fixed Description
*  9145541   11.1.0.7.4, 11.2.0.1.2, 11.2.0.2, 12.1.0.0   OERI[25027]/OERI[4097]/OERI[4000]/ORA-1555 in plugged datafile after CREATE CONTROLFILE in 11g
+  10425010  11.2.0.3, 12.1                               Stale data blocks may be returned by Exadata FlashCache
   12353983  -                                            ORA-600 [4000] with XA in RAC
   7687856   11.2.0.1                                     ORA-600 [4000] from DML on transported ASSM tablespace
   2917441   11.1.0.6                                     OERI [4000] during startup
   3115733   9.2.0.5, 10.1.0.2                            OERI[4000] / index corruption can occur during index coalesce
   2959556   9.2.0.5, 10.1.0.2                            STARTUP after an ORA-701 fails with OERI[4000]
   1371820   8.1.7.4, 9.0.1.4, 9.2.0.1                    OERI:4506 / OERI:4000 possible against transported tablespace
+  434596    7.3.4.2, 8.0.3.0                             ORA-600[4000] from altering storage of BOOTSTRAP$

Bug 1362499
ORA-600 [4000] after migrating 7.3.4.3 to 8.0.6.1 on HP-UX 32-bit. Specific to HP-UX; fixed in a one-off patch.

Historic info on the Oracle 7.3.x issues re unlimited extents and bootstrap$

In 7.3.4, due to Bug:434596, this error can result from altering the SYS.BOOTSTRAP$ table.

When a SHUTDOWN command follows this, the database will not start up again. Example: any of the following modifications of SYS.BOOTSTRAP$ will cause this error:

ALTER TABLE BOOTSTRAP$ STORAGE (MAXEXTENTS UNLIMITED);
ALTER TABLE BOOTSTRAP$ STORAGE (NEXT 1024);
ALTER TABLE SYS.BOOTSTRAP$ STORAGE (MAXEXTENTS UNLIMITED);
ALTER TABLE sys.BOOTSTRAP$ STORAGE (MAXEXTENTS UNLIMITED);

A lock byte is now set on the SYS.BOOTSTRAP$ segment header and following shutdown the database will not start.

A select from bootstrap$ before shutdown will clean out the lock on
the SYS.BOOTSTRAP$ segment header and prevent the errors from occurring. Example: issue the following BEFORE shutdown:

SQL> select count(*) from sys.bootstrap$;
Get a backup history of the database(s) and the exact sequence of steps performed. There are two possible options:
a) Go back to a backup taken before the storage clause on BOOTSTRAP$ was changed

b) Oracle Support may be able to patch bootstrap$. See Note:43132.1

Obviously, option a) is always the way to go if at all possible.

Articles:
ALERT about changing MAXEXTENTS to UNLIMITED Note:50380.1

Another cause of an ORA-600 [4000] is a block SCN that is ahead of the database SCN. In that case the block with the high SCN may be printed in the trace file.

Event ADJUST_SCN or the parameter _MINIMUM_GIGA_SCN (Note:552438.1) can be used to bump the SCN.


 


Top Internal Errors – Oracle Server Release 8.1.7


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

ORA-600 [ksmals]
Possible bugs: Fixed in:
Bug:2662683 ORA-7445 & HEAP CORRUPTION WHEN RUNNING APPS PROGRAM THAT
DOES HEAVY INSERTS
9.2.0.4
References:
Note:247822.1 ORA-600 [ksmals]
ORA-600 [4000]
Possible bugs: Fixed in:
Bug:2959556 STARTUP after an ORA-701 fails with OERI[4000] 9.2.0.5,10G
Bug:1371820 OERI:4506 / OERI:4000 possible against transported tablespace 8.1.7.4, 9.0.1.4,
9.2.0.1
References:
Note:47456.1 ORA-600 [4000] “trying to get dba of undo segment header block from
usn”
ORA-600 [4454]
Possible bugs: Fixed in:
Bug:1402161 OERI:4411/OERI:4454 on long running job 8.1.7.3, 9.0.1.3,
9.2.0.1
References:
Note:138836.1 ORA-600 [4454]
ORA-600 [kcbgcur_9]
Possible bugs: Fixed in:
Bug:1804676 OERI:KCBGCUR_9 possible from ONLINE REBUILD INDEX with concurrent
DML
8.1.7.3, 9.0.1.3,
9.2.0.1
References:
Note:114058.1 ORA-600 [kcbgcur_9] “Block class pinning violation”
ORA-600 [729]
Possible bugs: Fixed in:
Bug:931820 Direct load fails in kghxhdr when SESSION_CACHED_CURSORS is larger
than zero
9.0.1
Bug:2177050 OERI:729 space leak possible (with tags “define var info” / “oactoid info”) 8.1.7.4, 9.0.1

 

References:
Note:31056.1 ORA-600 [729] “UGA Space Leak”
ORA-600 [1113]
Possible bugs: Fixed in:
Bug:1307247 OERI:1113 can occur if ANALYZE fails or is interrupted 8.0.6.3, 8.1.7.1, 9.0.1
References:
Note:145367.1 Interrupted or failed ANALYZE might cause instance to hang
Note:41767.1 ORA-600 [1113] “Parent SO is free when adding a child”
ORA-600 [1114] / ORA-600 [ksmguard2]
Possible bugs: Fixed in:
Bug:1779978 OERI:1114 / OERI:KSMGUARD2 with more than about 536 sessions 8.1.7.2, 9.0.1
References:
Note:153041.1 ORA-600 [1114] / ORA-600 [KSMGUARD2] on 8.1.7.1.x with Large
Number of Sessions
ORA-600 [4820]
Possible bugs: Fixed in:
Bug:1951929 ORA-7445 in KQRGCU/kqrpfr/kqrpre possible 8.1.7.3, 9.0.1.2, 9.2
References:
ORA-600 [12261]
Possible bugs: Fixed in:
Bug:912223 OERI:12261 / dump in OPIPLS using EXECUTE IMMEDIATE with SQL
derived strings
8.1.7.2, 9.0.1
Bug:1661786 OERI:12261 / single byte memory corruption possible for CALL type
triggers
8.1.7.3, 9.0.1
References:
ORA-600 [12333]
Possible bugs: Fixed in:
References:
Note:35928.1 ORA-600 [12333] “Fatal Two-Task Protocol Violation”
ORA-600 [16224]
Possible bugs: Fixed in:
Bug:1651530 ORA-600 [16224] [], THEN SMON DIES AND KILLS INSTANCE 8.1.7.4, 9.0.1.3
Bug:1310142 SMON CRASHES INSTANCE WITH ERROR ORA-600 [16224] 8.1.6.3
References:
Note:136754.1 Instance crashes with ORA-600 [16224]
ORA-600 [17069]
Possible bugs: Fixed in:

 

References:
Note:39616.1 ORA-600 [17069] “Failed to pin a library cache object after 50 attempts”
ORA-600 [17112]
Possible bugs: Fixed in:
References:
Note:47411.1 ORA-600 [17112] “Internal Heap Error”
ORA-600 [17182]
Possible bugs: Fixed in:
References:
Note:34779.1 ORA-600 [17182] “Heap chunk header BAD MAGIC”
ORA-600 [kcbgcur_2]
Possible bugs: Fixed in:
Bug:1502537 OERI:KCBGCUR_2 possible on DML (stack includes ktsf_rsp1) 8.1.7.1, 9.0.1
References:
Note:145610.1 ORA-600 [kcbgcur_2] after upgrade from Oracle7 to 8.1.x
ORA-600 [kccsbck_first] or ORA-600 [3716] or ORA-600 [4185] (HP Tru64 Only)
Possible bugs: Fixed in:
Bug:1379200 NODE PANIC OR SHUTDOWN CAN CAUSE PARITIONED CLUSTER AND
DATABASE CORRUPTION
8.1.7.1, 9.0.1.0
References:
Note:137322.1 ALERT: Node panic or shutdown can cause partitioned cluster and
database corruption
ORA-600 [kwqitnmptme:read]
Possible bugs: Fixed in:
Bug:1663503 ORA-600 [kwqitnmptme:read] w/MAX_RETRIES and EXPIRATION
(AQ_TM_PROCESSES)
9.0.1
References:

Oracle Force open erroring out with ORA-00704 ORA-00604 ORA-01555


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

Applies to:
Oracle Server – Enterprise Edition – Version 10.2.0.4 and later
Information in this document applies to any platform.

Symptoms

The database has been force opened using the procedure in Document 283945.1.

ALTER DATABASE OPEN RESETLOGS fails with:

ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00604: error occurred at recursive SQL level 1
ORA-01555: snapshot too old: rollback segment number 11 with name “_SYSSMU11$” too small

Tue Jan 17 04:46:17 2012

Error 704 happened during db open, shutting down database

USER: terminating instance due to error 704

Instance terminated by USER, pid = 5496

ORA-1092 signalled during: ALTER DATABASE OPEN RESETLOGS…

Changes

The database has been force opened.

Cause

The database has been forced open, so the datafiles are not all in sync.

Some datafiles contain block SCNs that are higher than the current database SCN.

Solution

If no trace file is available for the ORA-1555, please do the following to generate a trace file for the ORA-704 and ORA-1555:

SQL>Startup mount ;
SQL>Alter session set tracefile_identifier=new1555 ;
Now try open resetlogs

 

SQL>Alter database open resetlogs ;
It will error out with ORA-1092.
The alert log will show the ORA-1555 error message.
Go to the udump or trace directory and list the new trace files:
ls -lrt *new1555*
Case 1: an undo segment name is reported in the alert log
ORA-01555: snapshot too old: rollback segment number 11 with name “_SYSSMU11$” too small
In this case the ORA-1555 was reported against _SYSSMU11$, which is undo segment number 11.
The hex value of 11 is b.
Search the trace file for a table or index block header dump whose transaction layer (ITL) references undo segment 11,
then scroll up to get the buffer header dump of that block.
BH (0xacff8e48) file#: 1 rdba: 0x0040003e (1/62) class: 1 ba: 0xacf66000
set: 3 blksize: 8192 bsi: 0 set-flg: 2 pwbcnt: 0
dbwrid: 0 obj: 18 objn: 18 tsn: 0 afn: 1
hash: [d0e04c98,d0e04c98] lru: [acff8fd8,acff8db8]
ckptq: [NULL] fileq: [NULL] objq: [cc9a3d00,cc9a3d00]
use: [ce699910,ce699910] wait: [NULL]
st: CR md: EXCL tch: 0
cr: [scn: 0x0.4124e1e],[xid: 0x6.0.c28d],[uba: 0x820075.ea1.23],[cls: 0x0.46b5261],[sfl: 0x1]
flags:
Using State Objects
—————————————-
SO: 0xce6998d0, type: 24, owner: 0xd04439e8, flag: INIT/-/-/0x00
(buffer) (CR) PR: 0xd02fa378 FLG: 0x500000
class bit: (nil)
kcbbfbp: [BH: 0xacff8e48, LINK: 0xce699910]
where: kdswh02: kdsgrp, why: 0
buffer tsn: 0 rdba: 0x0040003e (1/62)
scn: 0x0000.046b527a seq: 0x00 flg: 0x00 tail: 0x527a0600
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Dump of memory from 0x00000000ACF66000 to 0x00000000ACF68000
0ACF67FF0 FFFF02C1 01FF8001 02C10280 527A0600 […………..zR]

 

Block header dump: 0x0040003e
Object id on Block? Y
seg/obj: 0x12 csc: 0x00.46b519e itc: 1 flg: – typ: 1 – DATA
fsl: 0 fnx: 0x0 ver: 0x01
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x000b.00b.00000e7a 0x00802042.00db.1a C— 0 scn 0x0000.04624228

 

 

So here we see that the ITL allocated is 0x01.
Transaction identifier –> <undo no>.<slot>.<wrap> –> 0x000b.00b.00000e7a
Undo segment no –> 0x000b –> 11 in decimal.
This block belongs to 0x0040003e (1/62).
Find the SCN of this block from the BH (buffer header) for 0x0040003e (1/62).
In this case it is the one shown above in the trace: scn: 0x0000.046b527a
Now set the _minimum_giga_scn value based on this SCN:
scn: 0x0000.046b527a
Convert 0x0000 to decimal –> 0 (SCN wrap)
Convert 046b527a to decimal –> 74142330 (SCN base)
Combine both values to find the value for _minimum_giga_scn:
74142330 / 1024 / 1024 / 1024 = 0.069050
Add 2G to this value and round it up:
_minimum_giga_scn = 3G
Set this parameter in the pfile along with the other force-open parameters:
SQL> startup mount pfile=<> ;
SQL> recover database using backup controlfile until cancel ;
(enter CANCEL)
SQL> alter database open resetlogs ;
As per the force open steps, perform a complete export, create a new database, and import the data.
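
To double-check the SCN arithmetic above from SQL*Plus (a sketch only; substitute the base value taken from your own trace file):

-- Convert the SCN base 046b527a from hex to decimal (returns 74142330)
SQL> select to_number('046b527a','xxxxxxxx') scn_base from dual;

-- Express it in giga units (2^30), add the 2G margin and round up (returns 3)
SQL> select ceil(to_number('046b527a','xxxxxxxx')/1024/1024/1024 + 2) giga from dual;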

 

 

ORA-01173: data dictionary indicates missing data file from system tablespace


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

ORA-01173: data dictionary indicates missing data file from system tablespace

Cause: Either the database has been recovered to a point in time in the future of the control file or a datafile from the system tablespace was omitted from the create control file command previously issued.
Action: For the former problem you need to recover the database from a more recent control file. For the latter problem, simply recreate the control file checking to be sure that you include all the datafiles in the system tablespace.
Oracle Server – Enterprise Edition – Version 8.1.7.4 to 10.2.0.3 [Release 8.1.7 to 10.2]
Information in this document applies to any platform.

 

Goal

This document presents an option to patch the SYSTEM rollback segment header when errors ORA-600 [4193] / ORA-600 [4194] are produced in the SYSTEM rollback segment.  This situation can prevent the database from being opened.

The supported procedure to fix this problem when the SYSTEM rollback segment is affected is to perform a point-in-time recovery to before the logical inconsistency.

ORA-600 [4193] and ORA-600 [4194] are normally produced by new transactions, and they happen when there is a mismatch between the undo segment header (info in TRN CTL / FREE BLOCK POOL) and the undo segment block.  When this happens in undo segments other than SYSTEM, the solution is to drop the rollback segment.  Here is a procedure to manually fix these errors when the SYSTEM rollback segment is involved.

Fix

Take a backup before applying this procedure.

Using bbed, set ktuxc.ktuxcnfb and ktuxc.ktuxcfbp[0..x].ktufbuba to 0 in the SYSTEM rollback segment header.  That way Oracle will use an empty undo block for the new transaction, avoiding the comparison between the undo segment header and the undo block it points to.

Example:

This is part of a rollback segment header dump

TRN CTL:: seq: 0x00af chd: 0x0036 ctl: 0x002a inc: 0x00000000 nfb: 0x0001
mgc: 0x8002 xts: 0x0068 flg: 0x0001 opt: 2147483646 (0x7ffffffe)
uba: 0x00400006.00af.0f scn: 0x07be.a0bae152
Version: 0x01
FREE BLOCK POOL::
uba: 0x00400006.00af.0f ext: 0x0 spc: 0x13b4
uba: 0x00000000.00a8.0d ext: 0x7 spc: 0x1a2c
uba: 0x00000000.009b.0b ext: 0x3 spc: 0x1c08
uba: 0x00000000.0092.27 ext: 0x3 spc: 0x12d0
uba: 0x00000000.0000.00 ext: 0x0 spc: 0x0

1. With bbed, set the appropriate offset and modify ktuxc.ktuxcnfb to 0x0000.  In the example, nfb: 0x0001.

2. Set the appropriate offsets to modify all the non-null ktuxcfbp[0..x].ktufbuba entries to 0x00000000. In this example only ktuxc.ktuxcfbp[0].ktufbuba has a non-null value, which is 0x00400006.

3. As the block has been modified, set the block checksum to the new value or disable the checksum in the block.

The partial block dump after the modification is:

TRN CTL:: seq: 0x00af chd: 0x0036 ctl: 0x002a inc: 0x00000000 nfb: 0x0000
mgc: 0x8002 xts: 0x0068 flg: 0x0001 opt: 2147483646 (0x7ffffffe)
uba: 0x00400006.00af.0f scn: 0x07be.a0bae152
Version: 0x01
FREE BLOCK POOL::
uba: 0x00000000.00af.0f ext: 0x0 spc: 0x13b4
uba: 0x00000000.00a8.0d ext: 0x7 spc: 0x1a2c
uba: 0x00000000.009b.0b ext: 0x3 spc: 0x1c08
uba: 0x00000000.0092.27 ext: 0x3 spc: 0x12d0
uba: 0x00000000.0000.00 ext: 0x0 spc: 0x0
nfb=ktuxc.ktuxcnfb “number of non-empty slots in free block pool”
ktuxc.ktuxcfbp=free block pool entries

4. OPEN the database and shrink the SYSTEM rollback segment.  This just frees the extents in the segment so that it starts from “scratch”:

alter rollback segment SYSTEM shrink;
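
As a sanity check, the TRN CTL and FREE BLOCK POOL values shown in the dumps above can be obtained with a block dump before and after the bbed change. A sketch, assuming the SYSTEM rollback segment header sits at file 1, block 9 (verify the real file_id/block_id in your own database):

-- Locate the SYSTEM rollback segment header block
SQL> select segment_name, file_id, block_id from dba_rollback_segs
  2  where segment_name = 'SYSTEM';

-- Dump that block to a trace file; the trace contains the TRN CTL / FREE BLOCK POOL sections
SQL> alter system dump datafile 1 block 9;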

 

ORACLE CHECKLIST FOR CORRUPTION AND DATABASE DOWN



If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

STARTUP HANGS 
If the database hangs on startup: 
1.) Instruct the customer to do a STARTUP NOMOUNT (to see if the 
background processes will start). 
2.) Try an ALTER DATABASE MOUNT. 
3.) Try doing some SELECTs from any v$ view. 
4.) If this works, you can do an alter session 
set _trace_enabled=true in the init.ora. 
5.) Then do an ALTER DATABASE OPEN. 
6.) After the db has been hanging for a minute or so, use CTRL/C (depress 
and hold the CTRL key while pressing the 'c' key) to stop the 
process. See if the trace tells you which SQL statement it is 
hanging on (the data dictionary could be corrupt). 
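
A condensed SQL*Plus sketch of the sequence above (the _trace_enabled setting would go into the init.ora before step 5):

SQL> startup nomount                   -- step 1: do the background processes start?
SQL> alter database mount;             -- step 2
SQL> select status from v$instance;    -- step 3: can v$ views be queried?
SQL> alter database open;              -- step 5: if this hangs, CTRL/C after a minute and read the trace
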
TABLESPACE, LOST DATAFILE 
After a tablespace has been created with its datafiles, the datafiles 
must exist for the life of the tablespace unless all objects in the 
tablespace are dropped first. The supported way to recover from a 
lost datafile is to have the customer restore the old datafile from 
an older, cold backup (full backup) or a hot backup (single tablespace 
backup while the database is online). 
If the database is in NOARCHIVELOG mode, you will only succeed in 
recovering the db if the redo to be applied to the corrupt datafile is 
within the range of your online REDO logs. 
If the customer has no backups of the datafile that is corrupt, there 
is a chance the events 10231 and 10233 can be set to skip the corrupted 
blocks so an export can be done. If that doesn't work or the corruption 
is in the datafile header, they will lose their data. 
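
A sketch of how those events are usually set for the exporting session (the levels shown are the commonly used ones; confirm with Oracle Support before setting them):

# init.ora entries
event="10231 trace name context forever, level 10"    # skip corrupt blocks on full table scans
event="10233 trace name context forever, level 10"    # skip corrupt blocks on index-based reads

-- or at session level, for the session that runs the export:
SQL> alter session set events '10231 trace name context forever, level 10';
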
CONTROL FILES 
If you are mirroring control files, and one is bad, delete it and copy 
the good one in its place. 
If you need to create a new control file or change the MAXLOGFILES, 
MAXLOGMEMBERS, MAXDATAFILES, MAXINSTANCES, or MAXLOGHISTORY parameters:

Oracle _OFFLINE_ROLLBACK_SEGMENTS/_CORRUPTED_ROLLBACK_SEGMENTS and AUM Undo Segments


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

PURPOSE
——-
This bulletin explains how to use the known hidden parameters such as
_OFFLINE_ROLLBACK_SEGMENTS and _CORRUPTED_ROLLBACK_SEGMENTS with undo segments
when
–> Automatic Undo Management is active : UNDO_MANAGEMENT=AUTO
–> Corrupted undo information prevents the database from being accessible :
undo segments like _SYSSMUn$ are in NEEDS RECOVERY status
–> Backup of RBS datafiles / archive redo log files are not available and
therefore no recovery is possible
Be aware that the use of _OFFLINE_ROLLBACK_SEGMENTS may lead to the recreation
of the database, depending on whether there were active transactions in the
dropped undo segments. If so, then this may lead to logical corruption, and
hence to the recreation of the database. (Refer Note:106638.1 that explains
how to check the transaction table : you can use the same SELECT statements)
Be aware that the use of _CORRUPTED_ROLLBACK_SEGMENTS requires the recreation
of the database.
SCOPE & APPLICATION
——————-
For all DBAs having to manage the recovery of databases with corrupted undo
segments.
Example of situations
———————
–> Situation 1
===========
After setting an UNDO datafile OFFLINE and shutting the database down
normally, the following errors occur :
SQL> show parameter undo
NAME TYPE VALUE
———————————— ———– ——————————
undo_management string AUTO
undo_retention integer 10800
undo_suppress_errors boolean FALSE
undo_tablespace string UNDOTBS1
SQL> select segment_name , status from dba_rollback_segs;
SEGMENT_NAME STATUS
—————————— —————-
SYSTEM ONLINE
_SYSSMU1$ ONLINE
_SYSSMU2$ ONLINE
_SYSSMU3$ ONLINE

_SYSSMU10$ ONLINE
11 rows selected.
SQL> alter database datafile ‘C:\ORANT\DB1\UNDOTBS01.DBF’ offline;
alter database datafile ‘C:\ORANT\DB1\UNDOTBS01.DBF’ offline
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
In alert.log:
————
alter database datafile ‘C:\ORANT\DB1\UNDOTBS01.DBF’ offline
Thu Mar 07 16:52:55 2002
ORA-376 signalled during: alter database datafile ‘C:\ORANT\DB1\UNDOTBS01.DB…
Thu Mar 07 16:52:55 2002
Errors in file C:\ORANT\admin\DB1\bdump\db1SMON.TRC:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: ‘C:\ORANT\DB1\UNDOTBS01.DBF’

 

In user trace file on NT:
————————
KCRA: start recovery buffer claims
*** 2002-03-07 17:13:32.000
KCRA: buffers claimed = 0/0, eliminated = 0
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: ‘C:\ORANT\DB1\UNDOTBS01.DBF’
In user trace file on Unix:

 

————————–
kssxdl: error deleting SO: 82af3fc0, type: 38, owner: 8320de58, flag: I/-/-/0x00:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: ‘/filer/9.0.2/DB1/undotbs01.dbf’
–> Situation 2
===========
When RBS datafiles are in a RECOVER status, and no backup is available to
recover appropriately, you need to drop the UNDO tablespace.
In alert.log:
————
Successfully onlined Undo Tablespace 1.
Mon May 27 17:17:14 2002
SMON: enabling tx recovery
SMON: about to recover undo segment 1
SMON: mark undo segment 1 as needs recovery
SMON: about to recover undo segment 2
SMON: mark undo segment 2 as needs recovery
SMON: about to recover undo segment 3
SMON: mark undo segment 3 as needs recovery

Errors in file /oracle3/djeunot/DB1/udump/ora_19462.trc:
ORA-00376: file 2 cannot be read at this time
ORA-01110: data file 2: ‘/oracle3/djeunot/DB1/undotbs01.dbf’
Mon May 27 17:17:14 2002
Error 376 happened during db open, shutting down database
USER: terminating instance due to error 376
Instance terminated by USER, pid = 19462
ORA-1092 signalled during: alter database open…
–> Situation 3
===========
The datafile of the undo tablespace is removed.
The database is in NOARCHIVELOG mode.
$ rm undotbs01.dbf
SQL> update x.t set a=1;
update x.t set a=1
*
ERROR at line 1:
ORA-01115: IO error reading block from file 2 (block # 3)
ORA-01110: data file 2: ‘/oracle3/djeunot/DB1/undotbs01.dbf’
ORA-27091: skgfqio: unable to queue I/O
ORA-27072: skgfdisp: I/O error
Additional information: 2
At startup:
SQL> startup pfile=/oracle3/djeunot/DB1/pfile/initDB1.ora
ORACLE instance started.
Total System Global Area 235693108 bytes
Fixed Size 279604 bytes
Variable Size 167772160 bytes
Database Buffers 67108864 bytes
Redo Buffers 532480 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 2 – see DBWR trace file
ORA-01110: data file 2: ‘/oracle3/djeunot/DB1/undotbs01.dbf’
In alert.log
————
Tue May 28 14:53:37 2002
Errors in file /oracle3/djeunot/DB1/bdump/dbw0_23154.trc:
ORA-01157: cannot identify/lock data file 2 – see DBWR trace file
ORA-01110: data file 2: ‘/oracle3/djeunot/DB1/undotbs01.dbf’
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Actions
——-
————————————————
1/ | Set the following parameters in the init.ora |
————————————————
UNDO_MANAGEMENT=MANUAL
_OFFLINE_ROLLBACK_SEGMENTS=(_SYSSMU1$, _SYSSMU2$, _SYSSMU3$, …etc)
or
_CORRUPTED_ROLLBACK_SEGMENTS=(_SYSSMU1$, _SYSSMU2$, _SYSSMU3$, …etc)
Note:
To get the list of the _SYSSMUn undo segments to OFFLINE when the database
is not accessible, you can use the following :
$ strings system01.dbf | grep _SYSSMU | cut -d $ -f 1 | sort -u > listSMU

 

where system01.dbf is the name of the datafile for the SYSTEM tablespace.
** From this list, do not forget to add the trailing $ back to each name, e.g. rename _SYSSMU9 to _SYSSMU9$ **
a/ If you keep UNDO_MANAGEMENT=AUTO, when you want to DROP the UNDO
tablespace, you get the following error:
SQL> drop tablespace undotbs including contents and datafiles;
drop tablespace undotbs including contents and datafiles
*
ERROR at line 1:
ORA-30013: undo tablespace ‘UNDOTBS’ is currently in use
though you may have dropped all undo segments.
b/ Be aware that the names of the undo segments do not start back at
_SYSSMU1$ once the tablespace has been dropped and recreated.
The names take the next sequence numbers: if the undo tablespace dropped
contained _SYSSMU1$ to _SYSSMU10$, then the creation of the new undo
tablespace generates undo segments whose names start at _SYSSMU11$.
c/ To know which one of the parameters _OFFLINE_ROLLBACK_SEGMENTS or
_CORRUPTED_ROLLBACK_SEGMENTS to use, refer to
Note:106638.1 Handling Rollback Segment Corruptions in Oracle7.3 to 8.1.7
d/ Dumping the transaction table and undo for active transactions from undo
segments such as “_SYSSMUn$” is strictly the same procedure as defined in
the referenced note above.
———————
2/ | Open the database |
———————
a/ If the RBS datafiles are not missing, the database may open:
———————————————————–
SQL> startup
ORACLE instance started.
Total System Global Area 118560016 bytes
Fixed Size 451856 bytes
Variable Size 100663296 bytes
Database Buffers 16777216 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.
SQL> select name, status, enabled, checkpoint_change# from v$datafile;
NAME STATUS ENABLED CHECKPOINT_CHANGE#
———————————- ——- ———- ——————
/oracle3/djeunot/DB1/system01.dbf SYSTEM READ WRITE 62315
/oracle3/djeunot/DB1/undotbs01.dbf RECOVER READ WRITE 62241
/oracle3/djeunot/DB1/users01.dbf ONLINE READ WRITE 62315
SQL> select SEGMENT_NAME, STATUS from dba_rollback_segs;
SEGMENT_NAME STATUS
———— —————-
SYSTEM ONLINE
_SYSSMU2$ NEEDS RECOVERY
_SYSSMU3$ NEEDS RECOVERY

b/ If the RBS datafiles are missing, the database does not open:
————————————————————
Use the _OFFLINE_ROLLBACK_SEGMENTS parameter to allow the undo segments to
be dropped once the database opened.
SQL> startup pfile=/oracle3/djeunot/DB1/pfile/initDB1.ora
ORACLE instance started.
Total System Global Area 235693108 bytes
Fixed Size 279604 bytes
Variable Size 167772160 bytes
Database Buffers 67108864 bytes
Redo Buffers 532480 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 2 – see DBWR trace file
ORA-01110: data file 2: ‘/oracle3/djeunot/DB1/undotbs01.dbf’
SQL> select * from v$recover_file;
FILE# ONLINE ONLINE_STATUS ERROR CHANGE# TIME
———- ——- ————- ————— ——- —-
2 ONLINE ONLINE FILE NOT FOUND 0

 

 

Before opening the database, OFFLINE DROP the missing datafiles :
SQL> alter database datafile ‘/oracle3/djeunot/DB1/undotbs01.dbf’
2 offline drop;
Database altered.
SQL> alter database open;
Database altered.
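
Before moving on to step 3, a quick check of which undo segments are still offline or marked NEEDS RECOVERY confirms that the hidden parameter took effect; a minimal query:

SQL> select segment_name, status from dba_rollback_segs
  2  where segment_name like '_SYSSMU%' order by segment_id;
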
—————————————————–
3/ | The Undo Segments need to be individually dropped |
—————————————————–
SQL> drop rollback segment “_SYSSMU1$”;
Rollback segment dropped.
SQL> drop rollback segment “_SYSSMU2$”;
Rollback segment dropped.
…..
If you get the following error:
SQL> drop rollback segment “_SYSSMU11$”;
drop rollback segment “_SYSSMU11$”
*
ERROR at line 1:
ORA-30025: DROP segment ‘_SYSSMU11$’ (in undo tablespace) not allowed
this means that you did not specify the right undo segment name in the
list of the hidden parameter at startup time, and therefore the undo segment
is not offlined. Define the correct list and re-startup the database.
——————————————————————–
4/ | Once the Undo Segments are all dropped, drop the UNDO tablespace |
——————————————————————–
SQL> drop tablespace UNDOTBS including contents and datafiles;
Tablespace dropped.
If you get the following error:
SQL> drop tablespace undotbs including contents and datafiles;
drop tablespace undotbs including contents and datafiles
*
ERROR at line 1:
ORA-01548: active rollback segment ‘_SYSSMU11$’ found, terminate dropping
tablespace
this means that undo segments still exist in the undo tablespace to be dropped.
——————————–
5/ | Recreate the undo tablespace |
——————————–
SQL> create undo tablespace undotbs
2 datafile ‘/DB1/undotbs01.dbf’ size 500k reuse;
Tablespace created.
————————————————–
6/ | Reset the following parameters in the init.ora |
————————————————–
UNDO_MANAGEMENT=AUTO
#_OFFLINE_ROLLBACK_SEGMENTS=(_SYSSMU1$, _SYSSMU2$, _SYSSMU3$, …etc)
or
#_CORRUPTED_ROLLBACK_SEGMENTS=(_SYSSMU1$, _SYSSMU2$, _SYSSMU3$, …etc)

 

 
7/ If you used these hidden ROLLBACK_SEGMENTS parameters, perform a full
export, since the database may be in an inconsistent state.
Then you MUST recreate the database and perform a full import in the case of
the use of _CORRUPTED_ROLLBACK_SEGMENTS .
In the case of _OFFLINE_ROLLBACK_SEGMENTS with active transactions that may
lead to logical corruption, you need to recreate the database and import the
data back. If there were no active transactions, then there is no need to
recreate the database: an export is nevertheless a good backup.

Oracle ORA-600 [kccpb_sanity_check_2] ORA-00600 [kccpb_sanity_check_2]

 

If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

ERROR:
ORA-600 [kccpb_sanity_check_2] [a] [b] [c]

 

VERSIONS:

Versions 10.2 to 11.2
DESCRIPTION:

This internal error is raised when the sequence number (seq#) of the current block of the controlfile is greater than the seq# in the controlfile header. The header value should always be equal to, or greater than the value held in the control file block(s).
This extra check was introduced in Oracle 10gR2 to detect lost writes or stale reads to the header.
ARGUMENTS:
Arg [a] seq# in control block header.
Arg [b] seq# in the control file header.
Arg [c]
FUNCTIONALITY:

Kernel Cache layer Control file component.

IMPACT:

INSTANCE FAILURE
PROCESS FAILURE
POSSIBLE CONTROLFILE CORRUPTION

 

ORA-00600: [kccpb_sanity_check_2] During Instance Startup

 

Symptoms

The database is getting the following errors on Startup:

ORA-00600: internal error code, arguments: [kccpb_sanity_check_2], [3621501], [3621462], [0x000000000]

Changes

In this case, the customer moved the box from one data center to another.

Cause
ORA-600 [kccpb_sanity_check_2] indicates that the seq# of the last read block is higher than the seq# of the control file header block.

 

This is an indication of a lost write of the header block during the commit of the previous controlfile transaction.

Solution

1) restore a backup of a controlfile and recover
OR

2) recreate the controlfile

OR

3) restore the database from last good backup and recover

NOTE: If you do not have any special backup of control file to restore and you are using Multiple Control File copies in your pfile/init.ora/spfile you can attempt to mount the database using each control file one by one. If you are able to mount the database with any of these control file copies you can then issue ‘alter database backup controlfile to trace’ to recreate controlfile.
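
A sketch of that one-control-file-at-a-time approach using a pfile (all file names are placeholders):

# initTEST.ora - list only ONE control file copy at a time
control_files = '/u01/oradata/TEST/control01.ctl'

SQL> startup mount pfile=/tmp/initTEST.ora
SQL> alter database backup controlfile to trace;    -- only if the mount succeeds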

 


 

 

Oracle ORA-600 [kccsbck_first] ORA-00600 [kccsbck_first]


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com



ERROR:              

  Format: ORA-600 [kccsbck_first] [a] [b]
 
VERSIONS:           
  versions 8.1.5 to 10.2
 
DESCRIPTION:

  We receive this error because we are attempting to be the first 
  thread/instance to mount the database and cannot because it appears that 
  at least one other thread has mounted the database already.

  We therefore abort the mount attempt and log this error.
 
ARGUMENTS:          
  Arg [a] thread number which has database mounted
  Arg [b] mount id of the thread
 
FUNCTIONALITY:      
  CONTROL FILE COMPONENT
 
IMPACT:             
  PROCESS FAILURE
  GENERALLY NON CORRUPTIVE - No underlying data corruption. Although see
                             Alert in Note:137322.1 for Tru64
 
SUGGESTIONS: 

  See article: Note:157536.1  ORA-600 [KCCSBCK_FIRST] What to check

  Reference Notes: 
  Note:137322.1  ALERT: Node panic or shutdown can cause partitioned 
                   cluster and database corruption 8.1.5 - 8.1.7 Tru64
  Note:139812.1  ORA-600 [KCCSBCK_FIRST] When starting up second instance
  Note:105904.1  ORA-600 [KCCSBCK_FIRST] After Failed Migration 805/816
       

  Known Issues:
 
 
NB Prob Bug Fixed Description
   II  7360390  10.2.0.4.CRS04, 10.2.0.5, 11.1.0.7.CRS07, 11.2.0.1  Split brain in case of multi-failures (network and VD) / ORA-600 [kccsbck_first]
   II  6117754  10.2.0.5, 11.1.0.6  OERI[kccsbck_rtenq] db instance fail to start after storage cable restore
       3814603  10.1.0.4  OERI[kccsbck_first] from CSS problem with split brain resolution
P* II  2646914  9.2.0.4  Linux: OERI:[KCCSBCK_FIRST] possible on node startup
P*     2695783  9.2.0.3  Win: OERI:[KCCSBCK_FIRST] possible if Oracle & CM restarted

 

PURPOSE

This article helps to resolve problems with the error ORA-600 [kccsbck_first]
after having read Note:139013.1 .

SCOPE

The ORA-600 [kccsbck_first] error occurs when Oracle detects that another instance
has this database already mounted. For some reason, Oracle already sees a thread
with a heartbeat. This could be the expected behaviour if running OPS. In such a
case the parallel_server parameter needs to be set. In cases where Parallel Server
is not linked in, this is not the expected behaviour.

 

In special cases this error can be raised due to one or more corrupt
controlfile(s).

DETAILS

1- YOU TRY TO START THE INSTANCE YOU JUST CREATED
==============================================

sqlplus> startup

ORA-600 [KCCSBCK_FIRST]

The error is recorded only on the screen and no errors are reported in the alert.log.

Several other instances run fine on the box. None of them has a similar db_name. They all run different Oracle versions.

Solution 1:
———–

Make sure that the initSID.ora soft link points to the correct release location.

Explanation 1
————-

The initSID.ora in $ORACLE_HOME/dbs is pointing to a higher release of Oracle.
E.g., init.ora points to 11.2.0.2 instead of 11.1.0.7. The database and software versions need to be synchronized.

Refer also to : Note 730108.1 One Instance Of Two Node RAC Fails to Start with ORA-600 [kccsbck_first]

2- YOU INSTALLED HA/CMP SOFTWARE
=============================

List all cluster nodes with the following:

$ $ORACLE_HOME/bin/lsnodes

The following verification doesn’t show any error:

$ /usr/sbin/cluster/diag/clverify

Check HACMP interconnect network adapter configuration with the following:

$ /usr/sbin/cluster/utilities/cllsif

Adapter Type Network Net Type Attribute Node IP Address
pfpdb3 service pfpdb3 ether private pfpdb3 11.2.18.24
pfpdb4 service pfpdb4 ether private pfpdb4 11.2.18.3

The network parameter doesn’t match. It has to be identical for both adapters.

cllsif on a working configuration should look like this:

Adapter Type Network Net Type Attribute Node IP Address
pfpdb3 service pfpdb ether private pfpdb3 11.2.18.24
pfpdb4 service pfpdb ether private pfpdb4 11.2.18.3

Solution 2:
———–

Please change the HACMP interconnect network adapter configuration.

3- YOU ARE RUNNING ORACLE ON AN NT CLUSTER
=======================================

You encounter one of the following errors:

ORA-00600: internal error code, arguments: [kccsbck_first],[1],[number]

– OR –

ORA-00600: internal error code, arguments: [KSIRES_1],[KJUSERSTAT_not attached]

The OPS database had been running for some time with no problems; therefore,
cluster and database configuration issues can be ruled out.

Rebooting the node itself also does not clear the problem.

Solution 3:
———–

Reboot the entire NT cluster.

Explanation 3:
————–

When the primary instance mounts the database, a lock is enabled that will
prevent other instances from mounting the database in exclusive mode. If there
is a problem with the status of this lock, Oracle will return either of these
errors until the entire cluster is rebooted and the locks are reinitialized.

4- YOU ARE MOUNTING SECOND INSTANCE WHEN OTHER INSTANCE IS RUNNING
===========================================================

Restarting an instance while another instance is running fails.
Executing the following sql:

Alter database mount

you receive the following error code:

ORA-00600 [KCCSBCK_FIRST]

with stack: ksedmp ksfdmp kgesinv ksesin kccsbck kccocf kcfcmb kcfmdb

Explanation 4:
————–

See Bug:2646914 Linux: OERI:[KCCSBCK_FIRST] possible on node startup

5- CHECK THE PARAMETERS
=======================

You encounter these 2 errors:

ORA-00600: internal error code, arguments: [kccsbck_first],[1],[number]

– AND –

ORA-00439 “feature not enabled: %s”

Solution 5:
———–

Please check the “init.ora” to verify that the “parallel_server” option is not
set. Setting the parameter “Parallel_Server” to true in the “init.ora” of both
instances yields these errors.

You need to make sure you can start up all your Parallel Server instances in
shared mode successfully.

Explanation 5:
————–

The parameter “PARALLEL_SERVER” was introduced in 8.x. When this
parameter is set to TRUE, then the instance will always come up in shared
mode. In RAC the parameter CLUSTER_DATABASE must be set to TRUE
to allow the instances to come up in shared mode.

When “parallel_server=false” or “cluster_database=false”, or they are not set in
“init.ora” or spfile, the instance will always startup in exclusive mode. The first
instance will start up successfully, but the second or subsequent OPS/RAC instances
will fail. Make sure you can start up all your Parallel Server instances in shared mode
successfully.

6- ORA-600 [kccsbck_rtenq] TRYING TO START AN ORACLE PARALLEL SERVER DATABASE
===========================================================

ORA-600 [kccsbck_rtenq]

From the alert.log:

Mon Jan 31 08:48:41 2000
Errors in file /u01/app/oracle/admin/nps3/udump/ora_6676.trc:
ORA-00600: internal error code, arguments: [kccsbck _rtenq], [1],
[3775228464], [], [], [], [], []

When trying to start the second node in cluster, you encounter this
error:

ORA-600 [kccsbck_first]

Solution 6:
———–

Ensure the ‘oracle’ binary is the same across all nodes of the OPS cluster.
Specifically, check that the GROUPS are the same on each node.
For example:

Node jag2:
% ls -l oracle
oracle backup 28262400, Jan 31 1:15

Node jag1:
% ls -l oracle
oracle backup 28262400, Jan 31 1 :26

Logged in as the ‘oracle’ software owner…

Node jag1:
%id uid=1001, gid=13, groups=101 dba
Node jag2:
%id uid=1001, gid=13, groups =15 users, 101

Note that the primary GROUPS displayed for the oracle user are not the same
on each node of the cluster. Correct this and restart the OGMS to correct
the problem.

Explanation 6:
————–

It is assumed that the lock management/node monitor divides up the lock domain
by unix group id. Instances with the same dbname should belong to the same
lock domain, therefore the user which starts the instance must belong to
the same groups.

7- ON STARTUP AFTER DATABASE CRASHED
==============================

You are attempting to start your database after it crashed, and are
getting the following errors on startup mount:

skgm warning: Not enough physical memory for SHM_SHARE_MMU segment of size 000000000795a000

ORA-00600: internal error code, arguments: [kccsbck_first], [1], [3141290959]

Solution 7:
———–

– check if background processes for this SID are still running and kill them
with the unix kill command.

– check also if shared memory segments still exist for this instance and
remove them.

See Note:68281.1
and
Note:123322.1 SYSRESV Utility for instruction

– check also if the “sgadefSID” file exists in the “$ORACLE_HOME/dbs”
directory for the SID and remove it.

– check if OPS is linked in:
$ cd $ORACLE_HOME/rdbms/lib
$ ar tv libknlopt* | grep kcs
$ kcsm.o => OPS is linked in
$ ksnkcs.o => OPS is not linked in

Explanation 7:
————–

In most cases when a shutdown abort is issued for an instance, the background
processes will die. In this case they did not. There was not enough information
to determine why the database crashed and the Oracle background processes
continued to run. Other things to check for, in this case, are shared memory
segments that still exist for the instance that crashed, and the “sgadefSID”
file existence in the “$ORACLE_HOME/dbs” directory for the SID that is receiving
the error.

See also ORA-600[KCCSBCK_FIRST]: ON STARTUP AFTER DATABASE CRASHED Note 1074067.6

8- ORA-600 [kccsbck_rtenq] DURING INSTANCE STARTUP OF AN INSTANCE ON RAC DATABASE
==============================================================

Refer to the following document for more details:
Startup (mount) of 2nd RAC instance fails with ORA-00600 [kccsbck_first]  Note 395156.1

Solution 8
—————

Make sure ‘db_unique_name’ is the same for all RAC instances using this database.

9- ON STARTUP WHEN DATABASE IS RESIDING ON NFS
==========================================

You are using NFS for datafile storage, without Real Application Clusters (RAC), and the mount point with the datafiles is using the ‘nolock’ NFS mount option. Then 2 nodes accidentally open the same database.
Problem occurs either at database startup or corruptions occur while it is up and will need recovery.

Solution 9
————–

Clear Stuck NFS Locks on NetApp Filer(s) .

For details see NetApp: Using ‘nolock’ NFS Mount Option with non-RAC Systems Results in Database Corruption  Note 430920.1

10- CORRUPT CONTROLFILE(S)
==========================

If the instance is setup with multiple control files check if the instance will start with any of the control files, one at a time. To do so edit the control_files parameter to point to one control file at a time and check if the instance will start.

If the instance starts then shut it down and replace the bad control files with a copy of this one. Then adjust the control_files parameter back to its original value and restart the instance.

Refer also to
Ora-00600 [kccsbck_first] Error Occuring On Alter Database Mount Exclusive Command Note 291684.1

11-ON STARTUP AFTER AN RMAN RESTORE

==============================

A restore (RMAN or other) has occurred … and now the controlfiles are in a new location

Upon attempted startup of the first instance in the cluster an ORA-600 [kccsbck_first] is signaled.

Explanation 11

============

The controlfiles no longer have the same name as the CONTROL_FILES entry in the parameter file (PFILE or SPFILE)

EXAMPLE:

FILE: initORCL.ora

*.control_files=’+DATADG/ORCL/controlfile/current.1710.827252719′,’+RECODG/ORCL/controlfile/current.13011.827252719′

FILE: RMAN_restore_output.txt

output file name=+DATADG/ORCL/controlfile/current.1849.828899339

output file name=+RECODG/ORCL/controlfile/current.18173.828899341

Solution 11

—————

Modify the parameter file so that the CONTROL_FILES parameter points to the location of the current control files

 


Oracle ORA-600 [qertbfetchbyrowid] ORA-00600 [qertbfetchbyrowid]


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com


ERROR:

  Format: ORA-600 [qertbfetchbyrowid]
VERSIONS:
  versions 10.1 and above



SUGGESTIONS:

  If the Known Issues section below does not help in terms of identifying
  a solution, please submit the trace files and alert.log to Oracle 
  Support Services for further analysis.

  Known Issues:


 

 

NB Prob Bug Fixed Description
 II12821418 11.2.0.3.8, 11.2.0.3.BP18, 11.2.0.4, 12.1.0.1 Direct NFS appears to be sending zero length windows to storage device. It may also cause Lost Writes
 III10633840 11.2.0.2.7, 11.2.0.2.BP17, 11.2.0.3, 12.1.0.1 ORA-1502 on insert statement on INTERVAL partitioned table. ORA-8102 / ORA-1499 Index inconsistency
 II10245259 11.2.0.2.BP03, 11.2.0.3, 12.1.0.1 PARALLEL INSERT with +NOAPPEND hint or if PARALLEL INSERT plan is executed in SERIAL corrupts index and causes wrong results
+II10209232 11.1.0.7.7, 11.2.0.1.BP08, 11.2.0.2.1, 11.2.0.2.BP02, 11.2.0.2.GIBUNDLE01, 11.2.0.3, 12.1.0.1 ORA-1578 / ORA-600 [3020] Corruption. Misplaced Blocks and Lost Write in ASM
+II9734539 11.2.0.2, 12.1.0.1 ORA-8102 / ORA-1499 corrupt index after update/merge using QUERY REWRITE
+III9469117 10.2.0.5.4, 11.2.0.1.BP04, 11.2.0.2, 12.1.0.1 Corrupt index after PDML executed in serial. Wrong results. OERI[kdsgrp1]/ORA-1499 by analyze
+II9231605 11.1.0.7.4, 11.2.0.1.3, 11.2.0.1.BP02, 11.2.0.2, 12.1.0.1 Block corruption with missing row on a compressed table after DELETE
+II8951812 11.2.0.2, 12.1.0.1 Corrupt index by rebuild online. Possible OERI [kddummy_blkchk] by SMON
EII8720802 10.2.0.5, 11.2.0.1.BP07, 11.2.0.2, 12.1.0.1 Add check for row piece pointing to itself (db_block_checking,dbv,rman,analyze)
PII8635179 10.2.0.5, 11.2.0.2, 12.1.0.1 Solaris: directio may be disabled for RAC file access. Corruption / Lost Write
+II8597106 11.2.0.1.BP06, 11.2.0.2, 12.1.0.1 Lost Write in ASM when normal redundancy is used
+II8546356 10.2.0.5.1, 11.2.0.1.3, 11.2.0.1.BP07, 11.2.0.2, 12.1.0.1 ORA-8102/ORA-1499/OERI[kdsgrp1] Composite Partitioned Index corruption after rebuild ONLINE in RAC
 II7710827 11.2.0.2, 12.1.0.1 Index rebuild or Merge partition causes wrong results in concurrent reads instead of ORA-8103
 II7705591 10.2.0.5, 11.2.0.1.1, 11.2.0.1.BP04, 11.2.0.2, 12.1.0.1 Corruption with self-referenced row in MSSM tablespace. Wrong Results / OERI[6749] / ORA-8102
 I8588540 11.1.0.7.2, 11.2.0.1 Corruption / ORA-8102 in RAC with loopback DB links between instances
+III7329252 10.2.0.4.4, 10.2.0.5, 11.1.0.7.5, 11.2.0.1 ORA-8102/ORA-1499/OERI[kdsgrp1] Index corruption after rebuild index ONLINE
 II6791996 11.2.0.1 ORA-600 errors for a DELETE with self referencing FK constraint and BITMAP index
 III6404058 10.2.0.5, 11.1.0.7, 11.2.0.1 OERI:12700 OERI:kdsgrp1 OERI:qertbFetchByRowID wrong results from CR rollback of split index leaf
 II6772911 10.2.0.5, 11.1.0.7.3 OERI[12700] OERI[qertbFetchByRowID] OERI[kdsgrp1] due to bad CR rollback of INDEX block
 5621677 10.2.0.4, 11.1.0.6 Logical corruption with PARALLEL update
 II4883635 10.2.0.4, 11.1.0.6 MERGE (with DELETE) can produce wrong results or Logical corruption in chained rows
 4258825 10.1.0.5, 10.2.0.1 R-TREE index may get corrupted (may contain orphan ROWIDs)
 4000840 9.2.0.7, 10.1.0.4, 10.2.0.1 Update of a row with more than 255 columns can cause block corruption

 

 


Oracle ORA-600 [4137] ORA-00600 [4137]”XID in Undo and Redo Does Not Match”


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

Format: ORA-600 [4137]
VERSIONS:
versions 7.0 to 10.1

DESCRIPTION:

While backing out an undo record (i.e. at the time of rollback) we found a transaction id mis-match indicating either a corruption in the rollback segment or corruption in an object which the rollback segment is trying to apply undo records on.
This would indicate a corrupted rollback segment.

FUNCTIONALITY:

Kernel Transaction Undo Recovery

IMPACT:

POSSIBLE PHYSICAL CORRUPTION in Rollback segments

SUGGESTIONS:

Signalled during rollback (also rollback for consistent read). The consistency check that compares the transaction id of the transaction being rolled back against the transaction id in undo block being applied is failing.
A possible cause is a lost write to the undo segment.

The main approach is to identify the file containing the bad undo segment block and treat it as if the file is corrupt. Consult the trace file for this information.

If in archivelog mode, restore the file & roll forward.

If in Noarchivelog mode, restore from a cold backup taken before the error was reported.
Alternatively, you can look at dba_rollback_segs data dictionary view.

If the status column that describes what state the rollback segment is currently in is “needs recovery”, then look up the following article for a possible solution.
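
A minimal query for that check:

SQL> select segment_name, tablespace_name, status
  2  from dba_rollback_segs
  3  where status = 'NEEDS RECOVERY';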

NB Bug Fixed Description
8240762  10.2.0.5, 11.1.0.7.10, 11.2.0.1  Undo corruptions with ORA-600 [4193]/ORA-600 [4194] or ORA-600 [4137] / SMON may spin to recover transaction
671491  8.1.6.0  Rollback Segment corruption possible if RBS has > 32767 extents

ORA-600 [4137] []
Versions: 7.x – 10.x Source: ktur.c
===========================================================================
Meaning:
While backing out an undo record ktuko finds that the transaction
id (txid) in the header of the undo block doesn’t match the
txid in the transaction state object.
—————————————————————————
Argument Description:
No arguments are returned.
—————————————————————————
Diagnosis:
This is a similar kind of error to <OERI:4147>, and basically indicates some kind of corruption with the UNDO block.
The main approach is to identify the file containing the bad RBS block and treat the problem as if this file is corrupt. E.g., if in archivelog mode, restore and roll forward.
At the end of the day, this usually comes down to a lost write to the RBS so it is a corruption. The redo stream should be ok.
In the trace file, the transaction ids that do not match are dumped together with the undo block. In 9i and 10G there is also a redo dump for the block. The redo dump shows the file number.
If there is no redo dump, you can use the uba of the undo block and determine which file to restore and roll forward. See <SupTool:ODBA>.

Search for “buffer tsn” (Oracle8) or “buffer dba” (Oracle7) in the
trace file and find the UNDO block containing the bad transaction ID.
This is the file/block that needs to be recovered.
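
To turn a dba/uba value from the trace into a file and block number, DBMS_UTILITY can be used; a sketch with a hypothetical address of 0x00400006 (take the hex value from your own trace):

-- returns file# = 1, block# = 6 for 0x00400006
SQL> select dbms_utility.data_block_address_file(to_number('00400006','xxxxxxxx')) file#,
  2         dbms_utility.data_block_address_block(to_number('00400006','xxxxxxxx')) block#
  3  from dual;
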
Here is the code that shows you two transaction IDs and which
is which.
10G:
if (!KXIDEQ(xid, &ubh->ktubhxid)) /* make sure the txid matches */
{
ksdwrf(“XID passed in =”);
KXIDDMP(xid);
ksdwrf(“\nXID from Undo block =”);
KXIDDMP(&ubh->ktubhxid);
ksdwrf(“\n”);
/* dump useg header diagnostics */
KTUR_DIAG_DUMP(&udes->ktusdbds);
/* dump undo block diagnostics */
KTUR_DIAG_DUMP(ubdes);
ksesic0(OERI(4137));
}
9i:
if (!KXIDEQ(xid, &ubh->ktubhxid)) /* make sure the txid matches */
{
ksdwrf(“XID passed in =”);
KXIDDMP(xid);
ksdwrf(“\nXID from Undo block =”);
KXIDDMP(&ubh->ktubhxid);
ksdwrf(“\n”);
KCLDLCK(ubdes->kcbdsafn, ubdes->kcbdsrdba, ubdes->kcbdscls);
kcradx(ubdes->kcbdsafn, KTSNINV, ubdes->kcbdsrdba, 0, 0, 3, (char *)0);
ksesic0(OERI(4137));
}
7.3 – 8.1.7
if (!KXIDEQ(xid, &ubh->ktubhxid)) /* make sure the txid matches */
{
ksdwrf(“XID passed in =”);
KXIDDMP(xid);
ksdwrf(“\nXID from Undo block = “);
KXIDDMP(&ubh->ktubhxid);
ksdwrf(“\n”);
ksesic0(OERI(4137));
}
Before 7.3:
The transaction IDs dumped in the file are in this order:
Expected txid
undo txid
if (!KXIDEQ(xid, &ubh->ktubhxid)) /* make sure the txid matches */
{
KXIDDMP(xid);
KXIDDMP(&ubh->ktubhxid);
ksesic0(OERI(4137));
}
Note: It is unlikely that we will be able to ‘repair/trace’ the corrupt
undo block. With this scenario we typically have two options:
o Assuming the redo is good, we can restore the database file
and roll forward.
o As a last resort, use the undocumented parameters (_offline_rollback_segments or
_corrupted_rollback_segments) and rebuild the database.
—————————————————————————
Articles:
Note:106638.1 Handling Rollback Segment Corruptions in Oracle7.3/8


 

OERR: ORA-1578 “ORACLE data block corrupted (file # %s, block # %s)” Master Note


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

APPLIES TO:

Oracle Database – Standard Edition – Version 8.0.6.0 to 12.1.0.1 [Release 8.0.6 to 12.1]
Oracle Database – Enterprise Edition – Version 8.0.6.0 to 12.1.0.1 [Release 8.0.6 to 12.1]
Information in this document applies to any platform.

PURPOSE

This article provides information about error ORA-1578 and possible actions.

SCOPE

This note is intended for general audience as initial starting point for beginning diagnosis of ORA-1578.

DETAILS

Error:  ORA-01578 (ORA-1578)
Text:   ORACLE data block corrupted (file # %s, block # %s)
Cause:  The data block indicated was corrupted, mostly due to software errors.
Action: Try to restore the segment containing the block indicated. This
may involve dropping the segment and recreating it. If there
is a trace file, report the errors in it to your ORACLE
representative.

 

Description

Error ORA-1578 reports a Physical Corruption within a block or a block marked as software corrupt.  Reference Note 840978.1 for Physical Corruption concept.

ORA-1578 – Solution (excludes NOLOGGING case)

  • Main article describing corruption issues in different Oracle areas and Solutions:
Note 28814.1 Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g

Database in ARCHIVELOG mode

    • Repair the Block with RMAN Block Media recovery.  In order to repair a block causing ORA-1578 with Block Media Recovery, the database must be in archivelog mode.
Note 144911.1 RMAN : Block-Level Media Recovery – Concept & Example

 

Note 342972.1 How to perform Block Media Recovery (BMR) when backups are not taken by RMAN

Database in NOARCHIVELOG mode or there is not a valid backup:

    • Identify the segment producing ORA-1578 (a query sketch follows the note references below):
NOTE 819533.1 How to identify the corrupt Object reported by ORA-1578 / RMAN / DBVERIFY

 

NOTE 472231.1 How to identify all the Corrupted Objects in the Database reported by RMAN

NOTE 836658.1 Identify the corruption extension using RMAN/DBV/ANALYZE etc. Main sections in Note 836658.1 to identify corrupt blocks causing ORA-1578 are:

RMAN – Identify Datafile Block Corruptions
DBVerify – Identify Datafile Block Corruptions
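
The query sketch referred to above, mapping the file# and block# from the ORA-1578 message to its owning segment:

SQL> select owner, segment_name, segment_type
  2  from dba_extents
  3  where file_id = &file_number
  4  and &block_number between block_id and block_id + blocks - 1;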

    • For INDEX segments, consider recreating the index.
    • Drop the segment and recover it from a different source, or use the following options to recover the information from the current segment:
    • For TABLES the corrupt block can be skipped using DBMS_REPAIR and a new table can then be created with “Create Table As Select” (see the DBMS_REPAIR sketch after this list):
Note 556733.1 DBMS_REPAIR script
Note 68013.1 DBMS_REPAIR example
    • A Datapump export with ACCESS_METHOD=DIRECT_PATH (default value) may also be used to skip the corrupt block; the table may then be truncated or dropped and re-imported.
    • Another option is to MOVE the table with “ALTER TABLE MOVE &TABLE_NAME”, as the MOVE skips the corrupt blocks causing ORA-1578; it is recommended to take a backup (e.g. a Data Pump export) before moving the table.
    • Reference Note 28814.1 for additional cases.
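
The DBMS_REPAIR sketch referred to in the list above (schema and table names are placeholders; Note 556733.1 and Note 68013.1 remain the reference):

-- flag the table so that scans skip blocks marked software corrupt, then salvage the readable rows
SQL> exec dbms_repair.skip_corrupt_blocks('SCOTT', 'EMP', dbms_repair.table_object, dbms_repair.skip_flag);
SQL> create table scott.emp_salvage as select * from scott.emp;
-- the flag can be cleared afterwards with dbms_repair.noskip_flag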

 

ORA-1578 / ORA-26040 due to NOLOGGING

Error ORA-1578 can also be produced along with error ORA-26040 meaning that the block is corrupt due to a NOLOGGING operation after a recovery.

ORA-1578 / ORA-26040 due to NOLOGGING – SOLUTION

  • Reference the next article to fix error ORA-1578 caused by NOLOGGING:
Note 794505.1 ORA-1578 / ORA-26040 Corrupt blocks by NOLOGGING – Error explanation and solution

ORA-1578 due to incorrect wallet in encrypted database

  • Reference the next article for ORA-1578 caused by incorrect wallet:
Note 1329437.1 ORA-1578 Corrupt Block Found in Encrypted Database

 

Known Corruption issues caused by 3rd party Software Provider

  • Reference the next document for 3rd party known issues causing corruption:
Note 1323649.1 Known Corruption issues caused by 3rd party Software Provider

 

White Paper: Preventing, Detecting, and Repairing Block Corruption: Oracle Database 11g

Oracle Maximum Availability Architecture White Paper

 


 
 
NB Prob Bug Fixed Description
 III16776922 12.1.0.2, 12.2.0.0 ORA-1578/ORA-600 block corruption messages on the temporary data blocks
EI22228324 12.2.0.0 Enhancement to Avoid Block Memory Corruption being propagated to Disk by Direct Path (prevents future ORA-1578 / adds ORA-600[kcblco_2] )
 II20437153 12.2.0.0 Unnecessary incident files after ORA-603 following CTRL-C or session kill on Global Temporary Tables
+20144308 12.2.0.0 ORA-27086 or ORA-1182 RMAN May Overwrite a SOURCE Database File during TTS, TSPITR, etc when OMF is used in SOURCE. ORA-1578 ORA-1122 in SOURCE afterwards
 II18323690 12.1.0.2, 12.2.0.0 ORA-600 [kcbz_blk_decrypt_failed] [2001] / ORA-1578. Logical Corrupt undo block code 2001 when decryption with incorrect wallet. Error message changed
 II18252487 12.1.0.2, 12.2.0.0 ORA-1578 for an encrypted block (TDE) after master REKEY of incorrect wallet. Error message changed
EI17511071 12.1.0.2, 12.2.0.0 Datapump expdp silently skips corrupt blocks that produce ORA-1578 – This fix prints a warning message in the export log when the corrupt block is the first block
 II17210525 12.2.0.0 ORA-1 on SYS.I_PLSCOPE_SIG_IDENTIFIER$ / ORA-600 [kqlidchg0] / ORA-1578 in SYSTEM or SYSAUX Tablespaces
 IIII17437634 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4.2, 11.2.0.4.BP03, 12.1.0.1.3, 12.1.0.2 ORA-1578 or ORA-600 [6856] transient in-memory corruption on TEMP segment during transaction recovery / ROLLBACK (eg: after Ctrl-C)
 I20658524  A query using direct read may fail with ORA-1578 ORA-26040 due to former corrupt block version
 II14828059 11.2.0.3.BP15, 11.2.0.4, 12.1.0.1 Wrong Results / False ORA-1578 in SuperCluster
 III13804294 11.2.0.3.4, 11.2.0.3.BP07, 11.2.0.4, 12.1.0.1 Internal errors, corruptions, using pipelined function whose rows raise exceptions
PI12330911 12.1.0.1 EXADATA LSI firmware for lost writes
 I11707302 11.2.0.2.3, 11.2.0.2.BP06, 11.2.0.3, 12.1.0.1 Corruption from ASM crash during rebalance diskgroup. Misplaced Blocks
 II11659016 11.2.0.3, 12.1.0.1 ORA-1578 against recently create tablespace that once was encrypted
+II10209232 11.1.0.7.7, 11.2.0.1.BP08, 11.2.0.2.1, 11.2.0.2.BP02, 11.2.0.2.GIBUNDLE01, 11.2.0.3, 12.1.0.1 ORA-1578 / ORA-600 [3020] Corruption. Misplaced Blocks and Lost Write in ASM
*III10205230 11.2.0.1.6, 11.2.0.1.BP09, 11.2.0.2.2, 11.2.0.2.BP04, 11.2.0.3, 12.1.0.1 ORA-600 / corruption possible during shutdown in RAC
 I9965085 11.2.0.3, 12.1.0.1 ORA-1578 / ORA-8103 Temporary table block corruption / space wastage from PDML – superseded
 III9739664 11.2.0.2, 12.1.0.1 ORA-1578 / ORA-26040 MANUAL RECOVER marks block as corrupt NOLOGGING in even if LOGGING is enabled
+III9724970 11.2.0.1.BP08, 11.2.0.2.2, 11.2.0.2.BP02, 11.2.0.3, 12.1.0.1 Block Corruption with PDML UPDATE. ORA-600 [4511] OERI[kdblkcheckerror] by block check
 II9407198 11.2.0.3, 12.1.0.1 “LOG ERRORS INTO” can cause ORA-600 [kcb***] or hang scenarios
*II9406607 11.2.0.1.3, 11.2.0.1.BP06, 11.2.0.2, 12.1.0.1 Corrupt blocks in 11.2 in table with unique key. OERI[kdBlkCheckError] by block check
*III8943287 11.2.0.2, 12.1.0.1 ORA-1578 corrupt block with AUTH SQL*Net strings
*III8898852 11.1.0.7.2, 11.2.0.1.1, 11.2.0.1.BP04, 11.2.0.2, 12.1.0.1 ORA-1578 Blocks misplaced in ASM when file created with compatible.asm < 11 and resized
 III8885304 11.2.0.2, 12.1.0.1 ORA-7445 [ktu_format_nr] during RMAN CONVERT or Corrupt fractured block of UNDO tablespace datafile
*III8768374 10.2.0.5, 11.1.0.7.8, 11.2.0.1.BP11, 11.2.0.2, 12.1.0.1 RFS in Standby with a wrong location for archived log corrupting/overwriting database files when max_connections > 1
EII8760225 11.2.0.2, 12.1.0.1 Auto Block Media Recovery reports ORA-1578 on first query
 II8731617 11.2.0.3, 12.1.0.1 ORA-1578 from DESCRIBE or CTAS even if table not accessed / ORA-959 from DBMS_STATS
EII8720802 10.2.0.5, 11.2.0.1.BP07, 11.2.0.2, 12.1.0.1 Add check for row piece pointing to itself (db_block_checking,dbv,rman,analyze)
EII8493978 11.2.0.2, 12.1.0.1 Reserve file descriptors for datafile access
 II10025963 11.2.0.1.BP09, 11.2.0.2 Block corruption of LOB blocks with checksum value but block has checksum disabled
 II8714541 11.2.0.2 ORA-1578 Corrupt Block in ASM with 0xbadfda7a after ASM block repair due to disk read error when ASM mirror is used
 I13101288  ORA-600, corruption or check errors dropping a column in a OLTP compressed table
+8354682 11.2.0.1 ORA-1578 – Blocks can be misplaced in ASM when there is IO error and AU > 1MB
+III8339404 10.2.0.5, 11.1.0.7.1, 11.2.0.1 ORA-1578 – Blocks can be misplaced in ASM during a REBALANCE
 8227257 11.2.0.1 ORA-1578 corruption found after media recovery on encrypted datafile
EII7396077 10.2.0.5, 11.2.0.1 RMAN does not differentiate NOLOGGING corrupt blocks that produce ORA-1578/ORA-26040
 6471351 10.2.0.5, 11.1.0.7, 11.2.0.1 ORA-1578 / ORA-26040 due to NOLOGGING after recovery despite of FORCE LOGGING
 II6674196 10.2.0.4, 10.2.0.5, 11.1.0.6 OERI / buffer cache corruption using ASM, OCFS or any ksfd client like ODM
 5515492 10.2.0.3, 11.1.0.6 ORA-1578 corruption with Block Misplaced during ASM rebalance after IO error
E5031712 10.2.0.4, 11.1.0.6 DBV enhanced to report NOLOGGING corrupt blocks with DBV-201 instead of DBV-200
+4724358 11.1.0.6 ORA-27045 ORA-1578 ORA-27047 corruption caused by DBMS_LDAP
 4684074 10.2.0.2, 11.1.0.6 OERI:510 / block corruption (ORA-1578) with DB_BLOCK_CHECKING
 4655520 10.2.0.3, 11.1.0.6, 9.2.0.8 Block corrupted during write not noticed
 4411228 9.2.0.8, 10.2.0.3, 11.1.0.6 ORA-1578 Block misplaced with mixture of file system and RAW files
 II4344935 10.2.0.4, 11.1.0.6 OERI from DML on TEMPORARY TABLE after autonomous TRUNCATE
 II7381632 11.1.0.6 ORA-1578 Free corrupt blocks may not be reformatted when Flashback is enabled
 8976928 10.2.0.5 ORA-1578 caused by a former free corrupt block and remains unformatted
 I8684999 10.2.0.5 ORA-1578 caused by a former free corrupt block and remains unformatted
+3544995 9.2.0.6, 10.1.0.3, 10.2.0.1 LOB segments with “CACHE READS” generate no REDO even with the logging option
+1281962 9.2.0.1 Media recovery after ORA-1578 on rollback can cause logical inconsistency
 589855 7.3.3.6, 7.3.4.1 ORA:1578 or ORA:8103 selecting invalid ROWID
 406863 7.3.3.4, 7.3.4.0, 8.0.3.0 ORA-1578 using PQ with heavy simultaneous INSERTS
P707304 7.3.4.4 AIX: Resizing RAW datafile can corrupt a DB block
 603502 7.3.4.3, 8.0.4.4, 8.0.5.0 Possible Corruption if a session with LOOPBACK DB Links aborts.

 

Oracle REDO LOG CORRUPTION – DROPPING REDO LOGS NOT POSSIBLE – CLEAR LOGFILE


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

APPLIES TO:

Oracle Database – Enterprise Edition – Version 9.0.1.0 and later
Information in this document applies to any platform.

SYMPTOMS

 

Redo log corruption errors in one of the redo log files while the database is open.

The redo log corruption could be any of these errors:

ORA-16038 log %s sequence# %s cannot be archived
ORA-354 corrupt redo log block header
ORA-353 log corruption near block <num> change <str >time <str>
ORA-367 checksum error in log file header
ORA-368 checksum error in redo log block

Dropping the redo log may not be possible, as it may be needed for instance recovery.

The online redo logs may not be dropped if:

There are only two log groups
The corrupt redo log file belongs to the current group

CAUSE

There are many possible reasons why a redo log file can become corrupted.

SOLUTION

Clear the logfile having the problem:

Syntax:

alter database clear <unarchived> logfile group <integer>;
alter database clear <unarchived> logfile '<filename>';

eg: alter database clear logfile group 1;
alter database clear unarchived logfile group 1;

An online redo log file with status=CURRENT or status=ACTIVE in v$log cannot be cleared; the attempt fails with ORA-1624. In this case the database will have to be restored and recovered to a point in time using the last available archived log file. (A sketch of the checks and the CLEAR LOGFILE commands appears at the end of this section.)

NOTE: the ‘alter database clear logfile’ should be used cautiously. If no archived log was produced, then a complete recovery will not be possible. Perform a backup immediately after completing this command.

Explanation:

If an online redo log file has been corrupted while the database is open, the ‘alter database clear logfile’ command can be used to clear the files without the database having to be shutdown.

The command erases all data in the specified logfile group.

IMPORTANT: It is essential that a new database backup is taken, as the missing archived log sequence will prevent a complete database recovery through that point.
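A minimal sketch of the checks and commands described above, assuming the damaged log belongs to group 2 (hypothetical group number):

SQL> select group#, sequence#, archived, status from v$log;
SQL> select group#, member, status from v$logfile;

If the group is neither CURRENT nor ACTIVE and has already been archived:

SQL> alter database clear logfile group 2;

If the group has not been archived:

SQL> alter database clear unarchived logfile group 2;

Then take a full database backup immediately, as explained above.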

Oracle PRM-DUL Undelete Oracle record/rows


Download PRM-DUL http://www.parnassusdata.com/en

In scenarios without a valid physical or logical backup, when rows are deleted from an Oracle table by mistake, techniques such as Flashback or LogMiner are normally the first choice for recovering the rows, but in many cases even Flashback or LogMiner cannot turn the tide.

For the row pieces in the underlying Oracle data blocks, a DELETE only modifies the row flag and marks the rows as deleted. Records from subsequent INSERTs are allowed to overwrite the space of these deleted rows and destroy their structure. In other words, if no further operations have been performed on the table after the DELETE, the complete data can still be read by scanning the blocks directly for the records marked as deleted.

 

In short, whether the deleted data can be recovered depends entirely on whether the deleted rows in the Oracle blocks on disk have since been overwritten or cleared.

 

As long as they have not been cleared, PRM-DUL can attempt to recover the data, and the specific steps differ little from the ordinary data-dictionary mode.

 

Start up PRM-DUL and click the Recovery Wizard in dictionary mode.
 

prm-undelete1

prm-undelete2

 

 

 

prm-undelete4

prm-undelete5

 

Add all of the Oracle data files; no TEMPFILE, UNDO datafiles, control files, or log files are required.

prm-undelete6

 

Click the Load button; PRM will automatically load the data dictionary (the bootstrap operation).

 

prm-undelete7

Now, on the left side of PRM, you will see the object tree. Select the table you need to recover under the corresponding user, right-click the object, and then select Unload deleted data.

prm-undelete8

 

prm-undelete9

After the recovery of the deleted data completes, PRM-DUL writes the data to the location shown in the File Path field in the picture above; a sample of the recovered data is shown below.

prm-undelete10

 

How to Recover from Loss Of Online Redo Log And ORA-312 ORA-00312 And ORA-313 ORA-00313


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

Oracle Database - Standard Edition - Version 9.0.1.0 and later

Oracle Database - Enterprise Edition - Version 9.0.1.0 and later

Oracle Database - Personal Edition - Version 9.0.1.0 and later

Generic UNIX



***Checked for relevance on 29-Mar-2012*** 

PURPOSE

This article aims at walking you through some of the common recovery scenarios after the loss of an online redo log.
 

SCOPE

All Oracle support Analysts, DBAs and Consultants who have a role to play in recovering an Oracle database
 

DETAILS

Recovering After the Loss of Online Redo Log Files: Scenarios

If a media failure has affected the online redo logs of a database, then the
appropriate recovery procedure depends on the following:

- The configuration of the online redo log: mirrored or non-mirrored
- The type of media failure : temporary or permanent
- The types of online redo log files affected by the media failure: CURRENT, ACTIVE, UNARCHIVED, or INACTIVE

The database was shutdown normally before loss of archivelog file

 

1) Recovering After Losing a Member of a Multiplexed Online Redo Log Group

If the online redo log of a database is multiplexed, and if at least one member of each online redo log group is not affected by the media failure, then the database continues functioning as normal, but error messages are written to the log writer trace file and the alert_SID.log of the database.

ACTION PLAN

If the hardware problem is temporary, then correct it. The log writer process accesses the previously unavailable online redo log files as if the problem never existed.

If the hardware problem is permanent, then drop the damaged member and add a new member by using the following procedure.

To replace a damaged member of a redo log group:

Locate the filename of the damaged member in V$LOGFILE. The status is INVALID if the file is inaccessible:
 

SQL> SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE WHERE STATUS='INVALID';

GROUP#    STATUS       MEMBER
-------   -----------  ---------------------
0002      INVALID      /oracle/oradata/trgt/redo02.log

 
+ Drop the damaged member. 
  For example, to drop member redo02.log from group 2, issue:
 

SQL> ALTER DATABASE DROP LOGFILE MEMBER '/oracle/oradata/trgt/redo02.log';

+ Add a new member to the group. 
  For example, to add redo02b.log to group 2, issue:
 

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/trgt/redo02b.log' TO GROUP 2;

 + If the file you want to add already exists, then it must be the same size as the other group members, and you must specify REUSE. 

  For example:

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/oracle/oradata/trgt/redo02b.log' REUSE TO GROUP 2;

2) Losing an Inactive Online Redo Log Group

If all members of an online redo log group with INACTIVE status are damaged, then the procedure depends on whether you can fix the media problem that damaged the inactive redo log group.

If the failure is temporary, fix the problem; LGWR can reuse the redo log group when required.
If the failure is permanent, the damaged inactive online redo log group will eventually halt normal database operation.

ACTION PLAN

Reinitialize the damaged group manually by issuing the "ALTER DATABASE CLEAR LOGFILE" statement.
You can clear an inactive redo log group when the database is open or closed.
The procedure depends on whether the damaged group has been archived.

To clear an inactive, online redo log group that has been archived:

If the database is shut down, then start a new instance and mount the database:
STARTUP MOUNT

Reinitialize the damaged log group. 
For example, to clear redo log group 2, issue the following statement:

ALTER DATABASE CLEAR LOGFILE GROUP 2;

Clearing Inactive, Not-Yet-Archived Redo

Clearing a not-yet-archived redo log allows it to be reused without archiving it. This action makes backups unusable if they were started before the last change in the log, unless the file was taken 
offline prior to the first change in the log.   Hence, if you need the cleared log file for recovery of a backup, then you cannot recover that backup.  Also, it prevents complete recovery from backups due to the missing log.

To clear an inactive, online redo log group that has not been archived:

If the database is shut down, then start a new instance and mount the database:

STARTUP MOUNT

Clear the log using the UNARCHIVED keyword. For example, to clear log group 2,
issue:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;

If there is an offline datafile that requires the cleared log to bring it online, then the keywords UNRECOVERABLE DATAFILE are required.   The datafile and its entire tablespace have to be dropped because the redo necessary to bring it online is being cleared, and there is no copy of it. 
For example enter:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2 UNRECOVERABLE DATAFILE;

Note: If this is attempted on an ACTIVE or CURRENT logfile, an error will occur.

Immediately back up the whole database including controlfile, so that you have a backup you can use for complete recovery without relying on the cleared log group. 

 

Failure of CLEAR LOGFILE Operation

The ALTER DATABASE CLEAR LOGFILE statement can fail with an I/O error due to media failure when it is not possible to:

* Relocate the redo log file onto alternative media by re-creating it under the currently configured redo log filename
* Reuse the currently configured log filename to re-create the redo log file because the name itself is invalid or unusable (for example, due to media failure)

In these cases, the ALTER DATABASE CLEAR LOGFILE statement (before receiving the I/O error) would  have successfully informed the control file that the log was being cleared and did not require archiving.

The I/O error occurred at the step in which the CLEAR LOGFILE statement attempts to create the new redo log file and write zeros to it. This fact is reflected in V$LOG.CLEARING_CURRENT.

3) Loss of online logs after normal shutdown 

You have a database in archivelog mode, performed a SHUTDOWN IMMEDIATE, and deleted one of the online redo logs; in this case there are only 2 groups with 1 log member in each. When you try to open the database you receive the following errors: 

ora-313 open failed for members of log group 2 of thread 1.
ora-312 online log 2 thread 1 'filename'

It is not possible to recover the missing log, so the following needs to be performed:

Mount the database and check v$log to see if the deleted log is current.
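For example, a minimal query sketch to see which group the missing file belongs to and whether that group is CURRENT:

SQL> select l.group#, l.status, l.sequence#, f.member
       from v$log l, v$logfile f
      where f.group# = l.group#;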

- If the missing log is not current, simply drop the log group (alter database drop logfile group N).
If there are only 2 log groups then it will be necessary to add another group before dropping this one.

- If the missing log is current they should simply perform fake recovery and then open resetlogs

sql> connect / as sysdba
sql> startup mount
sql> recover database until cancel;
(cancel immediately)
sql> alter database open resetlogs;

Be sure the location (directory) for the online log files exists before trying to open the database. If it is not available, create it and rerun the OPEN RESETLOGS; otherwise the open will fail with an error.

 NOTE:  If the current online log, needed for instance recovery, is lost, the database must be restored and recovered through the last available archivelog file.  

Oracle Recreating a missing datafile with no backups


 

If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.2 and later

Information in this document applies to any platform.

***Checked for relevance on 16-Apr-2014***

GOAL

How to recreate a datafile that is missing at the operating system level. Missing/inaccessible files may be reported with one or more of these errors:

ORA-01116: error in opening database file %s
ORA-27041: unable to open file
ORA-01157: cannot identify/lock data file %s - see DBWR trace file
ORA-01119: error in creating database file '%s'

No backup or copy of the datafile is required. We only need the redo logs starting from the time of the datafile creation to the current point in time.

Note: plugged-in datafiles do not apply in this scenario; they need to be plugged in again from their source.

SOLUTION

When a datafile goes missing at the operating system level, you would normally need to restore and recover it from a backup. If you do not have backups of this datafile, but do have redo logs you can still create and recover the datafile. You only need the redo logs starting from the datafile creation time to now.

Prior to 10g, you would use the following SQL command:

SQL> alter database create datafile '<missing name>' as '<missing name>';
SQL> recover datafile '<missing name>';
SQL> alter database datafile '<missing name>' online;

As of 10g, you can also do this in RMAN. 

1) RMAN will create the datafile if there are no backups or copies of this datafile:

 

RMAN> restore datafile <missing file id>;

2) Recover the newly created datafile:

RMAN> recover datafile <missing file id>;

3) Bring it online:

RMAN> sql 'alter database datafile <missing file id> online';

Example:

RMAN> list copy of datafile 6;

specification does not match any datafile copy in the repository

RMAN> list backup of datafile 6;

specification does not match any backup in the repository

RMAN> restore datafile 6;

Starting restore at 14 JUL 10 10:20:02
using channel ORA_DISK_1

creating datafile file number=6 name=/opt/app/oracle/oradata/ORA112/datafile/o1_mf_leng_ts_63t08t64_.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 14 JUL 10 10:20:05

RMAN> recover datafile 6;

Starting recover at 14 JUL 10 10:21:02
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 14 JUL 10 10:21:02

RMAN> sql 'alter database datafile 6 online';

sql statement: alter database datafile 6 online


Oracle Recover A Lost Datafile With No Backup


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

Problem Description: 
==================== 
 
You have inadvertently lost a datafile at the OS level and there are no current 
backups. 
You are in archivelog mode.
You have ALL Archivelogs available since the datafile was created initially (creation date). 


Problem Explanation: 
==================== 

Since there are no backups, the database cannot be opened without this file 
unless the file is offline-dropped and its tablespace dropped.  If this is an important file and 
tablespace, this is not a valid option.

 
Problem References: 
=================== 

Oracle 7 Backup and Recovery Workshop Student Guide, Failure Scenario 14 


Search Words: 
============= 
 
ORA-1110, lost datafile, file not found.



Solution Description: 
===================== 
 
These files have to be recreated and recovered. Do the following (a consolidated SQL*Plus version of the same steps appears after step 10):
 
1) Go to svrmgrl and connect internal.

2) SVRMGR>shutdown immediate. (If this hangs, issue shutdown abort)

3) SVRMGR>startup mount 

4) SVRMGR> select * from v$recover_file;


  SAMPLE:

  FILE#      ONLINE  ERROR              CHANGE#    TIME                
  ---------- ------- ------------------ ---------- --------------------   
  11 OFFLINE FILE NOT FOUND              0 01/01/88 00:00:00   

  (Noting the file number that was reported in the error)


5) SVRMGR> select * from v$datafile where FILE#=11;

  SAMPLE:

  FILE#      STATUS  ENABLED    CHECKPOINT BYTES      CREATE_BYT NAME             
  ---------- ------- ---------- ---------- ---------- ---------- --------
  11 RECOVER READ WRITE 4.9392E+12          0      10240 /tmp/sample.dbf

  (Note the status is RECOVER and the CREATE_BYTE size)
  (Note the NAME)


6) Recreate the datafile.

	SVRMGR> alter database create datafile '/tmp/sample.dbf'
		as '/tmp/sample.dbf' size 10240 reuse;

	(Note that the file "created" and the file created "as" are
	 the same file. The "size" needs to be the same size as it
	 was when it was created.)

7) Check to see that it was successful.

	SVRMGR> select * from v$datafile where FILE#=11;

8) Bring the file online.

	SVRMGR> alter database datafile '/tmp/sample.dbf' online;

9) Recover the datafile.

	SVRMGR> Recover database;

Note: During recovery, all archived redo logs written since the original 
datafile was created must be applied to the new, empty version of the 
lost datafile.


10) Enjoy!!

	SVRMGR> alter database open;
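The same sequence on a release where svrmgrl has been replaced by SQL*Plus; a minimal sketch assuming datafile 11 at '/tmp/sample.dbf' as in the sample output above:

	SQL> connect / as sysdba
	SQL> shutdown immediate
	SQL> startup mount
	SQL> select * from v$recover_file;
	SQL> alter database create datafile '/tmp/sample.dbf'
	       as '/tmp/sample.dbf' size 10240 reuse;
	SQL> recover datafile '/tmp/sample.dbf';
	SQL> alter database datafile '/tmp/sample.dbf' online;
	SQL> alter database open;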


Solution Explanation: 
===================== 
 
Recreating the file and recovering it rewrites it to the OS and brings it up to 
date.   

 
Solution References: 
==================== 

Oracle 7 Backup and Recovery Workshop Student Guide, Failure Scenario 14

 

Oracle How to Recover a Database Having Added a Datafile Since Last Backup


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 



HOW TO RECOVER A DATABASE HAVING ADDED A DATAFILE SINCE THE LAST BACKUP
-----------------------------------------------------------------------

This bulletin outlines the steps required in performing database recovery
having added a datafile to the database since the last backup was taken. 
Section A is applicable to Oracle release 7.x. Section B applies only to
Oracle releases 7.3.x and above.

PLEASE READ THROUGH ALL STEPS AND WARNINGS BEFORE ATTEMPTING TO USE THIS
BULLETIN.


A. Current controlfile, backup of datafile exists (Oracle release 7.x)
   ===================================================================

 A valid (either hot or cold) backup of the datafiles exists, except for the
 datafile created since the backup was taken. The current controlfile exists. 
 The database is in archivelog mode (see note (c) at bottom of page).

 1. Restore ONLY the datafiles (those that have been lost or damaged) from the 
    last hot or cold backup. The current online redo logs and control file(s) 
    must be intact.

 2. Mount the database

 3. Create a new datafile using the 'ALTER DATABASE CREATE DATAFILE' command.

    a. The datafile can be created with the same name as the original
       file. For example,

       SQLDBA> alter database create datafile
            2> '/dev1/oracle/dbs/testtbs.dbf';
       Statement processed.
 
    b. The datafile can be created with a different filename to the original. 
       This option might be chosen if the original file was lost due to disk 
       failure and the failed disk was still unavailable; the new file would 
       then be created on a different device. For example,

       SQLDBA> alter database create datafile
            2> '/dev1/oracle/dbs/testtbs.dbf'
            3> as
            4> '/dev2/oracle/dbs/testtbs.dbf';
       Statement processed.

       The above command creates a new datafile on the dev2 device. The file
       is created using information, stored in the control file, from the 
       original file. The command implicitly renames the filename in the 
       control file.
   
       NOTE: IT IS VERY IMPORTANT TO SPECIFY THE CORRECT FILENAME WHEN
             RECREATING THE LOST DATAFILE. IF YOU SPECIFY AN EXISTING
             ORACLE DATAFILE, THAT DATAFILE WILL BE INITIALISED AND WILL
             ITSELF REQUIRE RECOVERY.

 4. Recover the database.

    SQLDBA> recover database
    ORA-00279: Change 6677 generated at 06/03/97 15:20:24 needed for thread 1
    ORA-00289: Suggestion : /dev1/oracle/dbs/arch/arch000074.arc
    ORA-00280: Change 6677 for thread 1 is in sequence #74
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    
    At this point the recovery procedure will wait for the user to supply the
    information requested regarding the name and location of the archived log
    files. For example, entering AUTO directs Oracle to apply the suggested 
    redo log and any others that it requires to recover the datafiles.

    Applying suggested logfile...
    Log applied.
              :
              :
    <Application of further redo logs>
              :
              :
    Media recovery complete.

 5. Open the database

    SQLDBA> alter database open;
    Statement processed.



B. Old controlfile, no backup of datafile (Oracle release 7.3.x and above)
   =======================================================================

 A valid (either hot or cold) backup of the datafiles exists, except for the
 datafile created since the backup was taken. The controlfile is a backup from
 before the creation of the new datafile. The database is in archivelog mode 
 (see note (c) at bottom of page).

 NOTE : 'svrmgrl' has been replaced by SQL*Plus starting from Oracle8i,
        so the 'SVRMGR>' prompt is then replaced by 'SQL>'.

 1. Restore the datafiles (those that have been lost or damaged) from the 
    last hot or cold backup. Also restore the old copy of the controlfile.
    The current online redo logs must be intact.

 2. Mount the database

 3. Start media recovery, specifying backup controlfile

    SVRMGR> recover database using backup controlfile
    ORA-00279: Change 6677 generated at 06/03/97 15:20:24 needed for thread 1
    ORA-00289: Suggestion : /dev1/oracle/dbs/arch/arch000074.arc
    ORA-00280: Change 6677 for thread 1 is in sequence #74
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

    At this point, apply the archived logs as requested. Eventually Oracle
    will encounter redo to be applied to the non-existent datafile. The 
    recovery session will exit with the following message, and will return
    the user to the Server Manager prompt:

    ORA-00283: Recovery session canceled due to errors
    ORA-01244: unnamed datafile(s) added to controlfile by media recovery
    ORA-01110: data file 5: '/dev1/oracle/dbs/testtbs.dbf'
 
 4. Recreate the missing datafile. To do this, select the relevant filename 
    from v$datafile:

    SVRMGR> select name from v$datafile where file#=5;
    NAME
    -------------------------------------------------------
    UNNAMED0005

    Now recreate the file:

    SVRMGR> alter database create datafile
         2> 'UNNAMED0005'
         3> as
         4> '/dev1/oracle/dbs/testtbs.dbf';



 5. Restart recovery

    SVRMGR> recover database using backup controlfile
    ORA-00279: Change 6747 generated at 09/24/97 16:57:18 needed for thread 1
    ORA-00289: Suggestion : /dev1/oracle/dbs/arch/arch000079.arc
    ORA-00280: Change 6747 for thread 1 is in sequence #79
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

    Apply archived logs as requested. Prior to Oracle8, recovery must apply
    the complete log which was current at the time of the datafile creation
    (in the above example, this would be log sequence 79). A recovery to a
    point in time before the end of this log would result in errors:

    ORA-01196: file 1 is inconsistent due to a failed media recovery session
    ORA-01110: data file 1: '/dev1/oracle/dbs/systbs.dbf'

    If this happens, re-recover the database and ensure that the complete log
    is applied (plus any further redo if required). This limitation does
    not exist from Oracle 8.0+.

    Eventually, Oracle will request the archived log corresponding to the 
    current online log. It does this because the (backup) controlfile has no 
    knowledge of the current log sequence. If an attempt is made to apply the 
    suggested log, the recovery session will exit with the following message:

    ORA-00308: cannot open archived log '/dev1/oracle/dbs/arch/arch000084.arc'
    ORA-07360: sfifi: stat error, unable to obtain information about file.
    SVR4 Error: 2: No such file or directory

    At this stage, simply restart the recovery session and apply the current
    online log. The best way to do this is to try applying the online redo 
    logs one by one until Oracle completes media recovery:

    SVRMGR> recover database using backup controlfile
    ORA-00279: Change 6763 generated at 09/24/97 16:57:59 needed for thread 1
    ORA-00289: Suggestion : /dev1/oracle/dbs/arch/arch000084.arc
    ORA-00280: Change 6763 for thread 1 is in sequence #84
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /dev1/oracle/dbs/log2.dbf
    Log applied.
    Media recovery complete.

 6. Open the database

    SVRMGR> alter database open resetlogs;

    The resetlogs option must be chosen to resynchronize the controlfile. 

    
NOTES:
======

a) These techniques can be used whether the database was closed either 
   cleanly or uncleanly (aborted).

b) If the database is recovered using an incomplete recovery technique (either
   time-based, cancel-based, or change-based), and is recovered to a point in
   time before the datafile was originally created, any references to that
   datafile will be removed from the database when the database is opened.

   Oracle handles this situation as follows:

   - The 'alter database create datafile....' command creates a reference in 
     the controlfile for the datafile.
   - Incomplete recovery terminates before applying redo that would create a
     corresponding row for the datafile in the file$ dictionary table.
   - When the database is opened, Oracle detects an inconsistency between file$
     and the controlfile and resolves in favour of file$, deleting the entry
     from the controlfile. 

c) It may be possible to recover the datafile using this technique even if the
   database is not in archivelog mode. However, this relies on the required 
   redo being available in the online redo logs.
   

 

Oracle How to Recover from a Lost or Deleted Datafile with Different Scenarios


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

APPLIES TO:

Oracle Database - Personal Edition - Version 10.2.0.1 and later

Oracle Database - Enterprise Edition - Version 10.2.0.1 and later

Oracle Database - Standard Edition - Version 10.2.0.1 and later

Information in this document applies to any platform.

***Checked for relevance on 01-JUL-2015***

PURPOSE

This article explains the various scenarios for ORA-01157 and how to avoid them.

SCOPE & APPLICATION

This article is intended for Oracle Support Analysts, Oracle Consultants and 
Database Administrators.
 

TROUBLESHOOTING STEPS

How to Recover from a Lost Datafile in Different Scenarios

In the event of a lost datafile or when the file cannot be accessed an ORA-01157
is reported followed by ORA-01110. 

Besides this, you may encounter error ORA-07360 : sfifi: stat error, unable to 
obtain information about file. A DBWR trace file is also generated in the 
background_dump_dest directory. An attempt to shut down the database with 
NORMAL or IMMEDIATE will result in ORA-01116, ORA-01110 and possibly ORA-07368.

This article discusses various scenarios that may be causing this error and the  
solution/workaround for these.

Throughout this note we refer to "backups" but if you have a valid physical standby database 
you may also use the standby database's datafiles to recover the primary database.

Datafile not found by Oracle

- Unintentionally renamed or moved at the Operating System (OS) level.
  Simply restore the file to its original location and recover it

- Intentionally moved/renamed at OS level.
  You are re-organising the datafile layout across various disks at the OS. 
  After moving/renaming the file you will have to rename the file at database 
  level, and recover it.

Note:115424.1 How to Rename or Move Datafiles and Logfiles

Datafile damaged/deleted

If the file is damaged/deleted and an attempt is made to start the database 
will result in ORA-01157, ORA-01110. Then depending upon the type of datafile 
lost different action needs to be taken. Check for a faulty hard disk. The 
file may have gone corrupt due to faulty disk. Replace the bad disk or create 
the file on a non-faulty disk.

Lost datafile could be in one of the following:

1. Temporary tablespace
   
   If the datafile belongs to a temporary tablespace, you will have to simply offline
   drop the datafile and then drop the tablespace with including contents option.
   Thereafter, re-create the temporary tablespace.

   Note.184327.1 Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery

2. Read Only Tablespace
   
   In this case you will have to restore the most recent backup of the read-only 
   datafile. No media recovery is required as read-only tablespaces are not 
   modified. Note however that media recovery will be required under the following conditions: 

   a. The tablespace was in read-write mode when the last backup was taken
      and was made read-only afterwards.

   b. The tablespace was in read-only mode when last backup was taken and
      was made read-write in between and then again made read only

   In either of the above cases you will have to restore the file and do a media 
   recovery using RECOVER DATAFILE statement. Apply all the necessary archived redo 
   logs until you get the message "Media Recovery Complete".

   Note.184327.1 Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery

3. User Tablespace
   
   Two options are available:

   a. Recreate the user tablespace.
      If all the objects in the tablespace can be re-created (recent export is 
      available; tables can be re-populated using scripts; SQL*Loader etc)
      Then, offline drop the datafile, drop the tablespace with including 
      contents option. Thereafter, re-create the tablespace and re-create 
      the objects in it.

   b. Restore file from backup and do a media recovery.
      Database has to be in archivelog mode. If the database is in NOARCHIVELOG 
      mode, you will only succeed in recovering the datafile if the redo to be 
      applied to it is within the range of your online redo logs.

   Note.184327.1 Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery

4. Index Tablespace
   
   Two options are available:

   a. Recreate the Index tablespace
      If the index can be easily re-created using script or manual CREATE INDEX
      statement, then the best option is to offline drop the datafile, drop the 
      index tablespace, and re-create it and recreate all indexes in it.

   b. Restore file from backup and do a media recovery.
      If the index tablespace cannot be easily re-created, then restore the 
      lost datafile from a valid backup and then do a media recovery on it.

   Note.184327.1 Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery

5. System (and/or Sysaux) Tablespace
   
   a. Restore from a valid backup and perform a media recovery on it

   b. Rebuild the database.
      If neither backup of the datafile nor the full database backup is 
      available, then rebuild database using full export, user level/table 
      level export, scripts, SQL*Loader, standby etc. to re-create and 
      re-populate the database.

   Note.184327.1 Common Causes and Solutions on ORA-1157 Error Found in Backup & Recovery

6. Undo Tablespace
   
   While handling situation with lost datafile of an undo tablespace you need to
   be extra cautious so as not to lose active transactions in the undo segments. 

   The preferred option in this case is to restore the datafile from backup and 
   perform media recovery.

      i.  If the database was cleanly shut down (a sketch of these steps appears at the end of this section):
          Ensure that the database was cleanly shut down in NORMAL or IMMEDIATE mode.
          Update your init file with "undo_management=manual"
          Restart the database
          Drop and recreate the undo tablespace
          Update your init file with "undo_management=auto"
          Restart the database

      ii. If the database was NOT cleanly shut down.
          If the database was shut down with ABORT or crashed, you may not be able to drop 
          the datafile, as the undo segments may contain active transactions. 
          You will need to restore the file from a backup 
          and perform a media recovery. 

7. Lost Controlfiles and Online Redo Logs
   
   If the datafiles are in a consistent state, not needing media recovery, but you have lost 
   all the controlfiles and online redo logs, then an attempt to create the 
   controlfile using a script will complain about the missing redo logs. 
   In this case use the RESETLOGS option of the CREATE CONTROLFILE 
   script and then open the database with the RESETLOGS option.

8. Lost datafile and no backup
   
   If there are no backups of the lost datafile then you can re-create the 
   datafile with the same size as the original file and then apply all the 
   archived redologs written since original datafile was created to the new
   version of the lost datafile.
 

Note: Restore and recovery from backup should be the first and preferred 
         option for cases 2 - 6.

 
   Note:1060605.6 Lost datafile and no backup.
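A minimal sketch of the clean-shutdown path from scenario 6 above, assuming an undo tablespace named UNDOTBS1 and a hypothetical datafile path:

   SQL> shutdown immediate

   Edit the init file (or spfile) and set undo_management=manual, then:

   SQL> startup
   SQL> drop tablespace undotbs1 including contents and datafiles;
   SQL> create undo tablespace undotbs1
          datafile '/u01/oradata/ORCL/undotbs01.dbf' size 500m;

   Set undo_management=auto again and restart:

   SQL> shutdown immediate
   SQL> startup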

 

Oracle Recover database after disk loss


If you cannot recover data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 


PURPOSE
-------

This article aims at walking you through some of the common
recovery techniques after a disk failure
 
SCOPE & APPLICATION
-------------------

All Oracle support Analysts, DBAs and Consultants who have a role
to play in recovering an Oracle database

Loss due to Disk Failure
------------------------
What can we lose due to disk failure:
A) Control files
B) Redo log files
C) Archivelog files
D) Datafiles
E) Parameter file or SPFILE
F) Oracle software installation

Detecting disk failure
-----------------------
1) Run copy utilities like "dd" on unix
2) If using RAID mechanisms like RAID 5, parity information may mask 
    the disk failure and a more rigorous check may be needed
3) As always, check the Operating system log files
4) Another obvious case would be when the disk could not be seen
    or mounted by the OS.
5) On the Oracle side, run dbv if the file affected is a datafile
6) The best way to detect disk failure is by running Hardware 
diagnostic tools and OS specific disk utilities.

Next Action
------------
Once the type of failure is identified, the next step is to rectify it.

Options could be:
(1) Replace the corrupted disk with a new one and mount them with 
     the same name (say /oracle or D:\)
(2) Replace the corrupted disk with a new one and mount them with 
     a different name (say /oracle1 as the new mount point)
(3) Decide to use another existing disk mounted with a different name
     (say /oracle2)

The most common methods are (1) AND (3).

Oracle Recovery
---------------
Once the disk problem is sorted, the next step is to perform recovery
at the Oracle level. This would depend on the type of files that is lost (see
"Loss due to Disk Failure" section) and also on the type of disk recovery done
as mentioned in the "Next Action" section above.

(A) Control Files
------------------
Normally, we have multiplexing of controlfiles and they are expected to be
placed in different disks.

If one or more controlfiles are lost, the mount will fail as shown below:
SQL> startup
Oracle Instance started
....
ORA-00205: error in identifying controlfile, check alert log for more info

You can verify the controlfile copies using:
SQL> select * from v$controlfile;

   **If at least one copy of the controlfile is not affected by the disk failure, 
   and the database was shut down cleanly:
   (a) Copy a good copy of the controlfile to the missing location
   (b) Start the database 

   Alternatively, remove the lost control file location specified in the
   init parameter control_files and start the database.

   **If all copies of the controlfile are lost due to the disk failure, then:
   Check for a backup controlfile. Backup controlfile is normally taken using 
   either of the following commands:
   (a) SQL> alter database backup controlfile to '/backup/control.ctl';
    -- This would have created a binary backup of the current controlfile --

    -->If the backup was done in binary format as mentioned above, restore the 
       file to the lost controlfile locations using OS copying utilities.
    --> SQL> startup mount;
    --> SQL> recover database using backup controlfile;
    --> SQL> alter database open;

   (b) SQL> alter database backup controlfile to trace;
    -- This would have created a readable trace file containing create controlfile
    script --

    --> Edit the trace file created (check user_dump_dest for the location) and
        retain the SQL commands alone. Save this to a file say cr_ctrl.sql
    --> Run the script
    
    SQL> @cr_ctrl

    This would create the controlfile, recover database and open the database.

    ** If no copy of the controlfile or backup is available, then write a controlfile
    creation script by hand using the datafile and redo log file information (see the
    sketch below). Ensure that the file names are listed in the correct order as in FILE$.
    Then the steps would be similar to those followed with the cr_ctrl.sql script.


Note that all controlfile related SQL maintenance operations are done in the 
database nomount state
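
For example, a minimal sketch of a hand-written controlfile creation script, assuming a database named ORCL and hypothetical file names; the real script must list every datafile in the correct order as noted above:

SQL> startup nomount
SQL> create controlfile reuse database "ORCL" resetlogs archivelog
       maxlogfiles 16
       maxdatafiles 100
       logfile
         group 1 '/u01/oradata/ORCL/redo01.log' size 50m,
         group 2 '/u01/oradata/ORCL/redo02.log' size 50m
       datafile
         '/u01/oradata/ORCL/system01.dbf',
         '/u01/oradata/ORCL/sysaux01.dbf',
         '/u01/oradata/ORCL/users01.dbf'
       character set WE8ISO8859P1;
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;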


(B) Redo logs
    ---------
In normal cases, we would not have backups of online redo log files. But the 
inactive logfile changes could already have been checkpointed on the datafiles
and even archive log files may be available.

SQL> startup mount
     Oracle Instance Started
     Database mounted
     ORA-00313: open failed for members of log group 1 of thread 1
     ORA-00312: online log 1 thread 1: '/ORACLE/ORADATA/H817/REDO01.LOG'
     ORA-27041: unable to open file
     OSD-04002: unable to open file
     O/S-Error: (OS 2) The system cannot find the file specified.

** Verify if the lost redolog file is Current or not.
     SQL> select * from v$log;
     SQL> select * from v$logfile; 

     --> If the lost redo log is an Inactive logfile, you can clear the logfile:

     SQL> alter database clear logfile '/ORACLE/ORADATA/H817/REDO01.LOG';

     Alternatively, you can drop the logfile if you have at least two other   
     logfiles:
     SQL> alter database drop logfile group 1;

     
     --> If the logfile is the Current logfile, then do the following:
     SQL> recover database until cancel;
         
     Type Cancel when prompted

     SQL>alter database open resetlogs;

     
     The 'recover database until cancel' command can fail with the following 
     errors:
     ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error 
     below
     ORA-01194: file 1 needs more recovery to be consistent
     ORA-01110: data file 1: '/ORACLE/ORADATA/H817/SYSTEM01.DBF'

     In this case , restore an old backup of the database files and apply the
     archive logs to perform incomplete recovery.
     --> restore old backup
     SQL> startup mount
     SQL> recover database until cancel using backup controlfile;
     SQL> alter database open resetlogs;


If the database is in noarchivelog mode and ORA-1547, ORA-1194 and ORA-1110 errors occur, then you would have to restore from an old backup and start the database.


Note that all redo log maintenance operations are done in the database mount state


(C) Archive logs
-----------------
If only the previous archive log files have been lost, then there is not much
to panic about.
** Back up the current database files using a hot or cold backup, which ensures
that you will not need the missing archive logs

(D) Datafiles
--------------
This obviously is the biggest loss.

(1) If only a few sectors are damaged, then you would get ora-1578 when 
accessing those blocks.
 --> Identify the object name and type whose block is corrupted by querying dba_extents
 --> Based on the object type, perform appropriate recovery
 --> Check metalink Note:28814.1 for resolving this error

(2) If the entire disk is lost, then one or more datafiles may need to be 
recovered . 
  SQL> startup
  ORACLE instance started.
  ...
  Database mounted.
  ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
  ORA-01110: data file 3: '/ORACLE/ORADATA/H817/USERS01.DBF'

Other possible errors are ORA-00376 and ORA-1113

The views and queries to identify the datafiles would be:
   SQL> select file#,name,status from v$datafile;
   SQL> select file#,online,error from v$recover_file;


** If restoring to a replaced disk mounted with the same name, then :
  (1) Restore the affected datafile(s) using OS copy/restore commands from the 
      previous backup
  (2) Perform recovery based on the type of datafile affected namely SYSTEM, 
      ROLLBACK or UNDO, TEMP , DATA or INDEX.
  (3) The recover commands could be 'recover database', 'recover tablespace'
      or 'recover datafile' based on the loss and the database state

** If restoring to a different mount point, then :
  (1) Restore the files to the new location from a previous backup
  (2) SQL> STARTUP MOUNT
  (3) SQL> alter database rename file '/old path_name' to 'new path_name';     
      -- Do this renaming for all datafiles affected. --
  (4) Perform recovery based on the type of datafile affected namely SYSTEM, 
      ROLLBACK or UNDO, TEMP , DATA or INDEX.
  (5) The recover commands could be 'recover database', 'recover tablespace'
      or 'recover datafile' based on the loss and the database state (see the sketch below)

The detailed steps of recovery based on the datafile lost and the Oracle error 
are outlined in the articles referenced at the end of this note.
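
For example, minimal sketches of the recover commands listed in step (5), assuming the lost file is '/ORACLE/ORADATA/H817/USERS01.DBF' in tablespace USERS (the file name from the sample error above):

With the database mounted:

   SQL> recover datafile '/ORACLE/ORADATA/H817/USERS01.DBF';
   SQL> alter database open;

Or, with the database open and only the affected tablespace offline:

   SQL> alter tablespace users offline immediate;
   SQL> recover tablespace users;
   SQL> alter tablespace users online;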


  NOARCHIVELOG DATABASE
  =====================
  The loss mentioned in (A),(B) and (D) would be different in this case
  wherever archive logs are involved. 

  We will discuss the datafile loss scenarios here:

  (a) If the datafile lost is a SYSTEM datafile, restore the complete
      database from the previous backup and start the database.
  (b) If the lost datafile is a rollback-related datafile with active transactions,
      restore from the previous backup and start the database.
  (c) If the datafile contains rollback with no active rollback segments, you can
      offline the datafile (after commenting out the rollback_segments parameter, 
      assuming that they are private rollback segments) and open the database. 
  (d) If the datafile is temporary, offline the datafile and open the database. 
      Drop the tablespace and recreate the tablespace.
  (e) If the datafile is DATA or INDEX, 
      **Offline the tablespace and start the database.
      **If you have a previous backup, restore it to a separate location.
      **Then export the objects in the affected tablespace ( using User or 
        table level export).
      **Create the tablespace in the original database.
      **Import the objects exported above.

      If the database is 8i or above, you can also use Transportable tablespace
      feature.


(E) Parameter file
    ---------------
This is not a major loss and can be easily restored. Options are:
  (1) If there is a backup, restore the file
  (2) If there is no backup, copy sample file or create a new file and add the 
      required parameters. Ensure that the parameters db_name, control_files,
       db_block_size, compatible are set correctly
  (3) If the spfile is lost, you can create it from the init parameter file if it is available
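
For example, a minimal sketch for option (3), using the default parameter file locations:

   SQL> create spfile from pfile;

and, to keep a text copy of a working spfile for the future:

   SQL> create pfile from spfile;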


(F) Oracle Software Installation
    ----------------------------
There are two ways to recover from this scenario:
  (1) If there is a backup of the Oracle home and Oracle Inventory, restore
      them to the respective directories. Note if you change the Oracle Home, 
      the inventory would not be aware of this new path and you would not be
      able to apply patchsets. Also restore to the same OS user and group.

  (2) Perform a fresh Install, bringing it to the same patchset level


PRACTICAL SCENARIO
==================

In most cases, when a disk is lost, more than one type of file could be lost.
The recovery in this scenario would be:
  (1) A combination of each of these data loss recovery scenarios
  (2) Perform an entire database restore from the most recent backup and apply
      archive logs to perform recovery. This is a highly preferred method 
      but can be time consuming.

 

PRM-DUL Whitepaper ParnassusData Recovery Manager For Oracle Database User Guide V0.4


Overview

 

ParnassusData Recovery Manager (PRM) is an enterprise-level Oracle database recovery tool that can extract and restore data directly from Oracle 9i, 10g, 11g and 12c datafiles without executing any SQL on the Oracle database instance. ParnassusData Recovery Manager is Java-based, portable software that requires no installation: download it and click to run.

 

PRM provides a convenient GUI for every operation (as shown in Picture 1). There is no need to learn additional scripts or to master Oracle internal data structures; everything is integrated in the tool's Recovery Wizard.

 

 

Download PRM-DUL:

http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_3206.zip

 

PRM-DUL-DUL1

 

Why PRM is necessary?

Isn’t RMAN enough for Oracle database recovery? Why do users need PRM for Oracle recovery? You may ask.

In growing enterprise IT systems, database sizes are expanding geometrically. Oracle DBAs face the problems that disk space is insufficient for full backups and that tape storage takes much more time than expected.

 

 

“For Database, backup 1st” is the first lesson for DBAs. However, the reality is often that disk space for backups is insufficient, the new storage device is still on the way, or the backup simply does not work when data recovery is actually attempted.

 

 

To solve the above problems, PD Recovery Manager, based on its understanding of the internal Oracle data structures and of the core startup process, can not only handle cases such as a lost SYSTEM tablespace without any backup, accidental changes to data dictionary tables, or a database that cannot be opened because the data dictionary became inconsistent after a power outage, but can also restore data from truncated or deleted business tables.

 

 

Whether you are a professional DBA or new to the Oracle world, you can master this user-friendly tool immediately. PRM is easy to install and use; you do not need deep Oracle knowledge or scripting skills. All you need to do is click through the wizard to complete the whole recovery process.

 

 

Compared with the traditional recovery tool Oracle DUL, which is an Oracle-internal tool intended only for Oracle employees, PRM can be used by any IT professional. It greatly shortens the time from database failure to complete data recovery and cuts down the total cost for the enterprise.

 

 

There are 2 ways to recover data with PRM:

In the traditional way, data is extracted to text files and then inserted into a new database with SQL*Loader, which takes twice the time and occupies twice the storage.

 

The other way, which we strongly recommend, is to use the unique data bridge feature of ParnassusData Recovery Manager. It extracts data from the original source database and inserts it directly into a new destination database without any intermediate files. This is a true time and storage saver.

 

 

Oracle ASM is becoming popular in enterprise database deployments because of its high performance, cluster support, and convenient management. However, for many IT professionals, ASM is a black box. Once the data structure of a Disk Group in ASM is corrupted so that the Disk Group cannot be mounted, all data is effectively locked inside ASM. In this situation, without PRM, only senior Oracle experts can manually patch the ASM internal structures, which is too expensive and time-consuming for most Oracle users.

 

 

 

PRM now can support two kinds of ASM data recovery:

 

 

  1. Once a Disk Group cannot be mounted, PRM can read the ASM metadata and clone ASM files out of the Disk Group.
  2. Once a Disk Group cannot be mounted, PRM can read ASM files and extract their data, supporting both traditional data export and the data bridge.

 

PRM-DUL  Software Introduction

ParnassusData Recovery Manager (PRM) is written in Java, which ensures that PRM can run across platforms: AIX, Solaris, HP-UX and other Unix platforms, Red Hat, Oracle Linux, SUSE and other Linux platforms, as well as Windows, can all run PRM directly.

 

 

OS & Platform that PRM Supports:

 

 

 

Platform Name Supported
AIX POWER Yes
Solaris Sparc Yes
Solaris X86 Yes
Linux X86 Yes
Linux X86-64 Yes
HPUX Yes
MacOS Yes

 

 

Database Version that PRM Supports:

 

 

ORACLE DATABASE VERSION Supported
Oracle 7 No
Oracle 8 No
Oracle 8i No
Oracle 9i Yes
Oracle 10g Yes
Oracle 11g Yes
Oracle 12c Yes

 

 

 

 

Considering that some old servers run early OS versions such as AIX 4.3, on which the latest JDK cannot be installed, any platform that can run JDK 1.4 can run PRM.

 

 

In addition, the Oracle 10g database ships with JDK 1.4 and 11g with JDK 1.5, so users can run PRM directly without any JDK update or installation.

 

 

For users who need JDK 1.4, please download it from the link below:

http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-a rchive-downloads-javase14-419411.html

 

 

For fewer bugs and better performance, ParnassusData strongly recommends using OpenJDK on Linux.

 

 

Open JDK for Linux download Link:

 

 

Open jdk x86_64 for Linux 5: http://pan.baidu.com/s/1qWO740O
Tzdata-java x86_64 for Linux 5: http://pan.baidu.com/s/1gdeiF6r
Open jdk x86_64 for Linux 6: http://pan.baidu.com/s/1mg0thXm
Open jdk x86_64 for Linux 6: http://pan.baidu.com/s/1sjQ7vjf
Open jdk x86 for Linux 5: http://pan.baidu.com/s/1kT1Hey7
Tzdata-java x86 for Linux 5: http://pan.baidu.com/s/1kT9iBAn
Open jdk x86 for Linux 6: http://pan.baidu.com/s/1sjQ7vjf
Tzdata-java x86 for Linux 6: http://pan.baidu.com/s/1kTE8u8n

 

 

 

JDK on Other platforms download link:

 

 

AIX JAVA SDK 7               http://pan.baidu.com/s/1i3JvAlv
JDK Windows x86              http://pan.baidu.com/s/1qW38LhM
JDK Windows x86-64           http://pan.baidu.com/s/1qWDcoOk
Solaris JDK 7 x86-64bit      http://pan.baidu.com/s/1gdzgSvh
Solaris JDK 7 x86-32bit      http://pan.baidu.com/s/1mgjxFlQ
Solaris JDK 7 Sparc          http://pan.baidu.com/s/1pJjX3Ft

 

 

The minimum Java environment for PRM is JDK 1.4. ParnassusData strongly recommends running it on JDK 1.6, which offers much better Java runtime performance than JDK 1.4; as a result, PRM recovery is noticeably faster under JDK 1.6 than under JDK 1.4.

 

 

 

PRM hardware requirement:

 

 

CPU      At least 800 MHz
Memory   At least 512 MB
Disk     At least 50 MB

 

 

PRM recommended hardware configuration:

 

 

CPU      2.0 GHz
Memory   2 GB
Disk     2 GB

 

 

 

Languages that PRM Supports:

 

 

 

 

Language                          Character Set      Encoding
Simplified/Traditional Chinese    ZHS16GBK           GBK
Simplified/Traditional Chinese    ZHS16DBCS          CP935
Simplified/Traditional Chinese    ZHT16BIG5          BIG5
Simplified/Traditional Chinese    ZHT16DBCS          CP937
Simplified/Traditional Chinese    ZHT16HKSCS         CP950
Simplified/Traditional Chinese    ZHS16CGB231280     GB2312
Simplified/Traditional Chinese    ZHS32GB18030       GB18030
Japanese                          JA16SJIS           SJIS
Japanese                          JA16EUC            EUC_JP
Japanese                          JA16DBCS           CP939
Korean                            KO16MSWIN949       MS649
Korean                            KO16KSC5601        EUC_KR
Korean                            KO16DBCS           CP933
French                            WE8MSWIN1252       CP1252
French                            WE8ISO8859P15      ISO8859_15
French                            WE8PC850           CP850
French                            WE8EBCDIC1148      CP1148
French                            WE8ISO8859P1       ISO8859_1
French                            WE8PC863           CP863
French                            WE8EBCDIC1047      CP1047
French                            WE8EBCDIC1147      CP1147
Deutsch                           WE8MSWIN1252       CP1252
Deutsch                           WE8ISO8859P15      ISO8859_15
Deutsch                           WE8PC850           CP850
Deutsch                           WE8EBCDIC1141      CP1141
Deutsch                           WE8ISO8859P1       ISO8859_1
Deutsch                           WE8EBCDIC1148      CP1148
Italian                           WE8MSWIN1252       CP1252
Italian                           WE8ISO8859P15      ISO8859_15
Italian                           WE8PC850           CP850
Italian                           WE8EBCDIC1144      CP1144
Thai                              TH8TISASCII        CP874
Thai                              TH8TISEBCDIC       TIS620
Arabic                            AR8MSWIN1256       CP1256
Arabic                            AR8ISO8859P6       ISO8859_6
Arabic                            AR8ADOS720         CP864
Spanish                           WE8MSWIN1252       CP1252
Spanish                           WE8ISO8859P1       ISO8859_1
Spanish                           WE8PC850           CP850
Spanish                           WE8EBCDIC1047      CP1047
Portuguese                        WE8MSWIN1252       CP1252
Portuguese                        WE8ISO8859P1       ISO8859_1
Portuguese                        WE8PC850           CP850
Portuguese                        WE8EBCDIC1047      CP1047
Portuguese                        WE8ISO8859P15      ISO8859_15
Portuguese                        WE8PC860           CP860

 

 

 

Features that PRM supports:

 

 

Features                                                                        Supported
Cluster Table                                                                   YES
Inline or out-of-line LOBs, different chunk versions and sizes, LOB partitions  YES
Heap table, partitioned or non-partitioned                                      YES
Partition and Non-partition                                                     YES
Table with chained rows, migrated rows, intra-block chaining                    YES
Bigfile Tablespace                                                              YES
ASM (Automatic Storage Management) 10g, 11g, 12c, dismounted diskgroups         YES
ASM 11g Variable Extent Size                                                    YES
IOT, partitioned or non-partitioned                                             YES (Future)
Basic Compressed Heap Table                                                     YES (Future)
Advanced Compressed Heap Table                                                  NO
Exadata HCC Heap Table                                                          NO
Encrypted Heap Table                                                            NO
Table with Virtual Column                                                       NO
 

 

 

Attention: for virtual columns and 11g optimized (add column with default) columns, data export works, but the corresponding column may be missing from the output. Both are 11g-and-later features with relatively few users.

 

 

 

 

Data type that PRM supports:

 

 

Data Type Supported
BFILE No
Binary XML No
BINARY_DOUBLE Yes
BINARY_FLOAT Yes
BLOB Yes
CHAR Yes
CLOB and NCLOB Yes
Collections (including VARRAYS and nested tables) No
Date Yes
INTERVAL DAY TO SECOND Yes
INTERVAL YEAR TO MONTH Yes
LOBs stored as SecureFiles Future
LONG Yes
LONG RAW Yes
Multimedia data types (including Spatial, Image, and Oracle Text) No
NCHAR Yes
Number Yes
NVARCHAR2 Yes
RAW Yes
ROWID, UROWID Yes
TIMESTAMP Yes
TIMESTAMP WITH LOCAL TIMEZONE Yes
TIMESTAMP WITH TIMEZONE Yes
User-defined types No
VARCHAR2 and VARCHAR Yes
XMLType stored as CLOB No
XMLType stored as Object Relational No

 

 

 

Support for ASM by PRM:

 

 

 

 

Function Supported
Directly extract Table data from ASM YES
Directly copy datafile from ASM YES
Repair ASM metadata YES
Draw ASM Structure by  GUI Future

 

PRM installation and start-up

PRM does not require installation, since it is portable, Java-based software. Users simply extract the ZIP package and run it.

 

unzip prm_latest.zip

ParnassusData recommends running PRM from the command line, where you can see more diagnostic information.
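For example, assuming the ZIP package was extracted to a directory named prm_latest (the directory name is illustrative), PRM can be started from the command line as below; the java -jar form is the same one mentioned in the FAQ at the end of this document:

$ cd prm_latest
$ $JAVA_HOME/bin/java -jar prm.jar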

 

 

Starting method under Windows:

 

 

  1. Make sure the JDK is installed correctly and Java has been added to the environment variables.
  2. Double-click prm.bat in the PRM folder.

PRM-DUL-DUL2

prm.bat will start PRM in the background.
PRM-DUL-DUL3

Then, it pops up PRM-DUL main interface:

PRM-DUL-DUL4

Starting method under Linux/Unix:

 

 

Under Linux/Unix, use X Server for GUI

 

 

  1. Make sure you have installed the JDK correctly and added Java to the environment variables.
  2. cd to the PRM directory and run ./prm.sh to start the program's main interface (see the sketch below).
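A minimal sketch of starting PRM on a remote Linux server over SSH with X forwarding (the host name and the path /home/oracle/prm are illustrative):

$ ssh -X oracle@dbserver           # X11 forwarding so the PRM GUI displays locally
$ cd /home/oracle/prm              # directory where the PRM package was extracted
$ ./prm.sh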

 

PRM-DUL-DUL5

PRM-DUL-DUL6

 

PRM License Registration

ParnassusData Recovery Manager (PRM) needs a license for full use. ParnassusData provides a community edition of PRM for testing and demos. (The community edition has no limits on ASM clone, and more free features will be added to it.)

 

 

Two kinds of license are currently offered to clients: Standard Edition and Enterprise Edition; the specifications are as follows.

 

prm-price

 

Clients can purchase a PRM license from the official website www.parnassusdata.com; the database name is required for the purchase. After purchase, you will receive an email containing the DBNAME and License Key.

 

 

Once you obtain the License Key, register it in the software as follows:

  1. In the menu, choose Help => Register
  2. Input the DB NAME and your License Key, then click the Register button

 

After registration, you don’t need to input license key again on your next boot.

 

PRM-DUL-DUL8

PRM-DUL-DUL9

Your registration information can be found in Help=>about

PRM-DUL-DUL10

 

PRM-DUL-DUL11

 

Case Study on Oracle database recovery via PRM

CASE 1: General recovery of truncated table by mistake

 

A user at Company D truncated all data in a table by mistake, having confused the test environment with the production database. The DBA tried to recover the table from an RMAN backup, but the backup turned out to be unavailable, so the DBA decided to use PRM to rescue the truncated data.

Since all datafiles in this environment are available and healthy, the DBA only needs to load the SYSTEM tablespace datafile and the datafiles holding the truncated table in Dictionary Mode. For example:

 

 

create table parnassusdata.torderdetail_his1 tablespace users
  as select * from parnassusdata.torderdetail_his;




SQL> desc	ParnassusData.TORDERDETAIL_HIS
Name	Null?	Type
----------------------- -------- --------------
SEQ_ID	NOT NULL	NUMBER(10)
SI_STATUS	NUMBER(38)
D_CREATEDATE	CHAR(20)
D_UPDATEDATE	CHAR(20)
B_ISDELETE	CHAR(1)
N_SHOPID	NUMBER(10)
N_ORDERID	NUMBER(10)
C_ORDERCODE	CHAR(20)
N_MEMBERID	NUMBER(10)
N_SKUID	NUMBER(10)
C_PROMOTION	NVARCHAR2(5)
N_AMOUNT	NUMBER(7,2)
N_UNITPRICE	NUMBER(7,2)
N_UNITSELLINGPRICE	NUMBER(7,2)
N_QTY	NUMBER(7,2)
N_QTYFREE	NUMBER(7,2)
 

N_POINTSGET	NUMBER(7,2)
N_OPERATOR	NUMBER(10)
C_TIMESTAMP	VARCHAR2(20)
H_SEQID	NUMBER(10)
N_RETQTY	NUMBER(7,2)
N_QTYPOS	NUMBER(7,2)



select count(*) from ParnassusData.TORDERDETAIL_HIS;


COUNT(*)
----------
984359


select bytes/1024/1024 from dba_segments where segment_name='TORDERDETAIL_HIS' and owner='PARNASSUSDATA';

BYTES/1024/1024
---------------
189.71875





SQL>  truncate  table ParnassusData.TORDERDETAIL_HIS;


Table truncated.


SQL> select count(*) from ParnassusData.TORDERDETAIL_HIS;

COUNT(*)
----------
0


Run PRM, and select Tools =>Recovery Wizard

PRM-DUL-DUL12

 

Click Next

 

PRM-DUL-DUL13

Since client did not use ASM storage in the scenario, just select ‘Dictionary Mode’:

PRM-DUL-DUL14

 

Next, we need to select a few parameters: the endian byte order and the DBNAME.

 

 

Oracle datafiles adopt different Endian byte orders on different OS, please choose accordingly:

 

Solaris[tm] OE (32-bit) Big
Solaris[tm] OE (64-bit) Big
Microsoft Windows IA (32-bit) Little
Linux IA (32-bit) Little
AIX-Based  Systems (64-bit) Big
HP-UX (64-bit) Big
HP Tru64 UNIX Little
HP-UX IA (64-bit) Big
Linux IA (64-bit) Little
HP Open VMS Little
Microsoft Windows IA (64-bit) Little
IBM zSeries Based Linux Big
Linux x86 64-bit Little
Apple Mac OS Big
Microsoft Windows x86  64-bit Little
Solaris Operating System (x86) Little
IBM Power Based Linux Big
HP IA Open VMS Little
Solaris Operating System (x86-64) Little
Apple Mac OS (x86-64) Little

Traditional UNIX platforms such as AIX (64-bit) and HP-UX (64-bit) use the Big Endian byte order.

 

PRM-DUL-DUL15

 

Usually, Linux X86/64, Windows remain the default Little Endian:

PRM-DUL-DUL16

Attention: if your datafile was generated on AIX and you copy it to Windows to recover data with PRM, you must still select the original Big Endian mode.
Since the datafiles in this case are on Linux x86, we select Little for the Endian setting and input the database name.

(The license key is generated based on the DB_NAME found in the datafile header.)

PRM-DUL-DUL17

Click “Next” =>Click “Choose Files”

 

 

If the database is not too big, you can select all datafiles together; if the database is very big and the DBA knows where the data resides, you can select only the SYSTEM tablespace datafile (required) and the specific datafiles that hold the data.

 

Attention: the file chooser supports the Ctrl+A and Shift shortcut keys:

 

PRM-DUL-DUL18

PRM-DUL-DUL19

 

Then specify the Block Size (i.e. the Oracle data block size) according to the actual situation. For example, if the default DB_BLOCK_SIZE is 8K but some tablespaces use a 16K block size, users only need to adjust the block size for the datafiles whose block size is not 8K.
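If the source database can still be queried, the block size of each tablespace can be confirmed with a standard dictionary query such as the following, and the values entered into PRM accordingly:

SQL> select tablespace_name, block_size from dba_tablespaces order by tablespace_name;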

 

The OFFSET setting is mainly for raw-device storage; for example, on AIX, when an LV built on a normal VG is used as a datafile, the offset is 4K.

 

If you are using raw device but don’t know what the OFFSET is, you can use dbfsize tool under $ORACLE_HOME/bin to check, as shown in the picture below.

 

$dbfsize /dev/lv_control_01

 

Database file: /dev/lv_control_01

Database file type: raw device without 4K starting offset
Database file size: 334 16384 byte blocks

 

Since the block size of all data file here is 8K and there is no OFFSET, please click Load:

 

PRM-DUL-DUL20

 

During the Load phase, PRM reads the Oracle data dictionary directly from the SYSTEM tablespace and rebuilds a new data dictionary in its embedded database, which enables PRM to process all kinds of data in the Oracle database.

PRM-DUL-DUL21

After loading, information such as the database character set and the national character set will be output in the background:

PRM-DUL-DUL22

 
Attention: PRM supports multiple languages and multiple Oracle character sets. However, the prerequisite is that the OS has the corresponding language packs installed. For example, if the Chinese language pack is not installed on Windows, then even though the Oracle database character set is ZHS16GBK, PRM will display Chinese as garbled text. Once the Chinese language pack is installed on the OS, PRM displays multi-byte character sets correctly.

Similarly, the fonts-chinese package needs to be installed on Linux.

[oracle@mlab2 log]$ rpm -qa|grep chinese

fonts-chinese-3.02-12.el5

After loading, the left side of the PRM GUI displays a tree grouped by database user.

Click Users to expand the list. For example, to recover a table under the PARNASSUSDATA schema, click PARNASSUSDATA and double-click the table name:

PRM-DUL-DUL23

The TORDERDETAIL_HIS table was truncated earlier, so it shows no data.

Now right-click and select Unload truncated data on the table:

PRM-DUL-DUL24

 

PRM will scan the tablespace and extract data from truncated table.

 

PRM-DUL-DUL25

 

PRM-DUL-DUL26

As shown in the picture above, 984359 records have been exported from the truncated TORDERDETAIL_HIS table and stored under the specified path.

In addition, it generated SQLLDR control file for text data importing.

 

$ cd /home/oracle/PRM-DUL/PRM-DULdata/parnassus_dbinfo_PARNASSUSDATA/
$ ls -l ParnassusData*
-rw-r--r-- 1 oracle oinstall       495 Jan 18 08:31 ParnassusData.torderdetail_his.ctl
-rw-r--r-- 1 oracle oinstall 191164826 Jan 18 08:32 ParnassusData.torderdetail_his.dat.truncated

 

$ cat ParnassusData.torderdetail_his.ctl

LOAD DATA
INFILE 'ParnassusData.torderdetail_his.dat.truncated'
APPEND
INTO TABLE ParnassusData.torderdetail_his
FIELDS TERMINATED BY ' '
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS (
"SEQ_ID",
"SI_STATUS",
"D_CREATEDATE",
"D_UPDATEDATE",
"B_ISDELETE",
"N_SHOPID",
"N_ORDERID",
"C_ORDERCODE",
"N_MEMBERID",
"N_SKUID",
"C_PROMOTION",
"N_AMOUNT",
"N_UNITPRICE",
"N_UNITSELLINGPRICE",
"N_QTY",
"N_QTYFREE",
"N_POINTSGET",
"N_OPERATOR",
"C_TIMESTAMP",
"H_SEQID",
"N_RETQTY",
"N_QTYPOS"
)

 

 

When importing data back into the original database, ParnassusData strongly recommends changing the table name in the SQL*Loader control file to a temporary table, so that the original environment is not overwritten.
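A minimal sketch of that approach, using a hypothetical staging table torderdetail_his_tmp (any empty copy of the original table works):

SQL> create table parnassusdata.torderdetail_his_tmp as
     select * from parnassusdata.torderdetail_his where 1=0;

Then edit the generated .ctl file so that INTO TABLE ParnassusData.torderdetail_his points at parnassusdata.torderdetail_his_tmp before running sqlldr.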

 

$ sqlldr control=ParnassusData.torderdetail_his.ctl direct=y
Username: / as sysdba

-- use SQLLDR to import the data
-- MINUS can then be used to compare the data

 

select * from ParnassusData.torderdetail_his minus select * from parnassus.torderdetail_his;

 

no rows selected

 

 

Comparing the recovered table with a copy of the original data shows that the records are exactly the same, which demonstrates that PRM has completely recovered the records of the truncated table.

 

CASE 2: Recovery of a mistakenly truncated table by DataBridge

In Case 1, we used the traditional unload + sqlldr method for data recovery, but in fact ParnassusData strongly recommends using the DataBridge feature instead.

 

Why use DataBridge?

 

 

  • With the traditional unload+sqlldr method, a copy of the data must first be saved as flat (Unicode text) files on the file system and then inserted into the destination database by sqlldr, which takes double the storage space and double the time.
  • DataBridge extracts data from the source database and writes it to the destination database without any intermediate files.
  • The data sent to the destination database by DataBridge is structured, so users can immediately use SQL statements to verify its integrity and consistency.
  • If the source and destination databases are on different servers, the read/write I/O is spread across the two servers and the MTTR is reduced.
  • When DataBridge is used for truncated-table recovery, it is very convenient to send the truncated data straight back into the problem database.

 

DataBridge is very easy and convenient to use. Right click the table on the left side, and select DataBridge:

 

PRM-DUL-DUL27

The first time you use DataBridge, connection information is required, similar to a SQL Developer connection: DB host, port, Service_Name and user login information.

Attention: DataBridge will save data to the specified schema given in the DB connection.

 

PRM-DUL-DUL28

For example, in the G10R25 connection above, the user is maclean and the corresponding Oracle Easy Connect string is 192.168.1.191:1521/G10R25.

 

After entering the account and connection information, you can use the Test button to test the connection. If the message "Connect to DB server successfully" is returned, the connection works; click Save.

 

PRM-DUL-DUL29

After saving the connection, enter the DataBridge main interface and select the newly added connection G10R25 from the DB Connection drop-down list:

PRM-DUL-DUL30

If your DB connection is not in the drop down list, please click DB connection Button, which is highlighted in red.

PRM-DUL-DUL31

 

After selecting DB Connection, the Tablespace dropdown list will be selectable:

PRM-DUL-DUL32

 

Notes on recovering a truncated/dropped table by DataBridge: when recovering truncated/dropped data and inserting it back into the source database, users should choose a tablespace different from the original one. If the data is exported into the same tablespace, Oracle may reuse the space that still holds the truncated/dropped rows and overwrite them, and we could lose the last chance to recover the data.
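A minimal sketch of preparing such a separate tablespace before running DataBridge (the tablespace name and datafile path are illustrative; size it according to the amount of data to be recovered):

SQL> create tablespace prm_restore
     datafile '/u01/app/oracle/oradata/ORCL/prm_restore01.dbf' size 2g;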

For example, we truncated a table and now use DataBridge to recover the data back into the source database, but we do not want to reuse the original table name (torderdetail_his). In that case the user can select "if need to remap table" and fill in an appropriate target table name, as below:

 

PRM-DUL-DUL33

 

Attention: 1) If the destination database already has a table with the given name, PRM will not recreate it but will append all recovered data. 2) If the destination database does not have a table with that name, PRM will try to create the table in the specified tablespace and insert the recovered data.

 

In this case we need to recover truncated data, so select "if data truncated"; otherwise PRM performs a regular data extraction, which cannot retrieve the truncated rows.

 

 

 

 

The mechanism of TRUNCATE is that Oracle only updates the table's DATA_OBJECT_ID in the data dictionary and in the segment header; the real data is not overwritten. Because the DATA_OBJECT_ID recorded in the blocks no longer matches the dictionary, the Oracle server process does not read the truncated-but-not-yet-overwritten data when scanning the table.

 

PRM will scan roughly 10 MB of blocks following the table's segment header; if it finds blocks whose DATA_OBJECT_ID is smaller than the object's current DATA_OBJECT_ID, PRM assumes it has found the truncated data.
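This behavior can be observed in the dictionary with a standard query: after a TRUNCATE, DATA_OBJECT_ID changes while OBJECT_ID stays the same (the table name from Case 1 is used here for illustration):

SQL> select object_id, data_object_id
     from dba_objects
     where owner = 'PARNASSUSDATA' and object_name = 'TORDERDETAIL_HIS';

Running this query before and after the TRUNCATE shows OBJECT_ID unchanged and DATA_OBJECT_ID bumped to a new value.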

 

 

 

There is a blank input field called “if to specify data object id”, which enables the user to input Data Object ID to be recovered. Generally, you don’t need to input any value, unless the recovery does not work. We suggest users contact ParnassusData for help.

 

Click the DataBridge button, then it will start extracting if the configuration is done.

 

PRM-DUL-DUL34

DataBridge will display the successfully rescued rows and elapsed time.

PRM-DUL-DUL35

 

 

Case 3: DB cannot be opened caused by corrupted Oracle Data Dictionary

 

The DBA of Company D deleted rows from SYS.TS$ (a bootstrap table) by mistake, so the Oracle database can no longer be opened.

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

INSTANCE_NAME

 

—————-

ASMME

 

SQL>

SQL>

SQL> select count(*) from sys.ts$;

 

COUNT(*)

———-

5

 

SQL> delete ts$;

 

5 rows deleted.

 

SQL> commit;

 

Commit complete.

 

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Database mounted.

ORA-01092: ORACLE instance terminated. Disconnection forced

ORA-01405: fetched column value is NULL

Process ID: 5270

Session ID: 10 Serial number: 3

 

Undo initialization errored: err:1405 serial:0 start:3126020954 end:3126020954 diff:0 (0 seconds)

Errors in file /s01/diag/rdbms/asmme/ASMME/trace/ASMME_ora_5270.trc:

ORA-01405: fetched column value is NULL

Errors in file /s01/diag/rdbms/asmme/ASMME/trace/ASMME_ora_5270.trc:

ORA-01405: fetched column value is NULL

Error 1405 happened during db open, shutting down database

USER (ospid: 5270): terminating the instance due to error 1405

Instance terminated by USER, pid = 5270

ORA-1092 signalled during: ALTER DATABASE OPEN…

opiodr aborting process unknown ospid (5270) as a result of ORA-1092

 

 

 

In this case, data dictionary had been damaged, so it would be very hard to open the database normally.

 

Then, we can use PRM to rescue data in DB. Follow the steps as below:

 

 

 

  1. Recovery Wizard
  2. Select Dictionary Mode
  3. Choose Big or Little Endian, and input the DB NAME
  4. Click Load to load the database
  5. Restore the data in the tables according to actual need

 

PRM-DUL-DUL36

 

Case 4: Mistakenly deleted or lost SYSTEM tablespace

 

A system administrator at Company D deleted the SYSTEM tablespace by mistake, so the database could not be opened. Unfortunately, no RMAN backup was available, so Company D used PRM to recover all the data.

 

In this case, run PRM, enter Recovery Wizard, and select “Non-Dictionary mode”:

 

PRM-DUL-DUL37

PRM-DUL-DUL38

In Non-Dictionary mode, we have to specify the character set and national character set manually, because the character set information cannot be obtained from the lost SYSTEM tablespace.

 

Similar to Case 1, select all datafiles (excluding temp files) and set the correct Block Size and OFFSET.

PRM-DUL-DUL39

Then click the Scan button. PRM will scan all segment headers and extents in the datafiles and record them into SEG$.DAT and EXT$.DAT. In Oracle, every partitioned or non-partitioned table has a segment header; once we find the segment header, we can find the table's complete extent map, and through the extent map we can get every record in the table.

 

There is one exception. Suppose a non-partitioned table is stored in two datafiles: the segment header and half of the data are in datafile A, and the rest is in datafile B. If both the SYSTEM tablespace and datafile A are lost, PRM cannot find the segment header of the table; instead, it can scan datafile B to recover the remaining extent map.

To recover data via segment headers and extent maps in Non-Dictionary mode, PRM creates two files, SEG$.DAT (segment header information) and EXT$.DAT (extent information), and records them in the PRM embedded database.

 

PRM-DUL-DUL41

PRM-DUL-DUL40

After scanning, a database icon appears on the left. Now there are two options:

1. Scan Tables From Segments:

  • The SYSTEM tablespace is lost, but all application data tablespaces exist.

2. Scan Tables From Extents:

  • Not applicable to recovery of truncated data in Dictionary Mode.
  • Both the SYSTEM tablespace and the datafile containing the segment header are lost.

 

It is not necessary to start with the "Scan Tables From Extents" mode unless you cannot find the needed data with "Scan Tables From Segments".

Scanning tables from segments should be your first choice.

 

 

PRM-DUL-DUL42

 

After scanning tables from segments, click the tree diagram on the left.

 
PRM-DUL-DUL43

Scan Tables reconstructs the data based on the segment headers recorded in SEG$. Each node in the tree represents a data segment and is named "OBJ" plus the DATA_OBJECT_ID recorded in the segment.

Click on a node and observe the right side of main interface:

 

PRM-DUL-DUL44

 

 

Intelligent field type analysis

 

Because the SYSTEM tablespace is lost, no table structure information is available in Non-Dictionary mode. Structure information such as column names and column types is stored in the data dictionary, not in the table itself, so PRM has to guess the type of every column.

PRM uses an advanced Java pre-analysis algorithm and can recognize up to 10 main data types.

 

Intelligent analysis can successfully guess more than 90% of columns in most of cases.

 

The meaning of each field on the right side:

 

  • Col1 no
  • Seen Count
  • MAX SIZE
  • PCT NULL
  • String Nice
  • Number Nice
  • Date Nice
  • Timestamp Nice
  • Timestamp with timezone Nice

 

Sample Data Analysis:

PRM-DUL-DUL45

 

Sample-data analysis examines 10 records and displays the results, which help the user understand the contents of each column.

If the data segment contains fewer than 10 records, all of them are displayed.

TRY TO ANALYZE UNKNOWN column type:

PRM-DUL-DUL46

If PRM cannot recognize the column’s data type, you can specify the data type by yourself.

 

So far, PRM does not support the following types: XDB.XDB$RAW_LIST_T, XMLTYPE, and user-defined types.

 

 

Unload Statement:

Here are the UNLOAD statements PRM generated, and these statements can be only used by PRM development team and supporting engineers of ParnassusData.

PRM-DUL-DUL47

In Non-Dictionary Mode, both the normal unload and the DataBridge are available. Compared with Dictionary Mode, the user can set the column types manually when using the DataBridge in Non-Dictionary Mode. In the picture below the column type is UNKNOWN; such columns may be of types that PRM does not support yet, for instance XML.

 

If the user knows the data type in this table (from schema design documents), it is necessary to specify the correct column types manually.
PRM-DUL-DUL48

 

CASE 5: Deleted System Tablespace and Part of User tablespace datafile by mistake

 

The SA of Company D deleted the SYSTEM tablespace and part of the user tablespace datafiles by mistake.

Because some of the deleted datafiles may include the ones that stored segment headers, it is better to use "Scan Tables From Extents" than "Scan Tables From Segment Header" in this case.

 

The brief steps are as follows:

 

  1. Enter the Recovery Wizard, select Non-Dictionary mode, add all usable datafiles, and perform the scan
  2. Select the database and right-click Scan Tables From Extents
  3. Analyze the data, then extract it or use the DataBridge
  4. The following steps are the same as in Case 4

 

CASE 6: Copy DB datafile from damaged ASM diskgroup

Company D has started to use ASM instead of a conventional file system. The 11.2.0.1 version it runs has many bugs; as a result an ASM disk group cannot be mounted and still fails even after the ASM disk headers are repaired.

In this case, the user can use PRM's ASM Files Clone feature to rescue datafiles directly from the damaged ASM disk group.

 

  1. Open main interface, and select ASM File(s) Clone under Tools:

PRM-DUL-DUL49

 

Enter the ASM Disks window and click SELECT... to add ASM disks, for example /dev/asm-disk5 (Linux). Then click ASM analyze.

PRM-DUL-DUL50

PRM-DUL-DUL51

PRM-DUL-DUL52

 

ASM Files Clone analyzes the specified ASM disk headers in order to find the files in the disk group and their extent maps. All of this information is recorded in the PRM embedded database for later use; PRM collects, analyzes and stores the ASM metadata and presents it to the user in various graphical forms.

PRM-DUL-DUL53

After the ASM analysis, PRM lists the files found in the disk groups, and users can select the datafiles/archivelogs to be cloned to a destination folder.

 

Click ASM Clone to start file cloning…

PRM-DUL-DUL54

There is a progress bar of file cloning.

PRM-DUL-DUL55

ASM File Clone log as below:

 

 

Preparing selected files...

Cloning +DATA2/ASMDB1/DATAFILE/TBS2.256.839732369:
................................1024MB
................................2048MB
................................3072MB

 

………………………………….4096MB

………………………………..5120MB

………………………………….6144MB

……………………………….7168MB

…………………………………8192MB

…………………………………9216MB

…………………………………10240MB

…………………………………11264MB

…………………………………..12288MB

…………………………………….13312MB

…………………………….14336MB

……………………………………..15360MB

……………………………….16384MB

…………………………………17408MB

…………………………………18432MB

…………………………………………………………………………………………….19456MB

……………………………………

Cloned size for this file (in byte): 21475885056

 

Cloned successfully!

 

 

Cloning +DATA2/ASMDB1/ARCHIVELOG/2014_02_17/thread_1_seq_47.257.839732751:

……

Cloned size for this file (in byte): 29360128

 

Cloned successfully!

 

 

Cloning +DATA2/ASMDB1/ARCHIVELOG/2014_02_17/thread_1_seq_48.258.839732751:

……

Cloned size for this file (in byte): 1048576

 

Cloned successfully!

 

 

 

 

All selected files were cloned done.

 

It is necessary to validate cloned data via the “dbv” or “rman validate” command, for example:
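A dbv check on a cloned copy might look like the sketch below; the blocksize value must match the datafile's block size (8192 is assumed here):

$ dbv file=/home/oracle/asm_clone/TBS2.256.839732369.dbf blocksize=8192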

$ rman target /

RMAN> catalog datafilecopy '/home/oracle/asm_clone/TBS2.256.839732369.dbf';

cataloged datafile copy

 

datafile copy file name=/home/oracle/asm_clone/TBS2.256.839732369.dbf RECID=2 STAMP=839750901

 

RMAN> validate datafilecopy '/home/oracle/asm_clone/TBS2.256.839732369.dbf';

 

Starting validate at 17-FEB-14

using channel ORA_DISK_1

channel ORA_DISK_1: starting validation of datafile

channel ORA_DISK_1: including datafile copy of datafile 00016 in backup set

input file name=/home/oracle/asm_clone/TBS2.256.839732369.dbf

channel ORA_DISK_1: validation complete, elapsed time: 00:03:35

List of Datafile Copies

=======================

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN

—- —— ————– ———— ————— ———-

16   OK     0              2621313      2621440         1945051

File Name: /home/oracle/asm_clone/TBS2.256.839732369.dbf

Block Type Blocks Failing Blocks Processed

———- ————– —————-

Data       0              0

Index      0              0

Other      0              127

 

Finished validate at 17-FEB-14

 

 

How to use PRM in ASM environment with ASMLIB?

ASM disks managed by ASMLIB appear in the OS under /dev/oracleasm/disks.

For example, add the files under /dev/oracleasm/disks as ASM disks in PRM:

$ ll /dev/oracleasm/disks
total 0
brw-rw----  1 oracle dba 8,  97 Apr 28 15:20 VOL001
brw-rw----  1 oracle dba 8,  81 Apr 28 15:20 VOL002
brw-rw----  1 oracle dba 8,  65 Apr 28 15:20 VOL003
brw-rw----  1 oracle dba 8,  49 Apr 28 15:20 VOL004
brw-rw----  1 oracle dba 8,  33 Apr 28 15:20 VOL005
brw-rw----  1 oracle dba 8,  17 Apr 28 15:20 VOL006
brw-rw----  1 oracle dba 8, 129 Apr 28 15:20 VOL007
brw-rw----  1 oracle dba 8, 113 Apr 28 15:20 VOL008

 

CASE 7: DB stored in ASM cannot be opened

One of Company D's CRM databases cannot be opened: I/O errors on a few disks in the ASM disk group produced corrupted blocks in the SYSTEM tablespace datafile, so the database can no longer be opened.

 

In this case, we can use PRM's ASM clone feature to copy all datafiles out of ASM.

 

 

Alternatively, users can use "Dictionary Mode (ASM)" to recover data from this ASM environment. The steps are as follows:

 

  1. Recovery Wizard
  2. Dictionary Mode (ASM)
  3. Add ASM disks (all the ASM disks in the disk group you want to recover)
  4. Click ASM analyze
  5. Select the suitable Endian
  6. Select the needed datafiles from the list produced by the ASM analysis, or click "select all"
  7. Click "load"; the following steps are the same as in Case 3

 

PRM-DUL-DUL56

PRM-DUL-DUL57

PRM-DUL-DUL58

PRM-DUL-DUL59

PRM-DUL-DUL60

 

CASE 8: Recovery of Mistakenly deleted or Lost system tablespace in ASM 

The operations staff of Company D deleted the SYSTEM tablespace datafile (FILE#=1) and a user tablespace by mistake, so the database cannot be opened.

In this case, users can use "Non-Dictionary Mode (ASM)" to recover the data.

 

 

Steps are as below:

 

 

  1. Recovery Wizard
  2. Non-Dictionary Mode (ASM)
  3. Add the necessary ASM disks
  4. Click ASM analyze
  5. Select the suitable Endian and character set (the character set must be chosen manually in Non-Dictionary Mode)
  6. Select all datafiles, or click "Select all"
  7. Click "scan"; the following steps are the same as in Case 3

 

PRM-DUL-DUL61

PRM-DUL-DUL62

PRM-DUL-DUL63

PRM-DUL-DUL64

 

CASE 9: Data Recovery of Dropped Tablespace

Staff at Company D dropped a tablespace (DROP TABLESPACE ... INCLUDING CONTENTS) by mistake. They want to recover the data that resided in that tablespace, but there is no RMAN backup.

We can use PRM in Non-Dictionary mode to recover most of the data; however, the data is no longer mapped to the dictionary, so users have to identify the tables manually. Because the DROP changed the data dictionary and deleted the corresponding rows in OBJ$, the mapping between DATA_OBJECT_ID and OBJECT_NAME is lost. Below is how to rebuild that mapping.

 

SQL> select tablespace_name, segment_type, count(*)
       from dba_segments
      where owner='PARNASSUSDATA'
      group by tablespace_name, segment_type;

TABLESPACE SEGMENT_TYPE      COUNT(*)
---------- --------------- ----------
USERS      TABLE                  126
USERS      INDEX                  136

 

SQL> select count(*) from obj$;

 

COUNT(*)

———-

75698

 

 

SQL> select current_scn, systimestamp from v$database;

 

CURRENT_SCN

———–

SYSTIMESTAMP

—————————————————————————

1895940

25-APR-14 09.18.00.628000 PM +08:00

 

 

 

SQL> select file_name from dba_data_files where tablespace_name='USERS';

 

FILE_NAME

——————————————————————————–

H:\PP\MACLEAN\ORADATA\PARNASSUS\DATAFILE\O1_MF_USERS_9MNBMJYJ_.DBF

 

 

SQL> drop tablespace users including contents;

 

 

C:\Users\maclean>dir H:\APP\MACLEAN\ORADATA\PARNASSUS\DATAFILE\O1_MF_USERS_9MNBMJYJ_.DBF

 

Volume in drive H is entertainment
Volume Serial Number is A87E-B792

 

H:\APP\MACLEAN\ORADATA\PARNASSUS\DATAFILE

 

File Not Found

 

 

Here, we can use file recovery tools, for example: Undeleter on Windows, to restore the accidentally deleted datafile.

PRM-DUL-DUL65

 

Start up PRM => recovery Wizard => No-Dictionary mode

 

PRM-DUL-DUL66

PRM-DUL-DUL67

Since this is Non-Dictionary mode, select the correct character set manually.
PRM-DUL-DUL68

Add the recovered files and Click scan.

PRM-DUL-DUL69

PRM-DUL-DUL70

Then scan the tables from the segment headers. If not all tables can be found from the segment headers, try the extent scan:

PRM-DUL-DUL71

Now you can see many tables named OBJXXXXX, where the name is a combination of "OBJ" and the DATA_OBJECT_ID. Technicians familiar with the schema design and application data can match these nodes to the application tables by browsing the sample data analysis.

 

PRM-DUL-DUL72

If no one can help clarify the relationship between data and table, try the following methods:

 

In this case, only the tablespace was dropped and the instance is still open, so we can try a FLASHBACK QUERY to get the mapping between DATA_OBJECT_ID and table name.

SQL> select count(*) from sys.obj$;

  COUNT(*)
----------
     75436

 

SQL> select count(*) from sys.obj$ as of scn 1895940;

select count(*) from sys.obj$ as of scn 1895940
       *
ERROR at line 1:
ORA-01555: snapshot too old

 

As a fallback, use the AWR view DBA_HIST_SQL_PLAN to find the mapping between OBJECT# and OBJECT_NAME captured over the last 7 days.

 

SQL> desc DBA_HIST_SQL_PLAN

NAME                                        NULL? TYPE

—————————————– ——– ———————–

DBID                                      NOT NULL NUMBER

SQL_ID                                    NOT NULL VARCHAR2(13)

PLAN_HASH_VALUE                           NOT NULL NUMBER

ID                                        NOT NULL NUMBER

OPERATION                                          VARCHAR2(30)

OPTIONS                                            VARCHAR2(30)

OBJECT_NODE                                        VARCHAR2(128)

OBJECT#                                            NUMBER

OBJECT_OWNER                                       VARCHAR2(30)

OBJECT_NAME                                        VARCHAR2(31)

OBJECT_ALIAS                                       VARCHAR2(65)

OBJECT_TYPE                                        VARCHAR2(20)

OPTIMIZER                                          VARCHAR2(20)

PARENT_ID                                          NUMBER

DEPTH                                              NUMBER

POSITION                                           NUMBER

SEARCH_COLUMNS                                     NUMBER

COST                                               NUMBER

CARDINALITY                                        NUMBER

BYTES                                              NUMBER

OTHER_TAG                                          VARCHAR2(35)

PARTITION_START                                    VARCHAR2(64)

PARTITION_STOP                                     VARCHAR2(64)

PARTITION_ID                                       NUMBER

OTHER                                              VARCHAR2(4000)

DISTRIBUTION                                       VARCHAR2(20)

CPU_COST                                           NUMBER

IO_COST                                            NUMBER

TEMP_SPACE                                         NUMBER

ACCESS_PREDICATES                                  VARCHAR2(4000)

FILTER_PREDICATES                                  VARCHAR2(4000)

PROJECTION                                         VARCHAR2(4000)

TIME                                               NUMBER

QBLOCK_NAME                                        VARCHAR2(31)

REMARKS                                            VARCHAR2(4000)

TIMESTAMP                                          DATE

OTHER_XML                                          CLOB

 

 

For example:

 

select object_owner, object_name, object# from DBA_HIST_SQL_PLAN where sql_id='avwjc02vb10j4'

 

OBJECT_OWNER         OBJECT_NAME                                 OBJECT#

——————– —————————————- ———-

 

PARNASSUSDATA        TORDERDETAIL_HIS                              78688

 

 

 

Use the script below to find the mapping between OBJECT_ID and OBJECT_NAME:

 

select * from
  (select object_name, object# from DBA_HIST_SQL_PLAN
   union
   select object_name, object# from GV$SQL_PLAN) v1
 where v1.object# is not null
minus
select name, obj# from sys.obj$;

 

select obj#,dataobj#, object_name from WRH$_SEG_STAT_OBJ where object_name not in (select name from sys.obJ$) order by object_name desc;

 

 

another script:

SELECT tab1.SQL_ID,

current_obj#,

tab2.sql_text

FROM DBA_HIST_ACTIVE_SESS_HISTORY tab1,

dba_hist_sqltext tab2

WHERE tab1.current_obj# NOT IN

(SELECT obj# FROM sys.obj$

)

AND current_obj#!=-1

AND tab1.sql_id  =tab2.sql_id(+);

 

 

Attention: since this relies on the AWR repository, the mapping obtained this way is not guaranteed to be complete or exact.

 

CASE 10: Data Recovery of Dropped Table by mistake.

The application developers of Company D dropped a core application table stored in ASM, without any backup. Oracle has provided the recycle bin feature since 10g, so first check whether the dropped table is still in the recycle bin by querying DBA_RECYCLEBIN; if it is, flash it back to before the drop. Otherwise, use PRM for recovery as soon as possible.
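A minimal sketch of the recycle bin check and flashback, using standard syntax and the owner/table name from this case:

SQL> select owner, object_name, original_name, droptime
     from dba_recyclebin
     where original_name = 'TORDERDETAIL_HIS';

SQL> flashback table maclean.torderdetail_his to before drop;

Only if the table is no longer in the recycle bin (for example, it was purged or its space was reclaimed) do you need to fall back to PRM.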

The brief steps of Recovery by PRM:

  1. OFFLINE the tablespace where the dropped table resided
  2. Find the DATA_OBJECT_ID of the dropped table through data dictionary queries or LogMiner; if that fails, users have to identify the table manually in Non-Dictionary mode
  3. Start PRM, enter Non-Dictionary mode, add all datafiles of the dropped table's tablespace, then SCAN DATABASE + SCAN TABLE from Extent MAP
  4. Locate the table by DATA_OBJECT_ID in the object tree, and insert the data back into the source database by DataBridge

SQL> select count(*) from "MACLEAN"."TORDERDETAIL_HIS";

  COUNT(*)
----------
    984359

 

SQL>

SQL> create table maclean.TORDERDETAIL_HIS1 as select * from  maclean.TORDERDETAIL_HIS;

 

Table created.

 

SQL> drop table maclean.TORDERDETAIL_HIS;

 

Table dropped.

 

We can get the DATA_OBJECT_ID either with LogMiner or with the method described in Case 9:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

 

SELECT * FROM V$LOGMNR_CONTENTS ;

 

EXECUTE DBMS_LOGMNR.END_LOGMNR;

 

Even if no DATA_OBJECT_ID can be obtained, we can still locate the table to be recovered by manually examining the data, provided there are not too many tables.

 

First, OFFLINE the tablespace of dropped table.

 

SQL> select tablespace_name from dba_segments where segment_name='TPAYMENT';

TABLESPACE_NAME
------------------------------
USERS

 

SQL> select file_name from dba_data_files where tablespace_name='USERS';

 

FILE_NAME

—————————————————————-

+DATA1/parnassus/datafile/users.263.843694795

 

SQL> alter tablespace users offline;

 

Tablespace altered.

 

Start PRM in NON-DICT mode, add the corresponding datafile and select SCAN DATABASE+SCAN TABLE From Extents:

PRM-DUL-DUL73

PRM-DUL-DUL74

 

Add all of the related ASM Disks and click ASM Analyze:

PRM-DUL-DUL75

 

Select the character set in Non-Dict  mode:

PRM-DUL-DUL76

Select the datafile of dropped table, and click scan:

PRM-DUL-DUL77

PRM-DUL-DUL78

Click the generated database name and right click to select scan tables from extents:

PRM-DUL-DUL79

PRM-DUL-DUL80

By manual identification we find that the data with DATA_OBJECT_ID=82641 corresponds to the dropped TORDERDETAIL_HIS table, and we send it back into another tablespace of the source database with DataBridge.

 

PRM-DUL-DUL81PRM-DUL-DUL82

 

PRM-DUL-DUL83

 

FAQ

  1. How to get my database character set information?

 

 

You can find the database character set information in the Oracle alert log:

[oracle@mlab2 trace]$ grep -i character alert_Parnassus.log
Database Characterset is US7ASCII

Database Characterset is US7ASCII

alter database character set INTERNAL_CONVERT AL32UTF8

Updating character set in controlfile to AL32UTF8

Synchronizing connection with database character set information Refreshing type attributes with new character set information

Completed: alter database character set INTERNAL_CONVERT AL32UTF8

alter  database  national  character  set  INTERNAL_CONVERT  UTF8

Completed: alter database national character set INTERNAL_CONVERT UTF8 Database Characterset is AL32UTF8

Database Characterset is AL32UTF8

Database Characterset is AL32UTF8
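If the database is still open, an alternative is to query the NLS dictionary view directly:

SQL> select parameter, value
     from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');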

 

  2. Why did PRM fail with the GC warning "gc warning: Repeated allocation of very large block (appr. size 512000)"?

 

So far, most such problems are caused by running PRM in a Java environment that is not recommended; in particular, the Red Hat GCJ Java on Linux easily leads to this warning. ParnassusData suggests using OpenJDK 1.6 for PRM, or starting PRM directly with $JAVA_HOME/bin/java -jar prm.jar.

 

Open JDK for Linux download Link:

 

 

Open JDK x86_64 for Linux 5       http://pan.baidu.com/s/1qWO740O
Tzdata-java x86_64 for Linux 5    http://pan.baidu.com/s/1gdeiF6r
Open JDK x86_64 for Linux 6       http://pan.baidu.com/s/1mg0thXm
Open JDK x86_64 for Linux 6       http://pan.baidu.com/s/1sjQ7vjf
Open JDK x86 for Linux 5          http://pan.baidu.com/s/1kT1Hey7
Tzdata-java x86 for Linux 5       http://pan.baidu.com/s/1kT9iBAn
Open JDK x86 for Linux 6          http://pan.baidu.com/s/1sjQ7vjf
Tzdata-java x86 for Linux 6       http://pan.baidu.com/s/1kTE8u8n

 

 

JDK on Other platforms download link:

 

 

AIX JAVA SDK 7               http://pan.baidu.com/s/1i3JvAlv
JDK Windows x86              http://pan.baidu.com/s/1qW38LhM
JDK Windows x86-64           http://pan.baidu.com/s/1qWDcoOk
Solaris JDK 7 x86-64bit      http://pan.baidu.com/s/1gdzgSvh
Solaris JDK 7 x86-32bit      http://pan.baidu.com/s/1mgjxFlQ
Solaris JDK 7 Sparc          http://pan.baidu.com/s/1pJjX3Ft

 

 

Oracle JDK download link:

 

 

http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html#jdk-6u45-oth-JPR

 

  3. If you find a bug in PRM, how do you report it to ParnassusData?

Everyone is welcome to report bugs to ParnassusData by emailing report_bugs@parnassusdata.com. When reporting a bug, please include a detailed description of the operating environment, including the OS, Java environment and Oracle database versions.

 

  4. What should I do if PRM fails with the following error?

Error: no `server' JVM at `D:\Program Files (x86)\Java\jre1.5.0_22\bin\server\jvm.dll'.

If only the Java Runtime Environment (JRE) is installed, without the JDK, start PRM without the -server option; this option is not available in JRE versions before 1.5.

ParnassusData recommends Open JDK 1.6 or above for running PRM.

 

 

The download link of JDK 1.6 for various OSes:
http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html#jdk-6u45-oth-JPR

 

 

  5. Why does PRM display Chinese as garbled text?

 

 

So far, there are two reasons for Chinese encoding problem:

 

 

  • The OS does not have the Chinese language pack installed, so PRM cannot display Chinese correctly.
  • If the OS does have the necessary language pack, use OpenJDK 1.6 or above; there may be display problems with JDK 1.4.

Find More

 

 

 

Resources:         http://www.parnassusdata.com/resources/
Technical Support: service@parnassusdata.com
Sales:             sales@parnassusdata.com
Download Software: http://www.parnassusdata.com/
Contact:           http://www.parnassusdata.com/zh-hans/contact

 

 

 

 

ParnassusData Corporation, Shanghai, GaoPing Road No. 733. China Phone: (+86) 13764045638

ParnassusData.com

Facebook: http://www.facebook.com/parnassusData
Twitter:  http://twitter.com/ParnassusData

Weibo: http://weibo.com/parnassusdata

 

 

 

Copyright©2013, ParnassusData and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

 

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

 

AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 0410

 

Copyright © 2014 ParnassusData Corporation. All Rights Reserved.
