Channel: Oracle and MySQL Database Recovery Repair Software recover delete drop truncate table corrupted datafiles dbf asm diskgroup blogs

Urgent recovery of an Oracle database


I have a case where a data file (Oracle 11gR2) has a corrupted header (block #1) – see the log. Do you have any workaround to read the remaining data in the file even though the file header is missing? Your scan read only 10MB of data.

 

Answer:

 

Our PRM can work with a lost datafile header; it requires a special license key.


Help on ORA-01157 cannot identify datafile 1, file not found!!!


 

My Oracle 7 database installed on UnixWare 7 crashed and will not start up, displaying this error message:

ORA-01157: cannot identify datafile 1, file not found!!!

The SYSTEM dbf file is corrupted and I do not have a backup!!!

I checked permissions, the existence of the file, and recent system maintenance operations without finding anything strange.

Any ideas for this disaster, please?

 

What is the database size? Can you please upload the datafile to Dropbox/Google Drive? We can help you recover data from the corrupted database using our tools.

 

database recover with DUL




I would like to know about your DUL as a service.

I have a 2 TB database which is logically corrupted and we would like to unload the data.

We don't have a DBA on site and would like a quote for how much this service costs.

 

A PRM-DUL license costs 1,500 USD.

 

Oracle Data Recovery Issue

=== SUMMARY OF THE ISSUE ===
 
 
 
1. We lost data from a few partitions and need to restore and recover the data.
 
The table is range-partitioned and we have lost 9 partitions. Each partition is roughly 350GB (9 partitions * 350GB ≈ 3TB).
 
2. The only backup available is from Jan 25 2015. This backup does not include SYSTEM tablespace backups.
 
3. We built a new environment and first restored datafiles from Jan 2015 backups.
 
4. Then we used the most recent backup (Oct 2016) to restore the SYSTEM tablespace to the new environment. We are facing issues opening the database.
 
5. The data is available in the backup datafiles. We need to recover the data from those datafiles.
 
 
 
I have gone through the Oracle PRM-DUL document and could not find any case study on recovering LOB data from truncated partitions.
We have to recover the data from a few partitions (9 lost partitions) of a range-partitioned table.
The partitions contain the columns "FILE_BLOB" BLOB and "FILE_XML" CLOB. Could you please provide a document that explains how to recover LOB data from a partitioned table?
 
 PRM can recover BLOB/CLOB data from a dropped partition in non-dictionary mode. First run the recovery wizard in non-dictionary mode and scan the whole database/tablespace; the dropped partition will then be listed by its data object id. Select the dropped partition object in the object tree, specify the correct data type for every column, and then use Data Bridge to recover your data.
 
 Please let me know if you still have problems. This procedure is similar to recovering a dropped table, but you have to specify every column's data type, especially the LOB columns.
 

prm-dul beta release 4508

prm-dul beta release 4509

prm dul release 5108


 

prm dul release 5108

http://zcdn.parnassusdata.com/DUL5108.zip

 

changelogs:

1. Release 5108 now supports Oracle 12.2. Oracle 12.2 has additional columns in sys.tab$ and sys.obj$, which makes the dictionary different from 12.1.

2. PRM now supports Oracle 12.2 PDB/CDB features; a pluggable database (PDB) can now be bootstrapped as a single database.

3. PRM now ignores SYS_NC000 pseudo columns.

4. PRM now supports a corrupted data file header: if malware/ransomware damaged the file header, the user can tell PRM the data file's database version, tablespace number and relative file number, so PRM can bootstrap a database with damaged file headers like a normal one.

5. PRM now supports NVARCHAR, BLOB and CLOB column identification in non-bootstrap mode.

 

 

 

prm dul alpha release 5109


prm dul release 5108 rc2

prmscan oracle block fragmentation recovery

prm dul release 5108 rc3


changelogs:

 

Export DDL now supports multi-column primary keys; NVARCHAR length no longer uses byte semantics.

commons-io-2.6.jar backported to commons-io-2.4.jar; ASM clone now works.

Changed author laohu's contact info.

https://zcdn.parnassusdata.com/DUL5108rc3.zip

prm dul release 5108 rc5

prm dul guide video


 

 

 

prm dul recover oracle database easiest way                             https://youtu.be/mU3uip66DmY
prm dul databridge transfer oracle table                                https://youtu.be/yzvVSBnQ23g
prm dul export ddl from corrupted oracle database                       https://youtu.be/5l2hO5k5-PQ
prm dul recover oracle deleted rows                                     https://youtu.be/hIYutqNcVBI
prm dul recover oracle truncated table                                  https://youtu.be/KGrCi25sD3c
prm dul schema level databridge                                         https://youtu.be/RocbEFlPr3o
prm dul easiest way with ASM storage                                    https://youtu.be/EaMsSaCtje4
prm dul recover oracle dropped table                                    https://youtu.be/mdPGSjDvX6o
prm dul work with oracle 12c pdb pluggable database container database  https://youtu.be/QyDMsdmRfqU
prm dul recover malware/ransomware corrupted oracle datafile            https://youtu.be/jOT6k-KF8Hg
prmscan oracle block fragmentation recovery                             https://youtu.be/skH9nJOvIkQ
prmscan extract datafile from oracle asm diskgroup               https://youtu.be/Btt3kpPm3Qs
prm dul supports all version oracle pluggable database                https://youtu.be/NfGQ3HD4AGY
 
 
 
 
 
Using ORACLE PRM-DUL recover undelete deleted records/rows from table                https://youtu.be/EQeClR4sxUM
PRM-DUL untruncate Oracle Tables ,recover truncated oracle table data                      https://youtu.be/p7KQVt0raro
PRM For Oracle Database Schema Level DataBridge Key Feature                               https://youtu.be/XF57QJg89NI
How to recover truncated table without backup in oracle                                               https://youtu.be/z02YvkNP040
PRM 3.1 For Oracle ASM Extract Datafile From Damaged ASM Disk group                  https://youtu.be/rum9euHYuzw
 
 
 

What are the odds a MySQL table can be recovered?

This is the most frequently asked question. Every single customer asks if their MySQL table can be recovered. Although it's not possible to answer that with 100% confidence, there are ways to estimate the recovery chances. I will describe a few tricks.
 
Generally speaking, if the data is still on the media there are high odds the TwinDB data recovery toolkit can fetch it. Where to look depends on the type of accident.
 
Online MySQL data recovery toolkit
On our Data Recovery portal you can upload an .ibd file and check whether the InnoDB tablespace contains any good records. The tablespace may be corrupt; the tool should handle that.
 
MySQL data recovery portal
 
DROP TABLE or DATABASE with innodb_file_per_table=OFF
If innodb_file_per_table is OFF, InnoDB stores all tables in one file, ibdata1. When a table or database is dropped, the pages with its data are marked as free, and InnoDB may reuse them for new data. It's important to stop writes to ibdata1 as soon as possible; if MySQL keeps running for a while, InnoDB might overwrite some of the data.
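 
A minimal sketch of the immediate first aid, assuming the default /var/lib/mysql layout (the destination /root/recovery is just a placeholder):
 
# service mysql stop
# mkdir -p /root/recovery
# cp /var/lib/mysql/ibdata1 /root/recovery/ibdata1   # work on the copy, keep the original untouched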
 
Let’s take table actor from sakila database as an example:
 
CREATE TABLE `actor` (
`actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(45) NOT NULL,
`last_name` varchar(45) NOT NULL,
`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`actor_id`),
KEY `idx_actor_last_name` (`last_name`)
) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8
Fields first_name and last_name are strings and they come next to each other; in the InnoDB page these fields are also located next to each other. InnoDB doesn't terminate strings with '\0', so if first_name is WOODY and last_name is HOFFMAN you will find the string WOODYHOFFMAN in the InnoDB page. So, take grep and try to find that string:
 
# grep WOODYHOFFMAN ibdata1
Binary file ibdata1 matches
So it's likely the record is still in ibdata1 and not overwritten. However, the string may be the remains of system buffers. To be sure the string comes from a good index page I use bvi. It stands for binary vi and works pretty much like vi; in particular, search works the same way. I can scroll through ibdata1 and see in what context WOODYHOFFMAN shows up. Here's how an InnoDB index page looks:
 
0013C44C  00 00 00 00 07 0E 94 00 00 01 3D 02 ..........=.
0013C458  14 4A 55 4C 49 41 4D 43 51 55 45 45 .JULIAMCQUEE
0013C464  4E 43 F2 AF 59 07 05 04 00 E8 00 26 NC..Y......&
0013C470  00 1C 00 00 00 00 07 0E 94 00 00 01 ............
0013C47C  3D 02 1E 57 4F 4F 44 59 48 4F 46 46 =..WOODYHOFF
0013C488  4D 41 4E 43 F2 AF 59 05 04 00 00 F0 MANC..Y.....
0013C494  00 23 00 1D 00 00 00 00 07 0E 94 00 .#..........
0013C4A0  00 01 3D 02 28 41 4C 45 43 57 41 59 ..=.(ALECWAY
0013C4AC  4E 45 43 F2 AF 59 04 06 00 00 F8 00 NEC..Y......
0013C4B8  24 00 1E 00 00 00 00 07 0E 94 00 00 $...........
0013C4C4  01 3D 02 32 53 41 4E 44 52 41 50 45 .=.2SANDRAPE
0013C4D0  43 4B 43 F2 AF 59 08 05 00 01 00 00 CKC..Y......
0013C4DC  27 00 1F 00 00 00 00 07 0E 94 00 00 '...........
If you scroll up you'll see the infimum and supremum records – the ones an index page starts with:
 
0013C038  00 00 00 00 00 00 00 00 00 00 00 00 ............
0013C044  00 00 00 00 00 7D 00 00 00 00 00 00 .....}......
0013C050  01 EE 03 32 00 00 00 00 00 00 01 EE ...2........
0013C05C  02 72 01 00 02 00 1C 69 6E 66 69 6D .r.....infim
0013C068  75 6D 00 05 00 0B 00 00 73 75 70 72 um......supr
0013C074  65 6D 75 6D 07 08 00 00 10 00 29 00 emum......).
0013C080  01 00 00 00 00 07 0E 94 00 00 01 3D ...........=
0013C08C  01 10 50 45 4E 45 4C 4F 50 45 47 55 ..PENELOPEGU
0013C098  49 4E 45 53 53 43 F2 AF 59 08 04 00 INESSC..Y...
0013C0A4  00 18 00 26 00 02 00 00 00 00 07 0E ...&........
DROP TABLE or DATABASE with innodb_file_per_table=ON
The same principle applies if innodb_file_per_table is ON.
 
The difference, however, is that InnoDB deletes the *.ibd file with the data from the file system when you DROP TABLE or DATABASE. That means the data may be anywhere in the free space of the file system. In this case I recommend remounting the disk partition with the MySQL data read-only as soon as possible. Otherwise not only MySQL but any process may overwrite the data.
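 
A minimal sketch, assuming the data directory is a dedicated mount point at /var/lib/mysql (stop MySQL first, otherwise the remount fails because files are still open for writing):
 
# service mysql stop
# mount -o remount,ro /var/lib/mysql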
 
To find the original records you can use grep:
 
# grep NICKWAHLBERG /dev/sda1
Binary file /dev/sda1 matches
bvi works as badly on large files as vi does, so I use hexdump -C and less. Searching this way is less reliable, though, because strings may be wrapped across lines.
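 
For example (once less is open, search with the / command, e.g. /NICKWAHLBERG):
 
# hexdump -C /dev/sda1 | less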
 
Corrupted InnoDB table
Depending on innodb_file_per_table you can look for the data in ibdata1 or in the respective *.ibd file. If the records look good then the table is recoverable. Often corruption touches the headers. For InnoDB that's critical, but the data recovery toolkit can ignore the corrupted bits and fetch what looks like good records.

MySQL Resolving ERROR 1050 42S01 at line 1 Table already exists

When ALTER TABLE crashes the MySQL server it leaves orphaned records in the InnoDB dictionary. It is annoying because the next time you run the same ALTER TABLE query it will fail with an error:
 
ERROR 1050 (42S01) at line 1: Table 'sakila/#sql-ib712' already exists
The post explains why it happens and how to fix it.
 
When you run ALTER TABLE, InnoDB follows this plan:
 
Block the original table
Create an empty temporary table with the new structure. The name of the new table is something like #sql-ib712.
Copy all records from the original table to the temporary one
Swap the temporary and original tables
Unblock the original table
The temporary table is a normal InnoDB table except it’s not visible to a user. InnoDB creates a record in the dictionary for the temporary table as for any other table.
 
If MySQL crashes in the middle of the ALTER process the dictionary ends up with an orphaned table.
 
We wouldn't care much if the temporary table name were random. But it's not, and when you run ALTER TABLE again, InnoDB picks the same name for the temporary table. Because a record for a table with that name already exists in the dictionary, the subsequent ALTER fails.
 
How to fix “ERROR 1050 (42S01) at line 1: Table ‘sakila/#sql-ib712’ already exists”
MySQL suggests a quite cumbersome method. In short, you need to fool MySQL with a fake .frm file so you can DROP the temporary table with an SQL query. It works fine, but the structure of the fake table in the .frm file must match the structure in the dictionary, which is not that easy to find out. Fortunately you don't need to.
 
The idea is the following.
 
Not only does DROP TABLE remove records from the InnoDB dictionary, DROP DATABASE does it too.
 
In the case of DROP TABLE you need to specify the exact name of the table, while in the case of DROP DATABASE InnoDB will delete all tables of the given database.
 
To get a clean dictionary for a given database we need to do the following:
 
Create an empty temporary database. Let it be tmp1234.
Move all tables from the original database to tmp1234.
Drop the original database (it's empty by now, all tables are in tmp1234).
Create the original database again.
Move all tables from the temporary database back to the original one.
Drop the empty temporary database.
Here's a script that performs this task. It must be run as root, and the mysql command should connect to the server without asking for a password. Stop all writes to the database before running the script.
 
 
 
 
 
 
 
#!/usr/bin/env bash
set -eu
# Iterate over every user database; skip the system schemas.
for db in `mysql -NBe "SHOW DATABASES" | grep -wv -e mysql -e information_schema -e performance_schema`; do
        db_tmp=tmp$RANDOM
        # Skip databases that contain views or other non-base tables.
        c=`mysql -NBe "SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA = '$db' AND TABLE_TYPE <> 'BASE TABLE'"`
        if [ "$c" -ne 0 ]; then
                echo "There are non-base tables (views etc) in $db"
                continue
        fi
        mysql -e "CREATE DATABASE \`$db_tmp\`"
        IFS="
"
        # Move every base table to the temporary database.
        for t in `mysql -NBe "SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA = '$db' AND TABLE_TYPE = 'BASE TABLE'"`; do
                echo "Moving $db.$t to $db_tmp.$t"
                mysql -e "RENAME TABLE \`$db\`.\`$t\` TO \`$db_tmp\`.\`$t\`"
        done
        n=`mysql -NBe "SHOW TABLES" "$db" | wc -l`
        if [ "$n" -ne 0 ]; then
                echo "there are $n tables in $db , not gonna drop it!"
                exit 1
        fi
        # Remove leftover files so DROP DATABASE does not choke on them.
        datadir=`mysql -NBe "SELECT @@datadir"`
        rm -f "$datadir/$db/"*
        mysql -e "DROP DATABASE \`$db\`"
        mysql -e "CREATE DATABASE \`$db\`"
        # Move the tables back into the re-created original database.
        for t in `mysql -NBe "SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA = '$db_tmp' AND TABLE_TYPE = 'BASE TABLE'"`; do
                echo "Moving $db_tmp.$t to $db.$t"
                mysql -e "RENAME TABLE \`$db_tmp\`.\`$t\` TO \`$db\`.\`$t\`"
        done
        n=`mysql -NBe "SHOW TABLES" "$db_tmp" | wc -l`
        if [ "$n" -ne 0 ]; then
                echo "there are $n tables in $db_tmp , not gonna drop it!"
                exit 1
        fi
        mysql -e "DROP DATABASE \`$db_tmp\`"
done
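 
On MySQL 5.6 and later you can check whether orphaned temporary tables are still present in the dictionary before and after running the script (this INFORMATION_SCHEMA table does not exist in 5.5):
 
mysql> SELECT NAME FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME LIKE '%#sql%';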

MySQL Resolving page corruption in compressed InnoDB tables

Sometimes corruption is not true corruption: corruption reported for compressed InnoDB tables may be a false positive.
 
A compressed InnoDB table may hit a false checksum verification failure. The bug (http://bugs.mysql.com/bug.php?id=73689) reveals itself in the error log as follows:
 
2014-10-18 08:26:31 7fb114254700 InnoDB: Compressed page type (17855); stored checksum in field1 0; calculated checksums for field1: crc32 4289414559, innodb 0, none 3735928559; page LSN 24332465308430; page number (if stored to page already) 60727; space id (if stored to page already) 448
InnoDB: Page may be an index page where index id is 516
 
Every InnoDB page stores a checksum in its first four bytes. When InnoDB reads a page it compares the checksum stored in the page with the checksum calculated from the page content. If the checksums don't match, InnoDB believes the page is corrupt and crashes to prevent further corruption.
 
Zero, however, is a valid checksum value. In a database as large as 64 TiB (2^32 pages * 16 KiB per page, roughly 70 TB) there will on average be one page whose calculated checksum is zero. So it's a quite probable event on modern databases, where terabyte-scale MySQL instances aren't rare.
 
The MySQL documentation suggests that with default settings the stored checksum must match any of the three checksum algorithms: none, innodb or crc32.
 
 
 
 
 
 
 
innodb is the old algorithm and the only one available up until 5.6.2; since 5.6.3 crc32 is available. crc32 is a faster implementation, and it may also be calculated in hardware if the CPU supports that.
 
I got confused by the none algorithm. Although the table hints that the page stores a hard-coded value that's checked while reading, none actually means checksums are disabled, which is what the manual says further on.
 
Having said that, if the stored checksum is zero and the calculated value is also zero, the verification should pass. Actually it doesn't: InnoDB assumes the page must be empty if the stored checksum is zero:
 
 
 
$ cat page/page0zip.cc
 
page_zip_verify_checksum(
...
       /* declare empty pages non-corrupted */
        if (stored == 0) {
                /* make sure that the page is really empty */
                ulint i;
                for (i = 0; i < size; i++) {
                        if (*((const char*) data + i) != 0) {
                                return(FALSE);
                        }
                }
 
                return(TRUE);
        }
 
 
 
This bug is fixed in 5.6.22, which hasn't been released yet, so to deal with the "corruption" crc32 should be used.
 
To convert an InnoDB tablespace to crc32 checksums two steps should be done.
First, start MySQL with innodb_checksum_algorithm=none.
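 
A minimal my.cnf fragment for that first step (merge it into your existing [mysqld] section):
 
$ cat /etc/my.cnf
...
[mysqld]
...
innodb_checksum_algorithm=none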
 
 
 
 
 
Disabled checksums let InnoDB read the page without crashing and rebuild it. Second, switch the checksum algorithm to crc32 in my.cnf:
 
 
 
$ cat /etc/my.cnf
...
[mysqld]
...
innodb_checksum_algorithm=crc32
 
 
 
 
And rebuild the table again:
 
mysql> ALTER TABLE sakila.actor ENGINE InnoDB ROW_FORMAT Compressed;
Query OK, 0 rows affected (0.04 sec)
Records: 0  Duplicates: 0  Warnings: 0
 
 
 
crc32 will produce a different checksum, so InnoDB will run fine. Of course, there is a non-zero probability that crc32 will return zero on a non-empty page, so it's better to upgrade to 5.6.22 once it's released.
 
 
 

How to handle wrong page type in external pages

The first step of a successful MySQL data recovery is to find the InnoDB pages with your data. Let's call it the first step because the prerequisite steps are already done.
 
The InnoDB page type is a two-byte integer stored in the header of a page. For MySQL data recovery two types are important:
 
FIL_PAGE_INDEX. Pages of this type are nodes of B+ Tree index where InnoDB stores a table.
FIL_PAGE_TYPE_BLOB. So called external pages, where InnoDB keeps long values of BLOB or TEXT type.
stream_parser reads a stream of bytes, finds InnoDB pages and sorts them per type and per index or page id. It applies sophisticated algorithms tailored to each particular page type. Of course, it assumes that the page type in the header corresponds to the content of the page; otherwise it will ignore the page.
 
Recently I worked on a data recovery case that proved my assumptions wrong. The customer dropped their database. They ran MySQL 5.0 with innodb_file_per_table=OFF. This is one of the easiest recovery scenarios, yet not everything was recovered – the most important table was missing its BLOB fields. The total number of recovered records was close to the true value, but the BLOB fields were truncated. Excessive "— #####CannotOpen_FIL_PAGE_TYPE_BLOB/0000000000000XYZ.page" errors in the standard error output proved that stream_parser had failed to find the external pages.
 
Something had gone wrong: the data couldn't have been overwritten, because the customer stopped MySQL immediately after the accident. I decided to investigate why the external pages were not found. The page id is a file offset in 16k units, so dd can extract a particular page X:
 
 
# dd if=/var/lib/mysql/ibdata1 of=page-8 bs=16k count=1 skip=8
1+0 records in
1+0 records out
16384 bytes (16 kB) copied, 0.00175834 s, 9.3 MB/s
 
 
Here's the header of page id 8, which is an index page:
 
 
# hexdump -C page-8 | head
00000000  9a 8f cd fc 00 00 00 08  ff ff ff ff ff ff ff ff  |................|
00000010  00 00 00 01 00 1b 3a 1c  45 bf 00 00 00 00 00 00  |......:.E.......|
00000020  00 00 00 00 00 00 00 02  00 b9 00 04 00 00 00 00  |................|
00000030  00 9e 00 02 00 01 00 02  00 00 00 00 00 00 00 00  |................|
00000040  00 01 00 00 00 00 00 00  00 01 00 00 00 00 00 00  |................|
00000050  00 02 03 f2 00 00 00 00  00 00 00 02 03 32 08 01  |.............2..|
00000060  00 00 03 00 85 69 6e 66  69 6d 75 6d 00 09 03 00  |.....infimum....|
00000070  08 03 00 00 73 75 70 72  65 6d 75 6d 00 11 0d 10  |....supremum....|
00000080  00 10 05 00 9e 53 59 53  5f 44 41 54 41 46 49 4c  |.....SYS_DATAFIL|
00000090  45 53 00 00 01 a3 1b 17  00 00 18 05 00 74 73 74  |ES...........tst|
 
 
The FIL_PAGE_INDEX constant is defined as 17855 in MySQL; in hexadecimal that's 0x45BF. It's at offset 0x18 in the example above.
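 
You can also cut the two-byte page type out of any .page file directly (a quick sketch; assumes xxd is installed):
 
# dd if=page-8 bs=1 skip=24 count=2 2>/dev/null | xxd -p
45bf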
 
And here’s an example of the external page:
 
 
# hexdump -C 0000000000001414.page | head
00000000  0a 4e 9d 5c 00 00 05 86  00 00 00 00 00 00 00 00  |.N.\............|
00000010  00 00 00 00 00 6b 72 b9  00 0a 00 00 00 00 00 00  |.....kr.........|
00000020  00 00 00 00 00 00 00 00  22 42 ff ff ff ff 22 30  |........"B...."0|
00000030  22 3e 3c 2f 46 4f 4e 54  3e 3c 2f 50 3e 53 49 5a  |"></FONT></P>SIZ|
00000040  45 3d 22 31 22 20 41 4c  49 47 4e 3d 22 4c 45 46  |E="1" ALIGN="LEF|
00000050  54 22 3e 3c 46 4f 4e 54  20 46 41 43 45 3d 22 56  |T"><FONT FACE="V|
00000060  65 72 64 61 6e 61 22 20  53 49 5a 45 3d 22 31 22  |erdana" SIZE="1"|
00000070  20 43 4f 4c 4f 52 3d 22  23 30 30 30 30 33 33 22  | COLOR="#000033"|
00000080  20 4c 45 54 54 45 52 53  50 41 43 49 4e 47 3d 22  | LETTERSPACING="|
00000090  30 22 20 4b 45 52 4e 49  4e 47 3d 22 30 22 3e 42  |0" KERNING="0">B|
 
 
 
Page type is 0x0A, as it should be.
 
When I extracted a page that stream_parser couldn't find, it became clear why. The page type was 0x45BF! The page was a BLOB page, but the page type in the header was FIL_PAGE_INDEX.
 
How can you detect an InnoDB page's real type if MySQL lies about it? I believe a solution to this problem exists, but for now there is a workaround.
 
c_parser by default reads external pages from the directory specified by the -b option:
 
 
 
# ./c_parser
 
Error: Usage: ./c_parser -4|-5|-6 [-dDV] -f <InnoDB page or dir> -t table.sql [-T N:M] [-b <external pages directory>]
 
...
 
-b <dir> -- Directory where external pages can be found. Usually it is pages-XXX/FIL_PAGE_TYPE_BLOB/
 
 
To read external pages from a file (e.g. ibdata1) option -i is introduced:
 
 
 
    -i  -- Read external pages at their offsets from <file>.
 
 
After this trick the table was successfully recovered.

MySQL Recover after DROP TABLE, innodb_file_per_table is OFF

Introduction
Human mistakes are inevitable. A wrong "DROP DATABASE" or "DROP TABLE" may destroy critical data on the MySQL server. Backups would help, however they're not always available. The situation is frightening but not hopeless. In many cases it is possible to recover almost all the data that was in the database or table.
Let's look at how we can do it. The recovery plan depends on whether InnoDB kept all data in a single ibdata1 file or each table had its own tablespace. In this post we will consider the case innodb_file_per_table=OFF, where all tables are stored in a common file, usually located at /var/lib/mysql/ibdata1.
 
Wrong action – table deletion
For our scenario we will use the test database sakila that is shipped together with the tool.
Suppose we drop the table actor by mistake:
 
 
mysql> SELECT * FROM actor LIMIT 10;
+----------+------------+--------------+---------------------+
| actor_id | first_name | last_name    | last_update         |
+----------+------------+--------------+---------------------+
|        1 | PENELOPE   | GUINESS      | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG     | 2006-02-15 04:34:33 |
|        3 | ED         | CHASE        | 2006-02-15 04:34:33 |
|        4 | JENNIFER   | DAVIS        | 2006-02-15 04:34:33 |
|        5 | JOHNNY     | LOLLOBRIGIDA | 2006-02-15 04:34:33 |
|        6 | BETTE      | NICHOLSON    | 2006-02-15 04:34:33 |
|        7 | GRACE      | MOSTEL       | 2006-02-15 04:34:33 |
|        8 | MATTHEW    | JOHANSSON    | 2006-02-15 04:34:33 |
|        9 | JOE        | SWANK        | 2006-02-15 04:34:33 |
|       10 | CHRISTIAN  | GABLE        | 2006-02-15 04:34:33 |
+----------+------------+--------------+---------------------+
10 rows in set (0.00 sec)
mysql> CHECKSUM TABLE actor;
+--------------+------------+
| Table        | Checksum   |
+--------------+------------+
| sakila.actor | 3596356558 |
+--------------+------------+
1 row in set (0.00 sec)
 
mysql> SET foreign_key_checks=OFF;
mysql> DROP TABLE actor;
Query OK, 0 rows affected (0.00 sec)
 
mysql>
 
 
 
 
Recover after DROP TABLE from ibdata1
Now the table is gone, but the information contained in the table can still be in the database file. The data remains untouched until InnoDB reuses the free pages. Hurry up and stop MySQL ASAP!
For the recovery we’ll use TwinDB recovery toolkit. Check out our recent post “Recover InnoDB dictionary” for details on how to download and compile it.
 
Parse InnoDB tablespace
 
InnoDB stores all data in B+tree indexes. A table has one clustered index, PRIMARY, where all fields are stored. If the table has secondary keys then each key has its own index. Each index is identified by an index_id.
 
If we want to recover a table we have to find all pages that belong to its particular index_id.
 
stream_parser reads InnoDB tablespace and sorts InnoDB pages per type and per index_id.
 
 
 
 
root@test:~/undrop-for-innodb# ./stream_parser -f /var/lib/mysql/ibdata1
Opening file: /var/lib/mysql/ibdata1
File information:
 
ID of device containing file:        64768
inode number:                      1190268
protection:                         100660 (regular file)
number of hard links:                    1
user ID of owner:                      106
group ID of owner:                     114
device ID (if special file):             0
blocksize for filesystem I/O:         4096
number of blocks allocated:          69632
time of last access:            1404842312 Tue Jul  8 13:58:32 2014
time of last modification:      1404842478 Tue Jul  8 14:01:18 2014
time of last status change:     1404842478 Tue Jul  8 14:01:18 2014
total size, in bytes:             35651584 (34.000 MiB)
 
Size to process:                  35651584 (34.000 MiB)
All workers finished in 0 sec
root@test: ~/undrop-for-innodb#
 
 
 
 
Data from database pages is saved by the stream_parser to folder pages-ibdata1:
 
 
 
root@test:~/undrop-for-innodb/pages-ibdata1/FIL_PAGE_INDEX# ls
0000000000000001.page  0000000000000121.page  0000000000000382.page
0000000000000395.page  0000000000000408.page  0000000000000421.page
0000000000000434.page  0000000000000447.page  0000000000000002.page
...
0000000000000406.page  0000000000000419.page  0000000000000432.page
0000000000000445.page  0000000000000120.page  0000000000000381.page
0000000000000394.page  0000000000000407.page  0000000000000420.page
0000000000000433.page  0000000000000446.page
root@test: ~/undrop-for-innodb/pages-ibdata1/FIL_PAGE_INDEX
 
 
 
Now each index_id from InnoDB tablespace is saved in a separate file. We can use c_parser to fetch records from the pages. But we need to know what index_id corresponds to table sakila/actor. That information we can acquire from the dictionary – SYS_TABLES and SYS_INDEXES.
 
SYS_TABLES is always stored with index_id 1, which is the file pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page.
Let's find the table_id of sakila/actor. If MySQL had enough time to flush the changes to disk then add the -D option, which means "find deleted records". The dictionary is always in REDUNDANT format, so we specify option -4:
 
 
 
 
root@test:~/undrop-for-innodb# ./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql | grep sakila/actor
000000000B28  2A000001430D4D  SYS_TABLES  "sakila/actor"  158  4  1 0   0   ""  0
000000000B28  2A000001430D4D  SYS_TABLES  "sakila/actor"  158  4  1 0   0   ""  0
 
 
 
Note the number 158 right after the table name. This is the table_id.
 
The next thing to do is to find the index_id of the PRIMARY index of table actor. For this purpose we will fetch the records of SYS_INDEXES from file 0000000000000003.page (this table contains the mapping between table_id and index_id). The structure of SYS_INDEXES is passed with the -t option.
 
 
 
root@test:~/undrop-for-innodb$ ./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql | grep 158
000000000B28    2A000001430BCA  SYS_INDEXES     158     376     "PRIMARY"       1       3       0       4294967295
000000000B28    2A000001430C3C  SYS_INDEXES     158     377     "idx\_actor\_last\_name"        1       0       0       4294967295
000000000B28    2A000001430BCA  SYS_INDEXES     158     376     "PRIMARY"       1       3       0       4294967295
000000000B28    2A000001430C3C  SYS_INDEXES     158     377     "idx\_actor\_last\_name"        1       0       0       4294967295
 
 
 
As you can see from the output, the necessary index_id is 376. Therefore we will look for the actor data in the file 0000000000000376.page.
 
 
root@test:~/undrop-for-innodb# ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000000376.page -t sakila/actor.sql |  head -5
-- Page id: 895, Format: COMPACT, Records list: Valid, Expected records: (200 200)
000000000AA0    B60000035D0110  actor   1       "PENELOPE"      "GUINESS"       "2006-02-15 04:34:33"
000000000AA0    B60000035D011B  actor   2       "NICK"  "WAHLBERG"      "2006-02-15 04:34:33"
000000000AA0    B60000035D0126  actor   3       "ED"    "CHASE""2006-02-15 04:34:33"
000000000AA0    B60000035D0131  actor   4       "JENNIFER"      "DAVIS""2006-02-15 04:34:33"
root@test:~/undrop-for-innodb#
 
 
 
The resulting output looks correct, so let's save the dump in a file. To make loading simpler, c_parser outputs a LOAD DATA INFILE command to stderr.
 
We will use the default location for these files: dumps/default.
 
 
 
 
root@test:~/undrop-for-innodb# mkdir -p dumps/default
root@test:~/undrop-for-innodb# ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000000376.page -t sakila/actor.sql > dumps/default/actor 2> dumps/default/actor_load.sql
 
 
 
And here’s a command to load the table.
 
 
root@test:~/undrop-for-innodb# cat dumps/default/actor_load.sql
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA LOCAL INFILE '/home/asterix/undrop-for-innodb/dumps/default/actor' REPLACE INTO TABLE `actor` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'actor\t' (`actor_id`, `first_name`, `last_name`, `last_update`);
root@test:~/undrop-for-innodb#
 
 
Load data back to the database
Now it's time to recover the data into the database. But before loading the dump we need to create the empty structure of table actor:
 
 
 
mysql> source sakila/actor.sql
mysql> show create table actor\G
*************************** 1. row ***************************
       Table: actor
Create Table: CREATE TABLE `actor` (
  `actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql>
 
 
 
 
 
 
Now the table actor is created and we can load the recovered data.
 
 
 
root@test:~/undrop-for-innodb# mysql --local-infile -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> USE sakila;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> source dumps/default/actor_load.sql
Query OK, 0 rows affected (0.00 sec)
 
Query OK, 600 rows affected (0.01 sec)
Records: 400  Deleted: 200  Skipped: 0  Warnings: 0
 
mysql>
 
 
 
 
Checking recovered data
And the final step – checking the data quality. We will look at the total number of records, preview several records and calculate the checksum.
 
 
 
mysql> SELECT COUNT(*) FROM actor;
+----------+
| COUNT(*) |
+----------+
|      200 |
+----------+
1 row in set (0.00 sec)
 
mysql> SELECT * FROM actor LIMIT 5;
+----------+------------+--------------+---------------------+
| actor_id | first_name | last_name    | last_update         |
+----------+------------+--------------+---------------------+
|        1 | PENELOPE   | GUINESS      | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG     | 2006-02-15 04:34:33 |
|        3 | ED         | CHASE        | 2006-02-15 04:34:33 |
|        4 | JENNIFER   | DAVIS        | 2006-02-15 04:34:33 |
|        5 | JOHNNY     | LOLLOBRIGIDA | 2006-02-15 04:34:33 |
+----------+------------+--------------+---------------------+
5 rows in set (0.00 sec)
 
mysql> CHECKSUM TABLE actor;
+--------------+------------+
| Table        | Checksum   |
+--------------+------------+
| sakila.actor | 3596356558 |
+--------------+------------+
1 row in set (0.00 sec)
 
mysql>
 
 
 
 
As you can see, the checksum after recovery is 3596356558, which is equal to the checksum taken before the accidental drop of the table. Therefore we can be sure that all the data was recovered correctly.
In the next posts we will look at other recovery cases.
 
 

MySQL Recover after DROP TABLE, innodb_file_per_table is ON

Introduction
In the previous post we described how the TwinDB recovery toolkit can be used to recover an accidentally dropped table when innodb_file_per_table=OFF.
In this post we will show how to recover a MySQL table or database when innodb_file_per_table is ON. So, let's assume that the MySQL server has innodb_file_per_table=ON. This option tells InnoDB to store each user table in a separate data file.
 
For the recovery test we will use the same sakila database that was used in the previous post.
 
 
 
 
root@test:/var/lib/mysql/sakila# ll
total 23468
drwx------ 2 mysql mysql     4096 Jul 15 04:26 ./
drwx------ 6 mysql mysql     4096 Jul 15 04:26 ../
-rw-rw---- 1 mysql mysql     8694 Jul 15 04:26 actor.frm
-rw-rw---- 1 mysql mysql   114688 Jul 15 04:26 actor.ibd
-rw-rw---- 1 mysql mysql     2871 Jul 15 04:26 actor_info.frm
-rw-rw---- 1 mysql mysql     8840 Jul 15 04:26 address.frm
-rw-rw---- 1 mysql mysql   163840 Jul 15 04:26 address.ibd
-rw-rw---- 1 mysql mysql     8648 Jul 15 04:26 category.frm
-rw-rw---- 1 mysql mysql    98304 Jul 15 04:26 category.ibd
-rw-rw---- 1 mysql mysql     8682 Jul 15 04:26 city.frm
-rw-rw---- 1 mysql mysql   114688 Jul 15 04:26 city.ibd
-rw-rw---- 1 mysql mysql     8652 Jul 15 04:26 country.frm
-rw-rw---- 1 mysql mysql    98304 Jul 15 04:26 country.ibd
...
-rw-rw---- 1 mysql mysql       36 Jul 15 04:26 upd_film.TRN
root@test:/var/lib/mysql/sakila#
 
 
 
Note the two files related to table country: country.frm and country.ibd.
We will drop this table and try to recover it. First we take the checksum and preview the records contained in this table:
 
 
 
Database changed
mysql> SELECT * FROM country LIMIT 10;
+------------+----------------+---------------------+
| country_id | country        | last_update         |
+------------+----------------+---------------------+
|          1 | Afghanistan    | 2006-02-15 04:44:00 |
|          2 | Algeria        | 2006-02-15 04:44:00 |
|          3 | American Samoa | 2006-02-15 04:44:00 |
|          4 | Angola         | 2006-02-15 04:44:00 |
|          5 | Anguilla       | 2006-02-15 04:44:00 |
|          6 | Argentina      | 2006-02-15 04:44:00 |
|          7 | Armenia        | 2006-02-15 04:44:00 |
|          8 | Australia      | 2006-02-15 04:44:00 |
|          9 | Austria        | 2006-02-15 04:44:00 |
|         10 | Azerbaijan     | 2006-02-15 04:44:00 |
+------------+----------------+---------------------+
10 rows in set (0.00 sec)
 
mysql> CHECKSUM TABLE country;
+----------------+------------+
| Table          | Checksum   |
+----------------+------------+
| sakila.country | 3658016321 |
+----------------+------------+
1 row in set (0.00 sec)
 
mysql> SELECT COUNT(*) FROM country;
+----------+
| COUNT(*) |
+----------+
|      109 |
+----------+
1 row in set (0.00 sec)
 
mysql>
 
 
 
 
 
 
 
 
Accidental drop
Now we will drop the table and look at the files related to it. As you can see from the listing, the files with the country table data are gone:
 
 
 
mysql> SET foreign_key_checks=OFF;
Query OK, 0 rows affected (0.00 sec)
 
mysql> DROP TABLE country;
Query OK, 0 rows affected (0.00 sec)
 
mysql>
mysql> exit
Bye
root@test:~# cd /var/lib/mysql/sakila/
root@test:/var/lib/mysql/sakila# ll
total 23360
drwx------ 2 mysql mysql     4096 Jul 15 04:33 ./
drwx------ 6 mysql mysql     4096 Jul 15 04:26 ../
-rw-rw---- 1 mysql mysql     8694 Jul 15 04:26 actor.frm
-rw-rw---- 1 mysql mysql   114688 Jul 15 04:26 actor.ibd
-rw-rw---- 1 mysql mysql     2871 Jul 15 04:26 actor_info.frm
-rw-rw---- 1 mysql mysql     8840 Jul 15 04:26 address.frm
-rw-rw---- 1 mysql mysql   163840 Jul 15 04:26 address.ibd
-rw-rw---- 1 mysql mysql     8648 Jul 15 04:26 category.frm
-rw-rw---- 1 mysql mysql    98304 Jul 15 04:26 category.ibd
-rw-rw---- 1 mysql mysql     8682 Jul 15 04:26 city.frm
-rw-rw---- 1 mysql mysql   114688 Jul 15 04:26 city.ibd
-rw-rw---- 1 mysql mysql       40 Jul 15 04:26 customer_create_date.TRN
-rw-rw---- 1 mysql mysql     8890 Jul 15 04:26 customer.frm
-rw-rw---- 1 mysql mysql   196608 Jul 15 04:26 customer.ibd
-rw-rw---- 1 mysql mysql     1900 Jul 15 04:26 customer_list.frm
-rw-rw---- 1 mysql mysql      297 Jul 15 04:26 customer.TRG
-rw-rw---- 1 mysql mysql       65 Jul 15 04:26 db.opt
...
-rw-rw---- 1 mysql mysql       36 Jul 15 04:26 upd_film.TRN
root@ALtestTwinDB:/var/lib/mysql/sakila#
 
 
 
 
Recover after DROP TABLE
This situation is a little bit more complex, since we need to recover a deleted file. If the database server is actively writing to the disk, it is possible that the deleted file will be overwritten by other data. Therefore it is critical to stop the server and remount the partition read-only. But for this test we will just stop the mysql service and continue with the recovery.
 
 
 
root@test:/var/lib/mysql/sakila# service mysql stop
mysql stop/waiting
 
 
Despite the fact that user data is stored in a separate file per table, the data dictionary is still stored in the ibdata1 file. That's why we need to run stream_parser on /var/lib/mysql/ibdata1 as well. For the details of its usage, please refer to the post Recover after DROP TABLE.
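 
The invocation is the same as in the previous post:
 
root@test:~/undrop-for-innodb# ./stream_parser -f /var/lib/mysql/ibdata1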
In order to find the table_id and index_id of the table country, we will use the dictionary stored in SYS_TABLES and SYS_INDEXES. We will fetch the data from the ibdata1 file. The dictionary records are always in REDUNDANT format, therefore we specify option -4. We assume that the MySQL server has flushed the changes to disk, so we add the -D option, which means "find deleted records". The SYS_TABLES information is stored in the file with index_id=1, which is pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page:
 
 
 
root@test:~/undrop-for-innodb# ./c_parser -4Df ./pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t ./dictionary/SYS_TABLES.sql | grep country
000000000CDC  62000001960684  SYS_TABLES      "sakila/country"        228     3       1       0       0       ""      88
000000000CDC  62000001960684  SYS_TABLES      "sakila/country"        228     3       1       0       0       ""      88
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA LOCAL INFILE '/home/asterix/undrop-for-innodb/dumps/default/SYS_TABLES' REPLACE INTO TABLE `SYS_TABLES` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'SYS_TABLES\t' (`NAME`, `ID`, `N_COLS`, `TYPE`, `MIX_ID`, `MIX_LEN`, `CLUSTER_NAME`, `SPACE`);
 
root@test:~/undrop-for-innodb#
 
 
We can see that the country table has table_id=228. The next step is to find the PRIMARY index of table country. For this purpose we take the records of the SYS_INDEXES table from the file 0000000000000003.page (the SYS_INDEXES table contains the mapping between table_id and index_id). The structure of SYS_INDEXES is passed to the tool with the -t option.
 
 
 
root@test:~/undrop-for-innodb# ./c_parser -4Df ./pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t ./dictionary/SYS_INDEXES.sql | grep 228
000000000CDC    620000019605A8  SYS_INDEXES     228     547     "PRIMARY"       1       3       88      4294967295
000000000CDC    620000019605A8  SYS_INDEXES     228     547     "PRIMARY"       1       3       88      4294967295
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA LOCAL INFILE '/home/asterix/undrop-for-innodb/dumps/default/SYS_INDEXES' REPLACE INTO TABLE `SYS_INDEXES` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'SYS_INDEXES\t' (`TABLE_ID`, `ID`, `NAME`, `N_FIELDS`, `TYPE`, `SPACE`, `PAGE_NO`);
 
root@test:~/undrop-for-innodb#
 
 
We can see that the index_id of the dropped table country is 547. The following step is different from the one we took in the case of innodb_file_per_table=OFF. Since there is no data file available any more, we will scan the whole storage device as a raw device and look for data that fits the expected structure of database pages. By the way, the same approach can be taken when we have corrupted data files: if some data is corrupted, the recovery tool can perform a partial data recovery. In the options of the tool we specify the name of the device and the device size (it can be approximate).
 
 
 
root@test:~/undrop-for-innodb# ./stream_parser -f /dev/vda -t 20000000k
Opening file: /dev/vda
File information:
 
ID of device containing file:            5
inode number:                         6411
protection:                          60660 (block device)
number of hard links:                    1
user ID of owner:                        0
group ID of owner:                       6
device ID (if special file):         64768
blocksize for filesystem I/O:         4096
number of blocks allocated:              0
time of last access:            1405411377 Tue Jul 15 04:02:57 2014
time of last modification:      1404625158 Sun Jul  6 01:39:18 2014
time of last status change:     1404625158 Sun Jul  6 01:39:18 2014
total size, in bytes:                    0 (0.000 exp(+0))
 
Size to process:               20480000000 (19.073 GiB)
Worker(0): 1.06% done. 2014-07-15 04:57:37 ETA(in 00:01:36). Processing speed: 199.848 MiB/sec
Worker(0): 2.09% done. 2014-07-15 04:57:37 ETA(in 00:01:35). Processing speed: 199.610 MiB/sec
Worker(0): 3.11% done. 2014-07-15 04:59:13 ETA(in 00:03:09). Processing speed: 99.805 MiB/sec
...
Worker(0): 97.33% done. 2014-07-15 04:57:15 ETA(in 00:00:05). Processing speed: 99.828 MiB/sec
Worker(0): 98.35% done. 2014-07-15 04:57:20 ETA(in 00:00:06). Processing speed: 49.941 MiB/sec
Worker(0): 99.38% done. 2014-07-15 04:57:17 ETA(in 00:00:01). Processing speed: 99.961 MiB/sec
All workers finished in 77 sec
root@test:~/undrop-for-innodb#
 
 
 
stream_parser stores the resulting page files in the folder pages-vda (the name is derived from the device name). We can see that the necessary index is present in the files.
 
 
 
 
root@test:~/undrop-for-innodb/pages-vda/FIL_PAGE_INDEX# ll | grep 547
-rw-r--r-- 1 root root    32768 Jul 15 04:57 0000000000000547.page
root@test:~/undrop-for-innodb/pages-vda/FIL_PAGE_INDEX#
 
 
We will look for the data in the file 0000000000000547.page. The c_parser utility interprets it according to the expected table structure, supplied with the -t option.
 
 
root@test:~/undrop-for-innodb# ./c_parser -6f pages-vda/FIL_PAGE_INDEX/0000000000000547.page -t sakila/country.sql |  head -5
-- Page id: 3, Format: COMPACT, Records list: Valid, Expected records: (109 109)
000000000C4B    F30000038C0110  country 1       "Afghanistan"   "2006-02-15 04:44:00"
000000000C4B    F30000038C011B  country 2       "Algeria"       "2006-02-15 04:44:00"
000000000C4B    F30000038C0126  country 3       "American Samoa"        "2006-02-15 04:44:00"
000000000C4B    F30000038C0131  country 4       "Angola"        "2006-02-15 04:44:00"
root@test:~/undrop-for-innodb#
 
 
The result looks valid, so we will prepare the files for loading the data back into the database. The LOAD DATA INFILE command with the necessary options is written to stderr.
 
 
 
 
root@test:~/undrop-for-innodb# ./c_parser -6f pages-vda/FIL_PAGE_INDEX/0000000000000547.page -t sakila/country.sql > dumps/default/country 2> dumps/default/country_load.sql
 
 
 
Load data back to the database
We are going to load the data back into the database. Before loading the data we create the empty structure of table country:
 
 
root@test:~/undrop-for-innodb# service mysql start
mysql start/running, process 31035
root@test:~/undrop-for-innodb# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 37
Server version: 5.5.37-0ubuntu0.14.04.1 (Ubuntu)
 
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql>
 
mysql> use sakila;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> source sakila/country.sql
Query OK, 0 rows affected (0.00 sec)
...
 
Query OK, 0 rows affected (0.00 sec)
 
mysql>
 
mysql> show create table country\G
*************************** 1. row ***************************
       Table: country
Create Table: CREATE TABLE `country` (
  `country_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `country` varchar(50) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`country_id`)
) ENGINE=InnoDB AUTO_INCREMENT=110 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
 
mysql>
 
 
 
And now we load the data itself.
 
 
root@testB:~/undrop-for-innodb# mysql --local-infile -uroot -p
Enter password:
...
mysql> USE sakila;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> source dumps/default/country_load.sql
Query OK, 0 rows affected (0.00 sec)
 
Query OK, 327 rows affected (0.00 sec)
Records: 218  Deleted: 109  Skipped: 0  Warnings: 0
mysql>
 
 
Checking data quality
So, the last thing that remains is to check the quality of the recovered data. We will preview several records and calculate the total number of records and the checksum.
 
 
mysql> SELECT COUNT(*) FROM country;
+----------+
| COUNT(*) |
+----------+
|      109 |
+----------+
1 row in set (0.00 sec)
 
mysql> SELECT * FROM country LIMIT 5;
+------------+----------------+---------------------+
| country_id | country        | last_update         |
+------------+----------------+---------------------+
|          1 | Afghanistan    | 2006-02-15 04:44:00 |
|          2 | Algeria        | 2006-02-15 04:44:00 |
|          3 | American Samoa | 2006-02-15 04:44:00 |
|          4 | Angola         | 2006-02-15 04:44:00 |
|          5 | Anguilla       | 2006-02-15 04:44:00 |
+------------+----------------+---------------------+
5 rows in set (0.00 sec)
 
mysql> CHECKSUM TABLE country;
+----------------+------------+
| Table          | Checksum   |
+----------------+------------+
| sakila.country | 3658016321 |
+----------------+------------+
1 row in set (0.00 sec)
 
mysql>
 
 
So, we are lucky. Despite the fact that the MySQL data lived on the system volume (which is not the recommended practice) and that we did not remount the partition read-only (so other processes kept writing to the disk), we managed to recover all the records. The checksum calculated after the recovery (3658016321) is equal to the checksum taken before the drop (3658016321).

MySQL Recover InnoDB dictionary

Why do we need to recover InnoDB dictionary
c_parser is a tool from the TwinDB recovery toolkit that can read an InnoDB page and fetch records out of it. Although it can scan any stream of bytes, the recovery quality is higher when you feed c_parser with pages that belong to the PRIMARY index of the table. All InnoDB indexes have identifiers, a.k.a. index_id. The InnoDB dictionary stores the correspondence between table names and index_id. That would be reason number one.
 
Another reason – it is possible to recover the table structure from the InnoDB dictionary. When a table is dropped MySQL deletes the respective .frm file. If you had neither backups nor the table schema it becomes quite a challenge to recover the table structure. This topic, however, deserves a separate post which I will write some other day.
 
Let’s assume you’re convinced enough and we can proceed with InnoDB dictionary recovery.
 
Compiling TwinDB recovery toolkit
The source code of the toolkit is hosted on GitHub. You will need git to get the latest revision, so make sure you have it:
 
 
# which git
/usr/bin/git
 
 
Get the latest revision of the toolkit:
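 
If you don't have a local copy yet, clone the repository first (the GitHub location below is the assumed upstream):
 
[root@twindb-dev tmp]# git clone https://github.com/twindb/undrop-for-innodb.git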
 
[root@twindb-dev tmp]# cd undrop-for-innodb/
[root@twindb-dev undrop-for-innodb]# ll
total 136
-rw-r--r-- 1 root root  6271 Jun 24 00:41 check_data.c
-rw-r--r-- 1 root root 27516 Jun 24 00:41 c_parser.c
drwxr-xr-x 2 root root  4096 Jun 24 00:41 dictionary
drwxr-xr-x 2 root root  4096 Jun 24 00:41 include
-rw-r--r-- 1 root root  1203 Jun 24 00:41 Makefile
-rw-r--r-- 1 root root 15495 Jun 24 00:41 print_data.c
drwxr-xr-x 2 root root  4096 Jun 24 00:41 sakila
-rw-r--r-- 1 root root  5223 Jun 24 00:41 sql_parser.l
-rw-r--r-- 1 root root 21137 Jun 24 00:41 sql_parser.y
-rw-r--r-- 1 root root 22236 Jun 24 00:41 stream_parser.c
-rw-r--r-- 1 root root  2237 Jun 24 00:41 tables_dict.c
-rwxr-xr-x 1 root root  6069 Jun 24 00:41 test.sh
[root@twindb-dev undrop-for-innodb]#
 
 
As prerequisites we would need gcc, flex and bison. Check that you have them:
 
[root@twindb-dev undrop-for-innodb]# which gcc
/usr/bin/gcc
[root@twindb-dev undrop-for-innodb]# which bison
/usr/bin/bison
[root@twindb-dev undrop-for-innodb]# which flex
/usr/bin/flex
 
 
Good. Now let’s compile the code:
 
 
[root@twindb-dev undrop-for-innodb]# make
gcc -g -O3  -I./include -c stream_parser.c
gcc -g -O3  -I./include  -pthread -lm stream_parser.o -o stream_parser
#flex -d sql_parser.l
flex sql_parser.l
#bison -r all -o sql_parser.c sql_parser.y
bison -o sql_parser.c sql_parser.y
sql_parser.y: conflicts: 5 shift/reduce
gcc -g -O3  -I./include -c sql_parser.c
gcc -g -O3  -I./include -c c_parser.c
gcc -g -O3  -I./include -c tables_dict.c
gcc -g -O3  -I./include -c print_data.c
gcc -g -O3  -I./include -c check_data.c
gcc -g -O3  -I./include  -pthread -lm sql_parser.o c_parser.o tables_dict.o print_data.o check_data.o -o c_parser
[root@twindb-dev undrop-for-innodb]#
 
 
If there are no errors we are ready to proceed.
 
Splitting ibdata1
The InnoDB dictionary is stored in ibdata1. So we need to parse it and get pages that store records of the dictionary. stream_parser does it.
 
 
# ./stream_parser -f /var/lib/mysql/ibdata1
...
Size to process:                  79691776 (76.000 MiB)
All workers finished in 1 sec
 
 
stream_parser finds InnoDB pages in ibdata1 and stores them sorted by page type (FIL_PAGE_INDEX or FIL_PAGE_TYPE_BLOB) and by index_id.
Here are the dictionary indexes:
 
SYS_TABLES
 
 
[root@twindb-dev undrop-for-innodb]# ll pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page
-rw-r--r-- 1 root root 16384 Jun 24 00:50 pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page
 
 
SYS_INDEXES
 
[root@twindb-dev undrop-for-innodb]# ll pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page
-rw-r--r-- 1 root root 16384 Jun 24 00:50 pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page
 
SYS_COLUMNS
 
[root@twindb-dev undrop-for-innodb]# ll pages-ibdata1/FIL_PAGE_INDEX/0000000000000002.page
-rw-r--r-- 1 root root 49152 Jun 24 00:50 pages-ibdata1/FIL_PAGE_INDEX/0000000000000002.page
 
and SYS_FIELDS
 
[root@twindb-dev undrop-for-innodb]# ll pages-ibdata1/FIL_PAGE_INDEX/0000000000000004.page
-rw-r--r-- 1 root root 16384 Jun 24 00:50 pages-ibdata1/FIL_PAGE_INDEX/0000000000000004.page
 
As you can see the dictionary is pretty small, just one page per index.
 
Dumping records from SYS_TABLES and SYS_INDEXES
To fetch records out of the index pages you need c_parser. But first, let's create a directory for the dumps:
 
 
 
 
[root@twindb-dev undrop-for-innodb]# mkdir -p dumps/default
[root@twindb-dev undrop-for-innodb]#
 
 
The InnoDB dictionary is always in REDUNDANT format, so option -4 is mandatory:
 
[root@twindb-dev undrop-for-innodb]# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql > dumps/default/SYS_TABLES 2> dumps/default/SYS_TABLES.sql
[root@twindb-dev undrop-for-innodb]#
 
 
Here are our sakila tables:
 
 
[root@twindb-dev undrop-for-innodb]# grep sakila dumps/default/SYS_TABLES | head -5
0000000052D5    D9000002380110  SYS_TABLES  "sakila/actor"  753 4   1   0   80  ""  739
0000000052D8    DC0000014F0110  SYS_TABLES  "sakila/address"    754 8   1   0   80  ""  740
0000000052DB    DF000002CA0110  SYS_TABLES  "sakila/category"   755 3   1   0   80  ""  741
0000000052DE    E2000002F80110  SYS_TABLES  "sakila/city"   756 4   1   0   80  ""  742
0000000052E1    E5000002C50110  SYS_TABLES  "sakila/country"    757 3   1   0   80  ""  743
[root@twindb-dev undrop-for-innodb]#
 
 
dumps/default/SYS_TABLES is a dump of the table eligible for the LOAD DATA INFILE command. c_parser prints the exact command to the standard error output; I saved it in dumps/default/SYS_TABLES.sql:
 
 
[root@twindb-dev undrop-for-innodb]# cat dumps/default/SYS_TABLES.sql
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA INFILE '/root/tmp/undrop-for-innodb/dumps/default/SYS_TABLES' REPLACE INTO TABLE `SYS_TABLES` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'SYS_TABLES\t' (`NAME`, `ID`, `N_COLS`, `TYPE`, `MIX_ID`, `MIX_LEN`, `CLUSTER_NAME`, `SPACE`);
[root@twindb-dev undrop-for-innodb]#
 
 
The same way let’s dump SYS_INDEXES:
 
 
[root@twindb-dev undrop-for-innodb]# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql > dumps/default/SYS_INDEXES 2> dumps/default/SYS_INDEXES.sql
[root@twindb-dev undrop-for-innodb]# 
 
 
Make sure we have sane results in the dumps:
 
[root@twindb-dev undrop-for-innodb]# head -5 dumps/default/SYS_INDEXES
-- Page id: 11, Format: REDUNDANT, Records list: Valid, Expected records: (153 153)
000000000300    800000012D0177  SYS_INDEXES 11  11  "ID\_IND"   1   3   0   302
000000000300    800000012D01A5  SYS_INDEXES 11  12  "FOR\_IND"  1   0   0   303
000000000300    800000012D01D3  SYS_INDEXES 11  13  "REF\_IND"  1   0   0   304
000000000300    800000012D026D  SYS_INDEXES 12  14  "ID\_IND"   2   3   0   305
[root@twindb-dev undrop-for-innodb]# head -5 dumps/default/SYS_INDEXES.sql
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA INFILE '/root/tmp/undrop-for-innodb/dumps/default/SYS_INDEXES' REPLACE INTO TABLE `SYS_INDEXES` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'SYS_INDEXES\t' (`TABLE_ID`, `ID`, `NAME`, `N_FIELDS`, `TYPE`, `SPACE`, `PAGE_NO`);
[root@twindb-dev undrop-for-innodb]#
 
 
Now we can work with the dictionary, but it’s more convenient if the tables are in MySQL.
 
Loading dictionary tables into MySQL
The main usage of SYS_TABLES and SYS_INDEXES is to get an index_id by table name. It's possible to do that with two greps, but having SYS_TABLES and SYS_INDEXES in MySQL makes the job easier.
 
Before we proceed, let's make sure the mysql user can read from root's home directory. This may not be wise from a security standpoint; if that's a concern, create the whole recovery environment somewhere in /tmp.
 
 
 
 
[root@twindb-dev undrop-for-innodb]# chmod 711 /root/
[root@twindb-dev undrop-for-innodb]#
 
 
Create empty dictionary tables in some database (e.g. test):
 
[root@twindb-dev undrop-for-innodb]# mysql test < dictionary/SYS_TABLES.sql
[root@twindb-dev undrop-for-innodb]# mysql test < dictionary/SYS_INDEXES.sql
[root@twindb-dev undrop-for-innodb]#
 
 
And load the dumps:
 
 
[root@twindb-dev undrop-for-innodb]# mysql test < dumps/default/SYS_TABLES.sql
[root@twindb-dev undrop-for-innodb]# mysql test < dumps/default/SYS_INDEXES.sql
[root@twindb-dev undrop-for-innodb]#
 
 
Now we have the InnoDB dictionary in MySQL and we can query it as any other MySQL table:
 
mysql> SELECT * FROM SYS_TABLES WHERE NAME = 'sakila/actor';
+--------------+-----+--------+------+--------+---------+--------------+-------+
| NAME         | ID  | N_COLS | TYPE | MIX_ID | MIX_LEN | CLUSTER_NAME | SPACE |
+--------------+-----+--------+------+--------+---------+--------------+-------+
| sakila/actor | 753 |      4 |    1 |      0 |      80 |              |   739 |
+--------------+-----+--------+------+--------+---------+--------------+-------+
1 row in set (0.00 sec)
mysql> SELECT * FROM SYS_INDEXES WHERE TABLE_ID = 753;
+----------+------+---------------------+----------+------+-------+---------+
| TABLE_ID | ID   | NAME                | N_FIELDS | TYPE | SPACE | PAGE_NO |
+----------+------+---------------------+----------+------+-------+---------+
|      753 | 1828 | PRIMARY             |        1 |    3 |   739 |       3 |
|      753 | 1829 | idx_actor_last_name |        1 |    0 |   739 |       4 |
+----------+------+---------------------+----------+------+-------+---------+
2 rows in set (0.00 sec)
 
 
Here we can see that sakila.actor has two indexes: PRIMARY and idx_actor_last_name. The respective index_id values are 1828 and 1829.
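 
Both lookups can also be combined into a single query, for example:
 
mysql> SELECT t.NAME, i.NAME, i.ID FROM SYS_TABLES t JOIN SYS_INDEXES i ON i.TABLE_ID = t.ID WHERE t.NAME = 'sakila/actor';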
 
Stay tuned to learn what to do with them and how to recover sakila.actor.
 

 
