Re: Repair thin pool

2016-02-17 10:48 GMT+08:00 Mars <kirapangzi@gmail.com>:
> Hi,
>
> Thank you very much for giving us so much advice.
>
> Here is some progress based on the suggestions in this thread:
>
> 1. Check the metadata device:
>
> [root@stor14 home]# thin_check /dev/mapper/vgg145155121036c-pool_nas_tmeta0
> examining superblock
> examining devices tree
> examining mapping tree
>
> 2. Dump the metadata:
>
> [root@stor14 home]# thin_dump /dev/mapper/vgg145155121036c-pool_nas_tmeta0
> -o nas_thin_dump.xml -r
> [root@stor14 home]# cat nas_thin_dump.xml
> <superblock uuid="" time="1787" transaction="3545" data_block_size="128"
> nr_data_blocks="249980672">
> </superblock>
>
> Compared with other, normal pools, it seems that all device nodes and mapping
> info in the metadata LV have been lost.
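For reference, a dump from a healthy pool would contain `<device>` elements (one per thin volume) with `<single_mapping>`/`<range_mapping>` children inside the `<superblock>`. A quick sanity check of a dump file (a sketch using only the Python standard library; the sample XML below is the empty superblock quoted above) can count them:

```python
import xml.etree.ElementTree as ET

# The dump recovered above: a bare superblock with no children.
dump = '''<superblock uuid="" time="1787" transaction="3545" data_block_size="128"
nr_data_blocks="249980672">
</superblock>'''

root = ET.fromstring(dump)
devices = root.findall("device")
mappings = root.findall(".//single_mapping") + root.findall(".//range_mapping")

print("devices:", len(devices))    # 0 here: no device entries recovered
print("mappings:", len(mappings))  # 0 here: no block mappings recovered
```

A non-zero device count with zero mappings would suggest only the mapping tree is damaged; zero devices, as in this dump, means even the device details tree could not be walked.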

Two possibilities: the device details tree is broken, or, worse, the data mapping tree is broken.

> Is there happened to be 'orphan nodes'? and could you give us your semi-auto
> repair tools so we can repair it?

Sorry, the code is not finished yet. Please try my binary first (a static binary compiled on Ubuntu 14.04):
https://www.dropbox.com/s/6g8gm1hndxp3rpd/pdata_tools?dl=0

Please provide the output of thin_ll_dump:
./pdata_tools thin_ll_dump /dev/mapper/vgg145155121036c-pool_nas_tmeta0 -o nas_thin_ll_dump.xml
(It takes several minutes, since it scans through the entire metadata device (16GB!). I'll improve this later.)
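For a rough sense of why the scan takes minutes, a back-of-the-envelope estimate (the ~100 MB/s sequential-read throughput is an assumption, not a measurement of this system):

```python
# Rough estimate of a full metadata scan, assuming sequential reads.
metadata_bytes = 16 * 1024**3   # 16 GiB metadata device, as in this thread
read_rate = 100 * 1000**2       # assumed ~100 MB/s throughput (hypothetical)

seconds = metadata_bytes / read_rate
print(f"~{seconds / 60:.1f} minutes")  # on the order of a few minutes
```

On slower storage, or with random-access reads, the real scan can take considerably longer.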


Ming-Hung Tsai

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
