Re: Repair thin pool

2016-02-10 18:32 GMT+08:00 Joe Thornber <thornber redhat com>:
> Yep, I definitely want these for upstream.  Send me what you've got,
> whatever state it's in; I'll happily spend a couple of weeks tidying
> this.
>
> - Joe
The feature is complete and working, but the code is based on v0.4.1.
I need a few days to clean up and rebase. Please wait.
syntax:
thin_ll_dump /dev/mapper/corrupted_tmeta [-o thin_ll_dump.xml]
thin_ll_restore -i edited_thin_ll_dump.xml -E /dev/mapper/corrupted_tmeta -o /dev/mapper/fixed_tmeta
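A rough sketch of how the two commands fit together (the device paths are placeholders, and I'm reading -E as the original corrupted metadata that node contents are read back from):

# dump the low-level node structure (presumably including nodes no longer
# referenced from the top-level trees) to XML
thin_ll_dump /dev/mapper/corrupted_tmeta -o thin_ll_dump.xml

# hand-edit the XML to reattach the recovered device/mapping roots, then
# rebuild a fresh metadata device from the edited dump
thin_ll_restore -i edited_thin_ll_dump.xml -E /dev/mapper/corrupted_tmeta -o /dev/mapper/fixed_tmeta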
Ming-Hung Tsai
-------------
Hi,
Thank you very much for giving us so much advice.

Here is some progress based on your mail conversation:
1. Check the metadata device:
[root@stor14 home]# thin_check /dev/mapper/vgg145155121036c-pool_nas_tmeta0
examining superblock
examining devices tree
examining mapping tree
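For what it's worth, thin_check also reports the result through its exit status, so a quick way to confirm the check really passed is:

[root@stor14 home]# thin_check /dev/mapper/vgg145155121036c-pool_nas_tmeta0; echo $?

where an exit status of 0 means no errors were found.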
2. Dump the metadata info:
[root@stor14 home]# thin_dump /dev/mapper/vgg145155121036c-pool_nas_tmeta0 -o nas_thin_dump.xml -r
[root@stor14 home]# cat nas_thin_dump.xml 
<superblock uuid="" time="1787" transaction="3545" data_block_size="128" nr_data_blocks="249980672">
</superblock>
Compared with other normal pools, it seems that all device nodes and mapping info in the metadata LV have been lost.
Could these have become 'orphan nodes'? And could you give us your semi-automatic repair tools so we can repair it?
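For comparison, a thin_dump from a healthy pool nests <device> entries and their mappings inside the superblock, roughly like this (the device ID and block numbers below are made up for illustration):

<superblock uuid="" time="1787" transaction="3545" data_block_size="128" nr_data_blocks="249980672">
  <device dev_id="1" mapped_blocks="1024" transaction="0" creation_time="0" snap_time="0">
    <range_mapping origin_begin="0" data_begin="0" length="1024" time="0"/>
  </device>
</superblock>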

Thank you very much!
Mars


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
