Re: Looking for a life-save LVM Guru

On Fri, Feb 27, 2015 at 5:35 PM, Khemara Lyn <lin.kh@xxxxxxxxxxxx> wrote:
> Dear All,
>
> I am in desperate need for LVM data rescue for my server.
> I have a VG called vg_hosting consisting of 4 PVs, each contained in a
> separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
> An LV, lv_home, was created to use all the space of the 4 PVs.

Mirrored? Or linear (the default)? If it's linear, then it's like one
big hard drive. The single-drive equivalent of losing 1 of 4 drives in
a linear LV would be ablating 1/4 of the drive's surface, making that
region neither readable nor writable. Because critical filesystem
metadata is distributed across the whole volume, the filesystem is
almost certainly irreparably damaged.[1]
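
You can confirm which layout it is from the LVM metadata even with the
LV down; something like this (field names from memory, adjust if lvs
complains):

# lvs -a -o lv_name,vg_name,segtype,stripes,devices vg_hosting

If segtype comes back as "linear" or "striped", the rest of this
applies; "mirror" or "raid1" would be a much happier story. The text
backup under /etc/lvm/backup/vg_hosting shows the same segment types
if lvs won't run at all.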


> Right now, the third hard drive is damaged; and therefore the third PV
> (/dev/sdc1) cannot be accessed anymore.

Damaged how? Is it dead?

What file system?
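
If you're not sure, something along these lines should show whether
the kernel still sees the disk at all and what's on the surviving PVs
(device names taken from your description):

# smartctl -a /dev/sdc
# dmesg | grep -i sdc
# blkid /dev/sda1 /dev/sdb1 /dev/sdd1

blkid on the partitions will only report LVM2_member; the filesystem
type itself won't show until the LV can be activated.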


> I would like to recover whatever is
> left in the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
>
> I have tried with the following:
>
> 1. Removing the broken PV:
>
> # vgreduce --force vg_hosting /dev/sdc1
>   Physical volume "/dev/sdc1" still in use
>
> # pvmove /dev/sdc1
>   No extents available for allocation
>
> 2. Replacing the broken PV:
>
> I was able to create a new PV and restore the VG Config/meta data:
>
> # pvcreate --restorefile ... --uuid ... /dev/sdc1
> # vgcfgrestore --file ... vg_hosting
>
> However, vgchange would give this error:
>
> # vgchange -a y
>           device-mapper: resume ioctl on  failed: Invalid argument
>           Unable to resume vg_hosting-lv_home (253:4)
>           0 logical volume(s) in volume group "vg_hosting" now active
>
> Could someone help me please???
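
For what it's worth, pvmove was never going to work here: it has to
read the extents off the source PV, which a dead drive won't allow,
and it also needs enough free extents elsewhere in the VG to land them
on. You can check the second condition with:

# pvs -o pv_name,vg_name,pv_size,pv_free
# vgs -o vg_name,vg_size,vg_free

With vg_free at 0 there's nowhere for pvmove to put anything, even on
a healthy drive. As for the metadata restore, first see what a test
run says: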

# vgcfgrestore -tv vg_hosting

If this produces some viable sign of success, then do it again without
the -t. If you get scary messages, I advise not proceeding. If the
command without -t succeeds, then try this:

# lvs -a -o +devices
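
That should show whether lv_home maps across all four PVs again. If
the restore route keeps failing, LVM can also activate a VG with a PV
missing (if I remember right, the missing extents get backed by an
error target), which at least gives recovery tools something to point
at. Roughly:

# vgchange -ay --partial vg_hosting

Treat anything activated that way as strictly read-only.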

In any case, there's a huge hole where both filesystem metadata and
file data were located, so I'd be shocked (like, really shocked) if
either ext4 or XFS will mount, even read-only. So I expect this is
going to be a scraping operation with testdisk or debugfs.
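
If it's ext4, debugfs in catastrophic mode is the kind of thing I
mean, run against the LV (or better, against a ddrescue image of it);
the paths here are guesses based on your VG/LV names:

# debugfs -c -R 'ls -l /' /dev/vg_hosting/lv_home
# debugfs -c -R 'rdump /home/someuser /mnt/rescue' /dev/vg_hosting/lv_home

-c tells debugfs not to trust the block and inode bitmaps, which is
about all you can hope for with a quarter of the device gone. photorec
(ships with testdisk) is the brute-force fallback if the directory
structure is too damaged for that.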


[1] Btrfs can survive this to some degree because, by default, the
filesystem metadata is dup on a single drive (except on SSDs) and
raid1 across multiple devices. So while you lose the file data that
was on the missing drive, the fs itself is intact and will even mount
normally.
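
(You can see those profiles on any mounted btrfs volume with, for
example:

# btrfs filesystem df /mnt/point

which reports the Data and Metadata block group profiles, e.g. single,
DUP, or RAID1.)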


-- 
Chris Murphy



