Heinz: I’ve posted to various newsgroups, combed through FAQs,
and can’t find an answer to my problem. I have my root filesystem “/” on a logical volume,
/dev/Volume00/RootVol, in a volume group whose single physical volume is a software RAID-5 device,
/dev/md1. I wanted to test my crash
recovery, i.e. verify that my backups could actually be restored. So rather than test on my working
system, I decided to create a new logical volume, /dev/Volume00/TestRoot. I did the following:
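(Reconstructing this from memory, so the sizes and mkfs options below are approximate.)

    lvcreate -L 2G -n TestRoot Volume00    # create the scratch root LV (size is a guess)
    mke2fs /dev/Volume00/TestRoot          # put a fresh ext2 filesystem on it
    mount /dev/Volume00/TestRoot /mnt
    # ...restored my backup of “/” into /mnt...
    # ...pointed the bootloader and /etc/fstab at the new volume...
    reboot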
When my kernel gets to the point where it needs to access “/”,
I get an error along the lines of “cannot mount /dev/Volume00/TempRoot, bad superblock, try e2fsck -b 8193 …”. When it
dumps me to a shell and I run “vgscan”, I get “no volumes found”; yet
when I run “pvscan”, it finds one physical volume, /dev/md1, in volume group “Volume00”.
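For completeness, the check the error message suggests would presumably be

    e2fsck -b 8193 /dev/Volume00/TempRoot    # 8193 is the usual first backup superblock on a 1k-block ext2

though I can’t see how a fsck would help if the logical volume isn’t even active at that point.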
Did I miss something in my restore? Do I need to restore some sort of configuration? I have read the manpages
for vgcfgbackup and vgcfgrestore, and I believe those back up volume group metadata and write it back out
to the physical volumes.
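If I do need to put metadata back, my reading of the manpages is that the invocation would be something like this (using my VG name and PV path; I haven’t actually run it):

    vgcfgbackup Volume00                 # save the current VG metadata to a backup file
    vgcfgrestore -n Volume00 /dev/md1    # write the backed-up metadata back onto the PV

but since pvscan still sees the volume group, I’m not sure this is the problem.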
As I can boot into my old root (or boot from a CD)
and both see and mount /dev/Volume00/TempVol, I’m sure there’s
nothing wrong with the volume group configuration on the disks.
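For reference, what I do from the old root or the rescue CD is roughly:

    vgscan                         # rescan the disks and rebuild the LVM tables
    vgchange -a y Volume00         # activate the volume group
    mount /dev/Volume00/TempVol /mnt

and all of that works without complaint.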
Sorry to ask such a question, but as I said, I’ve looked and looked for an answer elsewhere, to no avail. Any help would be greatly appreciated.

Thanks, Dave