----- Original Message -----
| Dear All,
|
| I am in desperate need of LVM data rescue for my server.
| I have a VG called vg_hosting consisting of 4 PVs, each on a
| separate hard drive (/dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1).
| One LV, lv_home, was created to use all the space of the 4 PVs.
|
| Right now, the third hard drive is damaged, and therefore the third PV
| (/dev/sdc1) can no longer be accessed. I would like to recover whatever
| is left on the other 3 PVs (/dev/sda1, /dev/sdb1, and /dev/sdd1).
|
| I have tried the following:
|
| 1. Removing the broken PV:
|
| # vgreduce --force vg_hosting /dev/sdc1
|   Physical volume "/dev/sdc1" still in use
|
| # pvmove /dev/sdc1
|   No extents available for allocation

This would indicate that you don't have sufficient free extents to move the data off of this disk. If you have another disk, you could try adding it to the VG and then moving the extents off the failing PV (see the sketches at the end of this message).

| 2. Replacing the broken PV:
|
| I was able to create a new PV and restore the VG config/metadata:
|
| # pvcreate --restorefile ... --uuid ... /dev/sdc1
| # vgcfgrestore --file ... vg_hosting
|
| However, vgchange gives this error:
|
| # vgchange -a y
|   device-mapper: resume ioctl on failed: Invalid argument
|   Unable to resume vg_hosting-lv_home (253:4)
|   0 logical volume(s) in volume group "vg_hosting" now active

There should be no need to create a PV and then restore the VG unless the entire VG is damaged. The configuration should still be available on the other disks; adding the new PV and moving the extents should be enough.

| Could someone help me, please?
| I'm in dire need of help to save the data, or at least some of it if possible.

Can you not see the PV/VG/LV at all?

--
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax     : 778-782-3045
E-Mail  : jpeltier@xxxxxx
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
"Build upon strengths and weaknesses will generally take care of themselves" - Joyce C. Lock
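
To make the first suggestion concrete, here is a minimal sketch, assuming the replacement disk appears as /dev/sde with a single partition /dev/sde1 (the device name is an assumption; check with lsblk first). Note that pvmove has to read the old PV, so this only helps while /dev/sdc1 is still at least partially readable:

# pvcreate /dev/sde1                # initialise the new partition as a PV
# vgextend vg_hosting /dev/sde1     # add it to the VG to supply free extents
# pvmove /dev/sdc1                  # copy extents off the failing PV
# vgreduce vg_hosting /dev/sdc1     # then drop the emptied PV from the VG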
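
If the failed disk is not readable at all, pvmove cannot help, and a normal "vgchange -a y" will keep failing because the device-mapper table for lv_home cannot be built with a segment missing. One hedged alternative, not mentioned above, is LVM's partial activation, which maps the missing extents to an error target so the surviving data can be copied out read-only; expect I/O errors for anything that lived on /dev/sdc1:

# vgchange -ay --partial vg_hosting
# mount -o ro /dev/vg_hosting/lv_home /mnt   # may need filesystem-specific repair first
# rsync -a /mnt/ /path/to/safe/storage/      # placeholder destination; copy out what is readable

Avoid "vgreduce --removemissing --force" until the data has been copied out, since it removes LVs that used the missing PV.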
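
As for the closing question, the standard reporting commands show how much of the layout LVM can still see; a VG with a missing PV is flagged "p" (partial) in the vgs attribute column:

# pvs                  # the missing PV shows up as "unknown device"
# vgs                  # "p" in the Attr field marks a partial VG
# lvs -a -o +devices   # shows which PVs hold each LV's extents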