how to remove a device

Hi all,
I'm currently using cLVM. I moved data from a RAID0 disk to a RAID5 disk with
pvmove on my first node and then removed the old disk from its volume group
with vgreduce.
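
For reference, the sequence I ran was roughly the following (device names here
are only examples and may not match my real PVs):

  pvcreate /dev/sdX1                    # prepare the new RAID5 disk as a PV
  vgextend padicat.bench /dev/sdX1      # add it to the volume group
  pvmove /dev/sdb1 /dev/sdX1            # migrate extents off the old RAID0 PV
  vgreduce padicat.bench /dev/sdb1      # remove the old PV from the VG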

After that I deleted the first disk. Now on the secondary node I'm getting
I/O errors like these:

[root@inf18 ~]# pvscan
  /dev/sdb: read failed after 0 of 4096 at 214748299264: Input/output error
  /dev/sdb1: read failed after 0 of 512 at 214745481216: Input/output error
  PV /dev/sde1           VG padicat.bench   lvm2 [200.00 GB / 0    free]
  PV /dev/sda1           VG vm.fibra        lvm2 [10.00 GB / 0    free]
  PV /dev/sdd1           VG vm.fibra        lvm2 [25.00 GB / 0    free]
  PV /dev/sdc2           VG vm.fibra        lvm2 [1012.00 MB / 492.00 MB free]
  PV /dev/cciss/c0d0p3                      lvm2 [91.69 GB]
  Total: 5 [327.66 GB] / in use: 4 [235.98 GB] / in no VG: 1 [91.69 GB]
[root@inf18 ~]# vgreduce padicat.bench /dev/sdb1
  Physical Volume "/dev/sdb1" not found in Volume Group "padicat.bench"

Also, I can't use the LVs in the VG that contained that disk. Can anyone help
me tell LVM to stop using sdb? It's a production system, so rebooting it is
not an option for me right now.
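
Would adding a reject filter for the dead device in /etc/lvm/lvm.conf on that
node, and then re-running vgscan/pvscan, be the right way to do this? Just a
guess on my part, something like:

  # devices section of /etc/lvm/lvm.conf -- untested sketch
  filter = [ "r|^/dev/sdb|", "a|.*|" ]   # reject sdb*, accept everything else

Or should I instead remove the stale SCSI device from the kernel, e.g.

  echo 1 > /sys/block/sdb/device/delete

before rescanning? Any pointers on which is safer on a live clustered setup
would be appreciated.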

Thanks!
Jordi

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
