'not usable' value radically increased

Hi All,

I have several RAID units combined into a single LVM setup. Each RAID array is a physical volume, and the PVs together back one large logical volume. For some reason, all of a sudden, one of the PVs has 12.00 TB of its space marked as 'not usable', and that space is missing from my XFS file system: it used to be 63 TB total but is now 51 TB. About 7 TB of this 'lost' space is data; about 4 TB should be free space.
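
For reference, the stack is the usual RAID-under-LVM layout, roughly like the sketch below (the device and LV names here are just placeholders, not my exact command history):

 pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1        # each RAID unit becomes a PV
 vgcreate vg1 /dev/sdb1 /dev/sdc1 /dev/sdd1    # PVs combined into one volume group
 lvcreate -l 100%FREE -n lv_data vg1           # one large LV spanning all the PVs
 mkfs.xfs /dev/vg1/lv_data                     # XFS on top of the LV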

Here is the relevant output from pvdisplay:

[root@nimbus ~]# pvdisplay
 --- Physical volume ---
 PV Name               /dev/sdc1
 VG Name               vg1
 PV Size               12.73 TB / not usable 12.00 TB
 Allocatable           yes (but full)
 PE Size (KByte)       4096
 Total PE              3337859
 Free PE               0
 Allocated PE          3337859
 PV UUID               sDWFWu-fJ5u-wITT-ikmU-6ytD-OdUf-pHnRd1
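
Doing the arithmetic on those PE numbers, the extents still add up to the full size (just shell arithmetic with bc, 4096 KB extents):

 echo "scale=2; 3337859 * 4096 / 1024 / 1024 / 1024" | bc   # total PEs * PE size, in TiB
 12.73

So the 3337859 allocated extents still cover the whole 12.73 TB; the 12.00 TB 'not usable' figure doesn't line up with the extent counts at all.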



I haven't a clue why 12 TB out of 12.73 TB would be unusable; one would think it would be all or nothing. The underlying disk for /dev/sdc1 still shows up in lsscsi.
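
In case it helps, here is what I was planning to try next, to compare what the kernel reports for the device size against what LVM has recorded (I haven't run these yet, so treat them as my best guess at the right commands):

 blockdev --getsize64 /dev/sdc1                                   # partition size as the kernel sees it
 pvs -o pv_name,pv_size,dev_size,pv_pe_count --units t /dev/sdc1  # LVM's view of the same PV
 vgcfgbackup -f /root/vg1-metadata.txt vg1                        # dump the current VG metadata for inspection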

I hope some expert can help me out with this. I'm stumped (and a little freaked out).

thanks,

Eli

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

