Time passes. You are eaten by a Grue. Sheesh, this is taking a long time.

simon@proxmox:~$ top
top - 09:26:19 up 20:36, 11 users,  load average: 2.46, 2.49, 2.39
Tasks: 334 total,   9 running, 325 sleeping,   0 stopped,   0 zombie
Cpu(s): 52.8%us,  9.4%sy,  0.0%ni, 35.2%id,  2.5%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:  12299244k total, 12227584k used,    71660k free, 11042868k buffers
Swap: 11534332k total,        0k used, 11534332k free,   208204k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
18896 root      20   0 12288 1072  460 S   63  0.0 431:16.55 ntfs-3g
18909 root      20   0 12288 1064  460 R   62  0.0 419:17.11 ntfs-3g
18920 root      20   0 12288 1100  492 R   62  0.0 428:17.33 ntfs-3g
 9210 root      20   0  4068  520  328 S   54  0.0 661:59.92 gzip
 9138 root      20   0  4068  524  328 R   53  0.0 647:29.07 gzip
 9247 root      20   0  4068  524  328 S   53  0.0 651:28.36 gzip
25678 root      20   0  4068  524  328 R   52  0.0 439:49.20 gzip
24957 root      20   0  4068  524  328 R   51  0.0 437:44.01 gzip
25792 root      20   0  4068  524  328 S   48  0.0 433:28.14 gzip

This is hardly touching the CPU on the box (in my opinion). Any advice on
using renice? I've never used it before, but now seems like a good time.

TIA
Simon
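A minimal sketch of the renice mechanics, assuming the gzip and ntfs-3g PIDs
from the top output above (they will differ on another run) and that CPU,
rather than disk, is actually the constraint:

# Positive niceness = lower CPU priority; renice one gzip by PID:
proxmox:/home/simon# renice +10 -p 9210

# Or renice every gzip and ntfs-3g in one go via pgrep:
proxmox:/home/simon# renice +10 -p $(pgrep -d' ' gzip) $(pgrep -d' ' ntfs-3g)

# renice only changes CPU scheduling; if the processes are really waiting
# on disk, ionice (best-effort class 2, lowest priority 7) is the closer
# fit -- it only has an effect when the CFQ I/O scheduler is in use:
proxmox:/home/simon# ionice -c2 -n7 -p 9210

With 2.5%wa and roughly 35% idle in that snapshot, renice may not change
much; the commands above are just the mechanics of applying it.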
On 16 February 2011 19:36, Phil Turmel <philip@xxxxxxxxxx> wrote:
> On 02/16/2011 02:15 PM, Simon McNair wrote:
>> proxmox:/home/simon# vgscan --verbose
>> Wiping cache of LVM-capable devices
>> Wiping internal VG cache
>> Reading all physical volumes.  This may take a while...
>> Finding all volume groups
>> Finding volume group "pve"
>> Found volume group "pve" using metadata type lvm2
>> Finding volume group "lvm-raid"
>> Found volume group "lvm-raid" using metadata type lvm2
>> proxmox:/home/simon#
>> proxmox:/home/simon# lvscan --verbose
>> Finding all logical volumes
>>   ACTIVE    '/dev/pve/swap' [11.00 GB] inherit
>>   ACTIVE    '/dev/pve/root' [96.00 GB] inherit
>>   ACTIVE    '/dev/pve/data' [354.26 GB] inherit
>>   inactive  '/dev/lvm-raid/RAID' [8.19 TB] inherit
>>
>> proxmox:/home/simon# vgchange -ay
>>   3 logical volume(s) in volume group "pve" now active
>>   1 logical volume(s) in volume group "lvm-raid" now active
>
> Heh.  Figures.
>
>> proxmox:/home/simon# fsck.ext4 -n /dev/mapper/lvm-raid-RAID
>
> Actually, I wanted you to try with a capital N.  Lower-case 'n' is
> similar, but not quite the same.
>
>> e2fsck 1.41.3 (12-Oct-2008)
>> fsck.ext4: No such file or directory while trying to open /dev/mapper/lvm-raid-RAID
>>
>> The superblock could not be read or does not describe a correct ext2
>> filesystem.  If the device is valid and it really contains an ext2
>> filesystem (and not swap or ufs or something else), then the superblock
>> is corrupt, and you might try running e2fsck with an alternate superblock:
>>     e2fsck -b 8193 <device>
>>
>> proxmox:/home/simon# fsck.ext4 -n /dev/mapper/
>> control  lvm--raid-RAID  pve-data  pve-root  pve-swap
>
> Strange.  I guess it does that to distinguish dashes in the VG name from
> dashes between VG and LV names.
>
>> proxmox:/home/simon# fsck.ext4 -n /dev/mapper/lvm--raid-RAID
>> e2fsck 1.41.3 (12-Oct-2008)
>> /dev/mapper/lvm--raid-RAID has unsupported feature(s): FEATURE_I31
>> e2fsck: Get a newer version of e2fsck!
>>
>> My version of e2fsck always worked before?
>
> v1.41.14 was released 7 weeks ago.  But I suspect there's corruption in
> the superblock.  Do you still have your disk images tucked away somewhere
> safe?
>
> If so, try:
>
> 1) The '-b' option to e2fsck.  We need to experiment with '-n -b offset'
>    to find an alternate superblock, trying 'offset' equal to 8193, 16384,
>    and 32768, per the man page.
>
> 2) A newer e2fsprogs.
>
> Finally,
>
> 3) mount -r /dev/lvm-raid/RAID /mnt/whatever
>
> Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
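A sketch of how Phil's suggestion (1) could look in practice, assuming the
/dev/mapper/lvm--raid-RAID node from above and a newer e2fsprogs already in
place (the 1.41.3 build refuses the filesystem outright):

# Read-only (-n) checks against the standard backup superblock locations;
# which offset is right depends on the filesystem's block size:
proxmox:/home/simon# fsck.ext4 -n -b 8193  /dev/mapper/lvm--raid-RAID   # 1k blocks
proxmox:/home/simon# fsck.ext4 -n -b 16384 /dev/mapper/lvm--raid-RAID   # 2k blocks
proxmox:/home/simon# fsck.ext4 -n -b 32768 /dev/mapper/lvm--raid-RAID   # 4k blocks

# If the primary superblock is readable enough, dumpe2fs lists every
# backup superblock location directly:
proxmox:/home/simon# dumpe2fs /dev/mapper/lvm--raid-RAID | grep -i superblock

# Failing that, mke2fs -n only *simulates* an mkfs and prints the backup
# locations it would use -- the listed offsets only match reality if the
# block size and options match the original mkfs, so treat with care:
proxmox:/home/simon# mke2fs -n -b 4096 /dev/mapper/lvm--raid-RAID

If every offset fails the same way, that points back at suggestion (2):
the 1.41.3 e2fsck will keep refusing the volume until it is upgraded or the
superblock corruption is cleared.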