On Sat, 10 Jul 2010 23:32:57 -0700, you wrote:

> I got some automated emails this Sunday about I/O errors coming from
> the computer

That smells like a hardware problem. What type of RAID is this:
RAID-5, RAID-10, RAID-6? Are there any alarms from the RAID controller?
Can you check the SMART status of the drives? What are the JBODs, are
these Dell MD1000s?

> On one of the physical volumes (PVs), /dev/sdc1, I noticed when I ran
> pvdisplay that of the 12.75 TB comprising the volume, 12.00 TB was
> being shown as 'not usable'.

That smells even more like a hardware problem. Check all your system
logs for I/O errors and for errors coming from the SAS driver. Are you
using the mptsas or the megaraid driver? Grep the logs for the driver
name to check for any messages (timeouts, I/O errors, etc.); see the
sketches at the end of this message.

> thinking it might find the missing data. Instead the filesystem
> decreased back to 51 TB. I rebooted and tried again a couple of times
> and the same thing happened. I'd really, really like to get that data
> back somehow and also to get the filesystem to where we can start
> using it again.

Check the dmesg output right after the xfs_repair. My bet: there is an
I/O error (bad cable? hosed drive?) reported by the controller, the PV
fails (message from LVM), and then xfs_repair does what it must do: it
truncates the filesystem to the size of the underlying device.

Unfortunately the data may still be on the drives, but a tool like
photorec is probably your only chance to get it back from the raw
drives. Metadata, filenames and directory hierarchies are almost
certainly gone once and for all.

-- 
------------------------------------------------------------------------
Emmanuel Florac      |   Direction technique
                     |   Intellique
                     |   <eflorac@xxxxxxxxxxxxxx>
                     |   +33 1 78 94 84 02
------------------------------------------------------------------------
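PS: here is roughly what I mean by those checks, as a sketch only. The
device name /dev/sdc and the log path /var/log/messages are examples,
adjust them for your setup; smartctl assumes smartmontools is installed,
and drives sitting behind a hardware RAID controller may need an extra
-d option (e.g. -d megaraid,N) before they are reachable:

  # SMART health of one drive; repeat for every member drive
  smartctl -H /dev/sdc
  smartctl -a /dev/sdc | grep -i -e reallocated -e pending -e offline

  # driver and I/O errors in the kernel log and in syslog
  dmesg | grep -i -e mptsas -e megaraid -e 'i/o error'
  grep -i -e mptsas -e megaraid -e 'i/o error' /var/log/messages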
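To see whether the shrunken filesystem lines up with a failed PV, run
something like this right after xfs_repair (again just a sketch;
/dev/sdc1 is the PV you named in your report):

  # how LVM sees the PV and which devices back each LV
  pvdisplay /dev/sdc1    # compare 'PV Size' against the 'not usable' figure
  lvs -o +devices

  # the most recent kernel messages, where an I/O error should show up
  dmesg | tail -n 50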
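And if it does come to carving files off the raw drives, a typical
photorec session starts roughly like this (the recovery directory must
live on a different disk with enough free space; /mnt/recovery is an
example path):

  # photorec only reads from the source device; recovered files are
  # written to the directory passed with /d
  photorec /log /d /mnt/recovery/ /dev/sdc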