Re: RAID5 created by 8 disks works with xfs


On 3/31/2012 9:05 PM, daobang wang wrote:
> There is another problem: it seems that the file system was
> damaged when the pressure was very high,

What kernel version are you using?  Did you get an oops?  What's in dmesg?

> it reported input/output error

Actual errors would be very helpful.

> when I typed ls or other commands, and I tried to repair it with
> xfs_repair /dev/vg00/lv0000, but xfs_repair failed to allocate memory, we

Did XFS automatically unmount the filesystem?  If not, the error
reported may not indicate a problem with the filesystem.  XFS shuts
filesystems down when it encounters serious problems.
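If XFS did shut the filesystem down, the kernel log will say so.  A
quick way to check (the grep pattern here is just a starting point, not
an exhaustive match for every XFS message):

```shell
# Search the kernel ring buffer for XFS shutdown or I/O error messages.
# grep exits nonzero when nothing matches, so '|| true' keeps this from
# looking like a failure when the log is clean.
dmesg | grep -iE 'xfs|i/o error' || true
```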

> have 4GB memory on the machine, and the logical volume was a little
> more than 15TB.  Could it be repaired successfully if we have enough
> memory?

Hard to say.  Depends on what happened and the extent of the damage, if
any.  You've presented no log or debug information.  I would think 4GB
should be plenty to run xfs_repair.  Try

$ xfs_repair -n -vv -m 1 /dev/vg00/lv0000

The dmem = value in the output tells you how much RAM xfs_repair needs.
If it's more than 2GB and you're running a PAE kernel, switch to a
64 bit kernel and 64 bit userland.  If dmem is over 4GB then you need
more DIMMs in the machine.  Or maybe simply dropping caches (as root)
before running xfs_repair might help:

# echo 3 > /proc/sys/vm/drop_caches
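If you want to script the memory check, something like this pulls the
megabyte figure off the dmem line.  The sample line below is
hypothetical; the exact output format varies across xfsprogs versions,
so treat this as a sketch and eyeball your real output:

```shell
# Hypothetical example line from `xfs_repair -n -vv -m 1` output; the
# real format depends on your xfsprogs version.
sample="        - dmem = 2048"
# Pull the megabyte figure off the end of the dmem line
dmem_mb=$(printf '%s\n' "$sample" | awk '/dmem/ {print $NF}')
echo "xfs_repair wants about ${dmem_mb}MB"
```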

-- 
Stan

