RE: RAID5 / 6 Growth

> By dev/backup I mean: if the Linux machine you're running is a backup
> system or a development machine that you can at least test this on,
> that'd be convenient.

	Oh, I see.  I'll think about it.  I have the machine offline
entirely right now, making some hardware changes.

> The reason you can't mount even though repair suggests you can might
> be a bug in xfsprogs/libs rather than complete corruption in the
> filesystem itself.

	It's possible, I guess, but the diagnostics seem pretty coherent.
Running xfs_repair in test mode finds a small but reasonable number of
issues.
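
(For clarity, by test mode I mean xfs_repair's no-modify run; /dev/md0 below
is just a stand-in for my actual md device:)

	# dry run: report problems without writing anything to the filesystem
	xfs_repair -n /dev/md0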

> Perhaps the newer version of the progs & libs might be able to handle
> the kind of corruption your filesystem has now, without the need to
> clear the log.

	Again, it's possible, of course.  I'll take it under advisement.
The question at hand, however, is: "Is there something at a lower level (md)
that could be addressed to clear the issues that xfs thinks it has?"

 
> In case you resort to clearing the log, wouldn't running xfs_repair
> result in eventually finding the lost inodes and putting them in
> lost+found?

	The lost inodes, yes.  XFS is reporting a few other errors as well.
They don't look too heinous, so it might be easier all the way around to
just clear the log and proceed.  An rsync should easily recover any lost
files.
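
(Roughly the sequence I have in mind; the device and paths below are just
placeholders for my setup:)

	# zero the dirty log and repair, since replaying it isn't working
	xfs_repair -L /dev/md0
	mount /dev/md0 /mnt/data

	# dry-run first to see what the backup would put back
	rsync -avn backup:/export/data/ /mnt/data/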

