Re: all of my drives are spares

Wol, et al --

...and then Wol said...
% On 09/09/2023 12:26, David T-G wrote:
% > 
% > Wow ...  I'm used to responses pointing out either what I've left
% > out or how stupid my setup is, but total silence ...  How did I
% > offend and how can I fix it?
% 
% Sorry, it's usually me that's the quick response, everyone else takes ages,

True!


% and I'm feeling a bit burnt out with life in general at the moment.

Oh!  Sorry to hear that.  Not much I can do from here, but I can think
you a hug :-)  I hope things look up soon.


% > 
% > I sure could use advice on the current hangup before perhaps just
% > destroying my entire array with the wrong commands ...
% 
% I wonder if a controlled reboot would fix it. Or just do a --stop followed

I've tried a couple of reboots; they're stuck that way.  I'll try the
stop and assemble.
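For the record, the sequence I have in mind is just this (array name and
member devices are placeholders for my real ones, not gospel):

```shell
# Stop the array, then have mdadm re-read the superblocks and assemble.
# /dev/md0 and /dev/sd[b-g]1 stand in for my actual devices.
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 /dev/sd[b-g]1
```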


% by an assemble. The big worry is the wildly varying event counts. Do your
% arrays have journals.

No, I don't think so, unless they're created automagically.  Alas, I
don't recall the exact creation command :-/

How can I check for sure?  The -D flag output doesn't mention a journal,
whether enabled or missing.
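For anyone following along, here's the check I'll try (a sketch; the
array and member names are placeholders).  My understanding is that a
journaled array lists a journal device in --detail output, and --examine
shows "Journal" as the device role on the journal member:

```shell
# /dev/md0 is the array, /dev/sd[b-g]1 its members -- placeholders.
mdadm --detail /dev/md0 | grep -i journal
for d in /dev/sd[b-g]1; do
    echo "== $d"
    mdadm --examine "$d" | grep -E 'Device Role|Journal'
done
```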


% > 
% > With fingers crossed,
% > :-D
% 
% If the worst comes to the worst, try a forced assemble with the minimum
% possible drives (no redundancy). Pick the drives with the highest event
% counts. You can then re-add the remaining ones if that works.

Hmmmmm ...  For a RAID5 array, the minimum would be one left out, right?
So five instead of all six.  And the event counts seem to be three and
three, which is interesting but also doesn't point to any one favorite to
drop :-/
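In case I do end up going that route, my reading of the man page
suggests something like the below -- device names are placeholders, and
I'd pick the five keepers by comparing event counts with --examine
first:

```shell
# Compare event counts across members (placeholders for my real ones).
for d in /dev/sd[b-g]1; do
    printf '%s: ' "$d"
    mdadm --examine "$d" | awk '/Events/ {print $3}'
done
# Then force-assemble the five with the highest counts, leaving out the
# straggler, and re-add it if the array comes up.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm /dev/md0 --re-add /dev/sdg1
```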


% 
% Iirc this is actually not uncommon and it shouldn't be hard to recover from.
% I really ought to go through the archives, find a bunch of occasions, and
% write it up.

In your copious free time :-)  That would, indeed, be awesome.


% 
% The only real worry is that the varying event counts mean that some data
% corruption is likely. Recent files, hopefully nothing important. One thing

Gaaaaah!

Fortunately, everything is manually mirrored out to external drives, so
if everything did go tits-up I could reload.  I'll come up with a diff
using the externals as the sources and check ... eventually.
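The comparison itself should be simple enough with plain diff (mount
points are examples, of course):

```shell
# Recursively compare the array against the external mirror; -q reports
# only which files differ rather than dumping their contents.
diff -rq /mnt/external /mnt/array
```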


% that's just struck me, this is often caused by a drive failing some while
% back, and then a glitch on a second drive brings the whole thing down. When
% did you last check your array was fully functional?

Let me get back to you on that.  It's actually been a couple of weeks in
this state just waiting to get to it; life has been interesting here,
too.  But I have a heartbeat script that might have captured happy data
...  These are all pretty new drives after the 4T drive disaster a year
(or two?) ago, so they *should* be OK.


% 
% Cheers,
% Wol


Thanks again & stay tuned

:-D
-- 
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt



