Re: How to recover after md crash during reshape?

Good morning Andras,

On 10/20/2015 11:52 PM, andras@xxxxxxxxxxxxxxxx wrote:
> Phil,
> 
> Thank you so much for the detailed explanation and your patience with
> me! Sorry for not being more responsive - I don't have access to this
> mail account from work.

No worries.

>> for x in /sys/block/*/device/timeout ; do echo 180 > $x ; done
>>
>> (Arrange for this to happen on every boot, and keep doing it manually
>> until your boot scripts are fixed.)
> 
> Yes, will do. In your links below it seems that you're half advocating
> for using desktop drives in RAID arrays, half advocating against. From
> what I can tell, it seems the recommendation might depend on the
> use-case. If one doesn't care too much about instant performance in case
> of errors, one might want to use desktop drives (with the above fix).
> If one wants reliable performance, one probably wants NAS drives. Did I
> understand the basic trade-off correctly?

Times change.  At the time some of those were written, desktop drives
with scterc support were still available, but it defaulted to off.  Those
are OK in a RAID if you have the appropriate smartctl command in your
boot scripts.
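(For reference, a sketch of that boot-script command — assuming /dev/sda
stands in for one of your array members; the 70 is in units of 100 ms,
i.e. 7.0 seconds:)

```shell
# Check whether the drive reports SCT ERC support and its current setting.
smartctl -l scterc /dev/sda

# Enable SCT ERC with a 7.0-second limit for reads and writes, so the
# drive gives up on a bad sector well before the kernel's 30-second
# default command timeout fires.
smartctl -l scterc,70,70 /dev/sda
```

The setting is typically lost on a power cycle, which is why it has to go
in the boot scripts.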

Long timeouts with non-scterc drives, in my opinion, create a user
impression that things are broken, even if the drive is fine (UREs are
natural and unavoidable in the life of a drive).  Users are prone to
drastic measures when they think something is broken.  Also,
*applications* might not wait that long for their read, either.  So, I
only recommend the long timeout solution when an array is already in
trouble with such drives.
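(As for arranging the 180-second timeout on every boot, one option is a
udev rule — this is only a sketch, and the rule file name is arbitrary:)

```shell
# /etc/udev/rules.d/60-disk-timeout.rules  (hypothetical file name)
# On every "add" event for an sd* block device, set the SCSI command
# timeout to 180 seconds; %k expands to the kernel name (e.g. sda).
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/bin/sh -c 'echo 180 > /sys/block/%k/device/timeout'"
```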

> It seems that people also think that green drives are a bad idea in
> RAIDs in general - mostly because the frequent parking of heads reduces
> life-time. Is that a correct statement?

I don't have enough experience with green drives to say.  The few that I
have (bought before I discovered the dropped scterc support) became part
of my offsite backup rotation.

> Yes sir! I will go through the steps and report back. One question: the
> reason I shouldn't attempt to re-create the new 10-disk array is that it
> would wipe out the 7->10 grow progress, so MD would think that it's a
> fully grown 10-disk array, right?

Right.  Your three extra drives never really were incorporated into the
array, so the data layout is still a 7-drive pattern.
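(If you want to convince yourself of that before touching anything — a
sketch, with /dev/sdX standing in for each member drive:)

```shell
# Print the md superblock of each member.  During an interrupted reshape,
# v1.x metadata records both the old and new geometry and how far the
# reshape got, so you can confirm the data is still in the 7-drive layout.
mdadm --examine /dev/sdX
```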

Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


