Re: Uneven wear on raid1 devices

On 25/01/16 16:40, Roman Mamedov wrote:
On Mon, 25 Jan 2016 16:29:02 +1100
Adam Goryachev <adam@xxxxxxxxxxxxxxxxxxxxxx> wrote:

    9 Power_On_Hours          -O--CK   098   098   000    -    6435
    9 Power_On_Hours          -O--CK   095   095   000    -    23178
The 2nd drive has almost 4x as much power-on time as the first one. My guess
would be that it accumulated all that write usage back before you put it into
this RAID1.

If you want to ensure the RAID1 usage is even, record the SMART data you have
now, and compare it to the readings you will have a month later.
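
That's a good idea. Something like this rough sketch (assuming Python 3 and
smartmontools are installed and it is run as root; /dev/sda, /dev/sdb and the
attribute names are placeholders for my actual setup) should be enough to
snapshot the raw values so I can diff them against a later run:

#!/usr/bin/env python3
# Snapshot a few wear-related SMART attributes for each RAID1 member so the
# raw values can be compared against a later snapshot.
import json
import subprocess
import time

DEVICES = ["/dev/sda", "/dev/sdb"]   # placeholder names for the RAID1 members
WATCH = {"Power_On_Hours", "Total_LBAs_Written", "Wear_Leveling_Count"}

def read_attributes(dev):
    """Parse 'smartctl -A <dev>' output into {attribute_name: raw_value}."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; the raw value is the last
        # column in both the classic and the brief table formats.
        if len(fields) >= 8 and fields[0].isdigit():
            attrs[fields[1]] = fields[-1]
    return attrs

snapshot = {dev: {name: value
                  for name, value in read_attributes(dev).items()
                  if name in WATCH}
            for dev in DEVICES}

fname = time.strftime("smart-snapshot-%Y%m%d.json")
with open(fname, "w") as f:
    json.dump(snapshot, f, indent=2)
print("Wrote %s; run again in a month and diff the two files." % fname)

Running it once now and once in a month, then diffing the two JSON files,
should show whether the writes are being spread evenly across both devices.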

Hmmm, oops, I should have looked at that. Now, of course, I realise one drive was replaced under warranty when it failed around 6 months ago (well, the value says just under 9 months, actually). That would explain the difference in wear and power-on hours, aside from the initial full sync when the new device was first installed.

So one drive is about 2.5 years old with 74% life remaining, and the other is about 9 months old with 93% life remaining.
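
For the record, a quick back-of-the-envelope conversion of those Power_On_Hours
raw values into rough ages (a throwaway Python snippet, nothing more):

# Convert the Power_On_Hours raw values quoted above into approximate ages.
for hours in (6435, 23178):
    years = hours / (24 * 365)
    print("%5d h  ~ %.2f years  ~ %2.0f months" % (hours, years, years * 12))
# Prints roughly 0.73 years (~9 months) and 2.65 years (~32 months), which
# lines up with the "just under 9 months" and "about 2.5 years" figures.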

Sounds good to me; I should expect these to last at least the 5 years I was hoping for, and I will probably want to upgrade them to increase capacity before that happens anyway.

Thanks for your help

Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


