Re: Inactive arrays

On Wed, Sep 14, 2016 at 12:16 PM, Daniel Sanabria <sanabria.d@xxxxxxxxx> wrote:
> BRAVO!!!!!!
>
> Thanks a million Chris! After following your advice on recovering the
> MBR and the GPT the arrays re-assembled automatically and all data is
> there.
>
> I already changed the type to make it consistent (FD00 on both
> partitions) and working on setting up the timeouts to 180 at boot
> time. Other than replacing the green drives with something more
> suitable (any suggestions are welcome), what else would you suggest to
> change to make the setup a bit more consistent and upgrade proof (i.e.
> having different metadata versions doesn't look right to me)?

Like I mentioned, there's something about the Greens spinning down that you
might look at. I'm not sure whether delays in spinning back up are a
contributing factor to anything. I'd expect that if the kernel/libata
sends a command to the drive and that drive is slow to spin up, the
kernel just waits up to whatever the command timer is set to. So if you
set that to 180 seconds it should be fine, because no drive takes 3
minutes to spin up. But I don't know whether there's some other way for
these drives to cause confusion.
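
For the boot-time part, one way to make a 180 second command timer stick
is a udev rule; this is just a sketch (the filename and the sd[a-z] match
are arbitrary examples, adjust them for your actual drives):

    # /etc/udev/rules.d/60-scsi-timeout.rules  (example path)
    # Same effect as: echo 180 > /sys/block/sdX/device/timeout
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
        ATTR{device/timeout}="180"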

Yeah, I don't think you need to worry too much about the metadata. 0.90
is deprecated, uses in-kernel autodetect rather than the initrd-based
assembly that 1.x metadata uses, and can be more complex to troubleshoot.
But as long as it's working, I honestly wouldn't mess with it. If you do
want to simplify, make sure you have current backups first, because
changes like that are a ripe opportunity for mistakes that end in data
loss. I'd pretty much assume the user will break something: your layout
isn't too complex compared to others I've seen, but there are still
opportunities to make simple mistakes that blow everything up, and then
you're screwed.
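
If you want to double-check which metadata version each array and member
device is using, mdadm will tell you (device names here are just
examples):

    mdadm --detail /dev/md0 | grep -i version
    mdadm --examine /dev/sda1 | grep -i version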

So I'd say it's easier to plan for a future point when you're going to
buy a bunch of new drives and do a complete migration, rather than change
the existing metadata just for the sake of changing it.

And one thing to consider at the planning stage is LVM RAID. You could
pull all of your drives into one big pool and create LVs as if each were
an individual RAID array, with each LV having its own RAID level. In many
ways it's easier, because you're already using LVM on top of RAID on top
of partitioning. Instead you can create basically one partition per
drive, add them all to LVM, and then manage the LV and the RAID level at
the same time. The main issue is familiarity with the tools. If you're
more comfortable with mdadm, use that. The lvm tools are a hurdle (it's
like emacs for storage: metric piles of flags, documentation, and
features, and it still doesn't have everything mdadm offers for the RAID
side). But it'll do scrubs and device replacements; all the basic stuff
is there. Monitoring for drive failures is a little different, too: I
don't think it has a way to email you on drive failures/ejections the way
mdadm does, so you'll have to look at that as well. Note that on the
backend LVM RAID uses the md kernel driver just like mdadm does; it's
only the user space tools and the on-disk metadata that differ.
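
As a rough sketch of what the LVM RAID route looks like (device names,
sizes, and LV names are made up, and I haven't tested this exact
sequence, so check the man pages):

    # one big partition per drive, pooled into a single VG
    pvcreate /dev/sd[abcd]1
    vgcreate pool /dev/sd[abcd]1

    # each LV gets its own RAID level
    lvcreate --type raid1 -m 1 -L 100G -n home  pool
    lvcreate --type raid5 -i 3 -L 1T   -n media pool

    # scrub and drive replacement, roughly the mdadm equivalents
    lvchange --syncaction check pool/media
    lvconvert --replace /dev/sdd1 pool/media /dev/sde1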



-- 
Chris Murphy