Re: Best way to add caching to a new raid setup.

I use mdadm raid.  From what I can tell, mdadm has been around a lot
longer and is understood by a much larger group of users, so if
something does go wrong there are plenty of people who can help.

I have been running MythTV on mdadm since early 2006, with LVM on
top of it.  I have migrated from 4x500GB to 4x1.5TB and am currently
on 7x3TB.
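For anyone wondering what such a capacity migration looks like in practice, a rough sketch follows. This is not from the original mail; the array, VG, and LV names (md0, vg0, media) and an ext4 filesystem are assumptions for illustration:

```shell
# Replace each member disk in turn, letting the array rebuild fully
# between swaps (this is a sketch; run each step deliberately):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...swap in the larger physical drive, partition it, then:
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # wait for the rebuild before touching the next disk

# Once every member is the larger size, grow each layer in order:
mdadm --grow /dev/md0 --size=max    # expand the array to the new disks
pvresize /dev/md0                   # let LVM see the larger PV
lvextend -l +100%FREE /dev/vg0/media
resize2fs /dev/vg0/media            # finally grow the filesystem
```

The order matters: array first, then PV, then LV, then filesystem, since each layer can only grow into space its lower layer already exposes.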

One trick I used on the 3TB drives: I partitioned each disk into
four 750GB partitions, and each set of 7 matching partitions makes
up its own array and PV.  Often when a disk gets a bad block or a
random I/O failure, it only takes a single array from +2 redundancy
down to +1, and the rebuild finishes faster.  I created mine like
below, making sure md13 has all the sdX3 partitions on it, so that
when you have to add devices the numbers line up.  This also means
that enlarging takes 4 separate grow operations, but no single one
takes more than a day.  So there might be a good reason to split,
say, a 12TB drive into 6x2TB or 4x3TB just so a grow does not take
a week to finish.  Also make sure to use a write-intent bitmap:
when you re-add a disk that was previously in the array, rebuilds
are much faster, especially if the drive has only been out for a
few hours.

Personalities : [raid6] [raid5] [raid4]
md13 : active raid6 sdi3[9] sdg3[6] sdf3[12] sde3[10] sdd3[1] sdc3[5] sdb3[7]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md14 : active raid6 sdi4[11] sdg4[6] sdf4[9] sde4[10] sdb4[7] sdd4[1] sdc4[5]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

md15 : active raid6 sdi5[11] sdg5[8] sdf5[9] sde5[10] sdb5[7] sdd5[1] sdc5[5]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

md16 : active raid6 sdi6[9] sdg6[7] sdf6[11] sde6[10] sdb6[8] sdd6[1] sdc6[5]
      3615495680 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk
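For reference, a layout like the one above could be built along these lines. This is a sketch only, not the exact commands used: the VG/LV names (vg_myth, media) are assumptions, while --level=6, --chunk=512, --bitmap=internal, and the seven member disks (sdb-sdg plus sdi) match the mdstat output:

```shell
# Partition each disk identically first (e.g. with parted), so that
# partitions 3-6 on every disk are the ~750GB RAID members.

# One RAID-6 array per partition "row", with an internal write-intent
# bitmap so re-added disks resync quickly:
for n in 3 4 5 6; do
    mdadm --create /dev/md1$n --level=6 --raid-devices=7 \
          --chunk=512 --bitmap=internal \
          /dev/sd{b,c,d,e,f,g,i}$n
done

# Each array becomes an LVM PV; all four join one volume group:
pvcreate /dev/md13 /dev/md14 /dev/md15 /dev/md16
vgcreate vg_myth /dev/md13 /dev/md14 /dev/md15 /dev/md16
lvcreate -l 100%FREE -n media vg_myth
```

Keeping the same partition number on every disk in a given array is what makes the "the numbers are the same" convention above work when replacing devices later.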



On Sat, Aug 29, 2020 at 11:00 AM Roman Mamedov <rm@xxxxxxxxxxx> wrote:
>
> On Sat, 29 Aug 2020 16:34:56 +0100
> antlists <antlists@xxxxxxxxxxxxxxx> wrote:
>
> > On 28/08/2020 21:39, Ram Ramesh wrote:
> > > One thing about LVM that I am not clear. Given the choice between
> > > creating /mirror LV /on a VG over simple PVs and /simple LV/ over raid1
> > > PVs, which is preferred method? Why?
> >
> > Simplicity says have ONE raid, with ONE PV on top of it.
> >
> > The other way round is you need TWO SEPARATE (at least) PV/VG/LVs, which
> > you then stick a raid on top.
>
> I believe the question was not about the order of layers, but whether to
> create a RAID with mdadm and then LVM on top, vs. abandoning mdadm and using
> LVM's built-in RAID support instead:
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_create
>
> Personally I hugely prefer mdadm, due to the familiar and convenient interface
> of the program itself, as well as of /proc/mdstat.
>
> --
> With respect,
> Roman


