Re: Best way to add caching to a new raid setup.

On 8/29/20 11:26 AM, Roger Heflin wrote:
I use mdadm raid.  From what I can tell mdadm has been around a lot
longer and is better understood by a larger group of users, so if
something does go wrong there are a significant number of people who
can help.

I have been running MythTV on mdadm since early 2006, with LVM on
top of it.  I have migrated from 4x500GB to 4x1.5TB and am currently
on 7x3TB.

One trick I did on the 3TB drives was to partition each disk into
four 750GB partitions, with each set of seven matching partitions
making up one RAID6 array that serves as a PV.  Often, if a disk
gets a bad block or a random I/O failure, it only takes a single
array from +2 redundancy down to +1, and that one array rebuilds
faster.  I created mine as shown below, making sure md13 has all the
sdX3 partitions in it, so that when you have to add devices the
numbering stays consistent.  This also means that growing the whole
thing takes four separate grows, but no single grow takes more than
a day.  So there might be a good reason to split, say, a 12TB drive
into 6x2TB or 4x3TB partitions just so a grow does not take a week
to finish.  Also make sure to use a write-intent bitmap: when you
re-add a previously removed disk, the rebuild is much faster,
especially if the drive has only been out for a few hours.

Personalities : [raid6] [raid5] [raid4]
md13 : active raid6 sdi3[9] sdg3[6] sdf3[12] sde3[10] sdd3[1] sdc3[5] sdb3[7]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md14 : active raid6 sdi4[11] sdg4[6] sdf4[9] sde4[10] sdb4[7] sdd4[1] sdc4[5]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

md15 : active raid6 sdi5[11] sdg5[8] sdf5[9] sde5[10] sdb5[7] sdd5[1] sdc5[5]
      3612623360 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk

md16 : active raid6 sdi6[9] sdg6[7] sdf6[11] sde6[10] sdb6[8] sdd6[1] sdc6[5]
      3615495680 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk
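
Roughly how one of those arrays gets created, and what one of the
four separate grow steps looks like, assuming the same device names
as above and the VG/LV names from the earlier sketch (a sketch, not
my exact command history):

  # one RAID6 per partition number, with a write-intent bitmap
  mdadm --create /dev/md13 --level=6 --raid-devices=7 --chunk=512 \
        --bitmap=internal \
        /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdi3
  # after every member of md13 has been moved to a bigger partition,
  # grow just that one array, then its PV; the other three arrays
  # are grown the same way, one at a time
  mdadm --grow /dev/md13 --size=max
  pvresize /dev/md13
  lvextend -l +100%FREE /dev/myth_vg/recordings   # once all four PVs are grown
  # a disk that was pulled and re-added resyncs quickly via the bitmap
  mdadm /dev/md13 --re-add /dev/sdd3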



On Sat, Aug 29, 2020 at 11:00 AM Roman Mamedov <rm@xxxxxxxxxxx> wrote:
On Sat, 29 Aug 2020 16:34:56 +0100
antlists <antlists@xxxxxxxxxxxxxxx> wrote:

On 28/08/2020 21:39, Ram Ramesh wrote:
One thing about LVM I am not clear on: given the choice between
creating a /mirror LV/ on a VG over simple PVs, and a /simple LV/
over raid1 PVs, which is the preferred method?  Why?
Simplicity says have ONE raid, with ONE PV on top of it.

The other way round, you need TWO SEPARATE (at least) PV/VG/LVs,
which you then stick a raid on top of.
I believe the question was not about the order of layers, but whether to
create a RAID with mdadm and then LVM on top, vs. abandoning mdadm and using
LVM's built-in RAID support instead:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_create
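
To make the two options concrete (sketched with example device and
volume group names; you would use one approach or the other, not
both):

  # mdadm RAID1 with LVM on top
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --bitmap=internal /dev/sda1 /dev/sdb1
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 100G -n data vg0
  cat /proc/mdstat                  # array status lives here

  # LVM's built-in RAID1, no mdadm involved
  pvcreate /dev/sda1 /dev/sdb1
  vgcreate vg0 /dev/sda1 /dev/sdb1
  lvcreate --type raid1 -m 1 -L 100G -n data vg0
  lvs -a -o +devices vg0            # status is queried through LVM instead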

Personally I hugely prefer mdadm, due to the familiar and convenient interface
of the program itself, as well as of /proc/mdstat.

--
With respect,
Roman
Roger,

   Good point about breaking up the disk into partitions and building
same-numbered partitions into a raid volume.  Do you recommend this
procedure even if I only do raid1?  I am afraid to make raid6 over
4x14TB disks.  I want to keep rebuilds simple and not thrash the
disks each time I (have to) replace one.  Even if I split into 3TB
partitions, when I replace one disk all of its partitions will
rebuild, and it will be a seek festival.  I am hoping the simplicity
of raid1 will be a better fit when the expected interval between
UREs is smaller than a single disk's capacity.  I like the +2
redundancy of raid6 over raid1's +1 (I am not doing raid1 over 3
disks, as I feel that is a huge waste).
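
To be concrete about what I am considering (just a sketch; the
device names and the four-way partition split are placeholders, not
a final plan):

  # 4x14TB disks as two raid1 pairs, each disk split into four
  # ~3.5TB partitions; same-numbered partitions on a pair form one
  # raid1 with a write-intent bitmap
  mdadm --create /dev/md21 --level=1 --raid-devices=2 \
        --bitmap=internal /dev/sda1 /dev/sdb1
  mdadm --create /dev/md22 --level=1 --raid-devices=2 \
        --bitmap=internal /dev/sda2 /dev/sdb2
  # ... same for the remaining partitions and for the sdc/sdd pair ...
  pvcreate /dev/md21 /dev/md22
  vgcreate media_vg /dev/md21 /dev/md22   # plus the rest of the arrays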

Regards
Ramesh


