Re: 3-way mirror to RAID-6

2017-12-28 13:34 GMT+01:00 Phil Turmel <philip@xxxxxxxxxx>:
> I've never put a mail archive on raid6.  I'd be concerned.  It's
> basically a random access workload with small reads and writes.  That is
> a recipe that'll maximize the write-amplification slowdown of raid6.

OK, just to stay on the safe side, this is the current 3-way raid1:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Jul 25 12:55:48 2016
     Raid Level : raid1
     Array Size : 488382841 (465.76 GiB 500.10 GB)
  Used Dev Size : 488382841 (465.76 GiB 500.10 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jan  8 12:26:43 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : mail:0  (local to host mail)
           UUID : b2a5ed53:42890b73:dc6de22a:1ac12524
         Events : 30658

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1



# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md0   vg1  lvm2 a-   465,76g    0

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg1    1   6   0 wz--n- 465,76g    0


# lvs
  LV         VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv_dovecot vg1  -wi-ao  10,00g
  lv_log     vg1  -wi-ao  15,00g
  lv_mail    vg1  -wi-ao 410,76g
  lv_root    vg1  -wi-ao  20,00g
  lv_swap    vg1  -wi-ao   8,00g
  lv_tmp     vg1  -wi-ao   2,00g


I want to replace all disks with 2TB SAS disks.

So, what is the best procedure to increase the total usable
space without losing redundancy while the system is running?

I would like some confirmation from the skilled users on this ML.
On my own, I would replace one disk at a time (fail+remove+add) and
wait for the resync to complete, then resize the array:

mdadm --grow /dev/md0 --bitmap none
mdadm --grow /dev/md0 --size=max

wait for it to finish
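The per-disk replacement step above could be sketched like this (device
names are placeholders; I've used a hypothetical /dev/sdd1 for the new
partition). One hedge worth noting: between --fail and the end of the
resync the array runs with one mirror less, whereas `mdadm --replace`,
where your mdadm/kernel support it, copies to the new device while all
three mirrors stay active:

```shell
#!/bin/sh
# Sketch of one fail+remove+add cycle; device names are placeholders.
# Set ECHO_ONLY=1 to preview the commands instead of running them.
# Note: "mdadm /dev/md0 --add $new; mdadm /dev/md0 --replace $old"
# would keep full redundancy during the copy, if --replace is available.
run() { if [ -n "$ECHO_ONLY" ]; then echo "$*"; else "$@"; fi; }

replace_disk() {
    old=$1 new=$2
    run mdadm /dev/md0 --fail "$old"
    run mdadm /dev/md0 --remove "$old"
    # swap the disk, partition it at least as large as the old one, then:
    run mdadm /dev/md0 --add "$new"
    # wait here for /proc/mdstat to show the resync finished
    # before starting on the next disk.
}

# preview the first cycle (hypothetical new partition /dev/sdd1):
ECHO_ONLY=1 replace_disk /dev/sda1 /dev/sdd1
```

Repeat the cycle once per disk, letting each resync finish before the
next --fail, so only one mirror is ever missing at a time.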

then: pvresize /dev/md0, lvresize (each LV), resize2fs (each FS)
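The post-grow chain could look like the sketch below. The choice to push
all new space into lv_mail is only an example (adjust LV names and sizes
to taste), and it also puts back the write-intent bitmap that was removed
before the grow:

```shell
#!/bin/sh
# Sketch of the post-grow steps; ECHO_ONLY=1 previews the commands.
ECHO_ONLY=1
run() { if [ -n "$ECHO_ONLY" ]; then echo "$*"; else "$@"; fi; }

# put back the write-intent bitmap removed before the grow:
run mdadm --grow /dev/md0 --bitmap internal

# let LVM see the bigger PV:
run pvresize /dev/md0

# grow an LV; here all new space goes to lv_mail as an example
# (lvresize -r would also run the filesystem resize for you):
run lvresize -l +100%FREE /dev/vg1/lv_mail

# grow the filesystem; resize2fs can grow a mounted ext3/ext4 online:
run resize2fs /dev/vg1/lv_mail
```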

Is this OK?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


