On Wed, Feb 9, 2022 at 3:12 AM Red Wil <redwil@xxxxxxxxx> wrote:
>
> Hello,
>
> It started as the subject said:
> - goal was to replace all 10 disks in a R6
> - context and perceived constraints:
>   - soft raid (no imsm and/or ddf containers)
>   - multiple partitions per disk; partitions across the 10 disks formed the R6
>   - downtime not an issue
>   - minimize the number of commands
>   - minimize disk stress
>   - reduce the time spent on this process
>   - difficult to add 10 spares at once in the rig
>   - after a reshape/grow from 6 to 10 disks, the data offset of the raid
>     members was all over the place, from ca. 10k sectors to 200k sectors
>
> Approaches/solutions and critique
> 1- add a 'spare' and 'replace' a raid member, one at a time
>    critique:
>    - seems to me a long and tedious process
>    - cannot/will not run in parallel
> 2- add all the spares at once and perform 'replace' on the members
>    critique:
>    - just tedious - lots of cli commands, which can be prone to mistakes
>    The next ones assume I have all the 'spares' in the rig:
> 3- create new arrays on the spares, make a fresh fs and copy the data
> 4- dd/ddrescue copy each drive to a new one. Advantage: can be done one
>    by one or in parallel, with fewer commands in the terminal.
>
> In the end I decided I will use route (3):
> - flexibility on creation
> - copy only what I need
> - old array is a sort of backup

When I did mine I did a combination of 3 and 2. I bought new disks that
were 2x the size of the devices in the original array, and partitioned
the new disks with a partition of the correct size for the old array.

I used 2 of the new disks to replace 2 disks that were not behaving, and
I used another new disk to replace a 3rd original device that was
behaving just fine. I then took that 3rd replaced device, added it to
the 3 new disk partitions, and created a 4-disk raid6 (3 new + 1
old/replaced device), and rearranged a subset of files from the original
array onto its own mount point on the new array.
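
Roughly, the partition-and-replace step looked like the sketch below.
Note that the device names, the 50% split point, and the array name are
placeholders for illustration only, not my actual layout, so adjust them
to your own setup:

    # Give each new (2x-size) disk one partition matching the old member
    # size, leaving the rest free for the new array (split is an example).
    parted -s /dev/sdX -- mklabel gpt \
        mkpart old-raid 1MiB 50% \
        mkpart new-raid 50% 100%

    # Add the matching partition as a spare, then hot-replace one member
    # of the existing array with it (repeat per disk being swapped out).
    mdadm /dev/md0 --add /dev/sdX1
    mdadm /dev/md0 --replace /dev/sdOLD1 --with /dev/sdX1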
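
And once a still-healthy old device had been freed that way, building the
new 4-device raid6 from the leftover partitions on the new disks plus that
old device was along these lines (again, device names, the md number, and
the ext4 choice are only examples):

    # 3 spare partitions on the new disks + the freed old device/partition
    mdadm --create /dev/md1 --level=6 --raid-devices=4 \
        /dev/sdX2 /dev/sdY2 /dev/sdZ2 /dev/sdOLD1

    # whatever filesystem you prefer, then mount and move the subset of
    # files from the original array onto the new one
    mkfs.ext4 /dev/md1
    mount /dev/md1 /mnt/newarray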