Hello,

It started as the subject said: the goal was to replace all 10 disks in a RAID6.

Context and perceived constraints:
- soft raid (no IMSM or DDF containers)
- multiple partitions per disk; partitions across the 10 disks form the RAID6
- downtime is not an issue
- minimize the number of commands
- minimize disk stress
- reduce the time spent on this process
- difficult to add 10 spares at once in the rig
- after a reshape/grow from 6 to 10 disks, the data offset of the raid members was all over the place, from roughly 10k to 200k sectors

Approaches/solutions and critique:

1 - add a spare one at a time and 'replace' one raid member at a time
critique:
- seems to me a long and tedious process
- cannot/will not run in parallel

2 - add all the spares at once and perform 'replace' on the members (sketch in the P.S.)
critique:
- just tedious
- lots of cli commands, which is prone to mistakes

The next ones assume I have all the spares in the rig.

3 - create a new array on the spares, make a fresh fs and copy the data (sketch in the P.S.)

4 - dd/ddrescue each old drive onto a new one (sketch in the P.S.)
advantage: can be done one by one or in parallel, fewer commands in the terminal

In the end I decided to go with route (3):
- flexibility at creation time
- copy only what I need
- the old array remains as a sort of backup

Question, just for my curiosity, regarding (4), assuming the array is offline:
Besides being not recommended with IMSM/DDF containers, which (as far as I understood) keep some metadata tied to the hardware itself, in the case of pure soft raid is there anything technical or safety related that prevents a 'dd' copy of a physical hard drive from acting exactly as the original?

Thanks
Red
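
P.S. To make the options concrete, here is the kind of command sequence I have in mind. These are untested sketches with made-up device names (/dev/md0 as the array, /dev/sd[a-j]1 the old member partitions, /dev/sd[k-t]1 the new ones), not the exact names on my box.

For (2), add all the new partitions as spares, then ask md to copy each member onto a chosen spare:

    # add the new partitions as spares to the running array
    mdadm /dev/md0 --add /dev/sdk1 /dev/sdl1 /dev/sdm1
    # mark each old member for replacement onto a specific spare
    mdadm /dev/md0 --replace /dev/sda1 --with /dev/sdk1
    mdadm /dev/md0 --replace /dev/sdb1 --with /dev/sdl1
    # (repeat for the remaining members)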
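
For (3), build a fresh array on the new disks, put a new filesystem on it and copy over only what is needed (ext4 and rsync here are just examples of my general idea):

    # new RAID6 across the partitions on the 10 new disks
    mdadm --create /dev/md1 --level=6 --raid-devices=10 /dev/sd[k-t]1
    mkfs.ext4 /dev/md1
    mount /dev/md1 /mnt/new
    # copy the data; the old array stays untouched as a fallback
    rsync -aHAX --info=progress2 /mnt/old/ /mnt/new/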
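
For (4), with the array stopped, clone each old drive onto its new counterpart, one pair per terminal if run in parallel:

    mdadm --stop /dev/md0
    # plain dd clone of one old disk onto one new disk
    dd if=/dev/sda of=/dev/sdk bs=64M status=progress
    # or ddrescue, which keeps a map file and retries around read errors
    ddrescue -f /dev/sda /dev/sdk sda-to-sdk.map
    # afterwards, with the old disks disconnected, reassemble from the clones
    mdadm --assemble --scan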