Re: MDADM grow /dev/md0 - chunk size

On Sun, Jan 15 2017, J. Cassidy wrote:

> Hello all/Neil,
>
>
>
>
> I am trying to change the chunk size on a RAID 0 (two SSD) from 512K to 64K.
>
> I am running Debian Stretch with a 4.10 kernel.
>
> MDADM version is 4.0 (GIT).
>
> This is the command string being issued -
>
> mdadm --grow -c 64 --backup-file=/zz/backup.file /dev/md0
>
> or
>
> mdadm --grow -c 64  /dev/md0
>
> both of the abovementioned commands produce this message -
>
>
> "mdadm: /dev/md0: could not set level to raid4"
>
>
> A snippet from dmesg -
> .
> .
> md/raid:md0: cannot takeover raid0 with more than one zone.
> md: md0: raid4 would not accept array

Your two partitions that form the RAID0 array are different sizes.
This causes raid0 to create 2 zones, one which covers all of the smaller
partition and an equal portion of the larger partition, and one which
covers the remainder of the larger partition.

raid4 does not have a similar concept of zones, so it is not possible to
convert the raid0 into a degraded raid4.
raid0 does not support chunk-size changes (or any changes) directly.
These are performed by transforming the RAID0 to RAID4 and having the
raid4 module perform the change.

The consequence of all this: sorry, you cannot change the chunk
size of the array.
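
If the 64K chunk really matters to you, the only practical route I can
see is to back up the data and re-create the array with the chunk size
you want.  Roughly (double-check the device names against your own
setup, and note that --create throws away the existing data layout, so
a verified backup comes first):

    # back up everything on /dev/md0, then:
    mdadm --stop /dev/md0
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 \
          /dev/nvme0n1p2 /dev/nvme1n1p1
    # make a new filesystem on /dev/md0 and restore the backup

If the two component partitions were the same size, the array would
have a single zone, and a later reshape via the raid4 takeover would in
principle be possible.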

And... please don't send nag emails so soon - it was barely more than
24 hours after the original.  This just comes across as rude and
impatient.  People have other commitments.
My rule of thumb is to wait at least a week before resending - and then
resend the full text of the original.  Your nag email was not only too
soon, but contained no detail and so was useless.

NeilBrown


> .
> .
>
> My MDADM setup -
>
>
> mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Sat Jan 14 16:51:54 2017
>      Raid Level : raid0
>      Array Size : 497783808 (474.72 GiB 509.73 GB)
>    Raid Devices : 2
>   Total Devices : 2
>     Persistence : Superblock is persistent
>
>     Update Time : Sat Jan 14 16:51:54 2017
>           State : clean
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>
>      Chunk Size : 512K
>
>            Name : Pezenas:0  (local to host Pezenas)
>            UUID : 77cd6f4e:f98bf2b0:862948df:12da38fa
>          Events : 0
>
>     Number   Major   Minor   RaidDevice State
>        0     259        4        0      active sync   /dev/nvme0n1p2
>        1     259        2        1      active sync   /dev/nvme1n1p1
>
>
> I recall doing something similar a few years ago and it worked, though not using
> NVME drives.
>
>
> Any help/pointers much appreciated.
>
>
>
>
> Regards,
>
>
>
> John
>
>
>
>
>
>
> John Cassidy
>
> Obere Bühlstrasse 21
> 8700 Küsnacht (ZH)
> Switzerland / Suisse / Schweiz
>
>
> Mobile:    +49  152 58961601 (Germany)
> Mobile:    +352 621 577 149  (Luxembourg)
> Mobile:    +41  78 769 17 97 (CH)
> Landline:  +41  44 509 1957
> Mobile email: mobile@xxxxxxxxxxxx
>
> http://www.jdcassidy.eu
>
> "Aut viam inveniam aut faciam" - Hannibal.
