Re: md-linear accidental(?) removal, removed significant(?) use case?

On 02/01/2025 at 13:16, Roman Mamedov wrote:

> I fully support keeping md-linear for users with existing deployments.

Of course. Breaking existing setups is bad.

> Wanted to only ask out of curiosity, did you try using md-raid0 for the same
> scenario?
>
> It can use different sized devices in RAID0. In case of two disks it will
> stripe between them over the matching portion of the sizes, and then the tail
> of the larger device will be accessed in a linear fashion. Not sure it can
> handle 3 or more in this manner, will there be multiple steps of the striping,
> each time with a smaller number of the remaining larger devices (but would not
> be surprised if yes).

Yes. If I remember correctly, md-raid0 divides the disks into as many zones as there are distinct disk sizes. The first zone covers an area equal to the size of the smallest disk(s) on every disk, and the last zone covers the remaining area on the biggest disk(s).
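To illustrate, here is a quick sketch of the zone layout as I understand it. It is pure arithmetic, no mdadm involved, and the sizes are made up:

```shell
# Print "zone_width stripe_count" for each md-raid0 zone, given the
# member disk sizes. A new zone starts whenever the smallest remaining
# disk runs out of space.
raid0_zones() {
    for s in "$@"; do echo "$s"; done | sort -n | awk '
        { sizes[NR] = $1 }
        END {
            offset = 0
            for (i = 1; i <= NR; i++) {
                if (sizes[i] == offset) continue   # same size as the previous boundary
                width = sizes[i] - offset          # extent covered by this zone
                stripes = NR - i + 1               # disks that still have capacity
                print width, stripes
                offset = sizes[i]
            }
        }'
}

# Two 100 GiB disks plus one 300 GiB disk:
# zone 1 stripes 100 GiB across all three disks,
# zone 2 is the 200 GiB tail of the big disk, accessed alone.
raid0_zones 100 100 300
```

So with your two-disk example there are exactly two zones, and each additional distinct size adds one more.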

> Given that a loss of one device in md-linear likely means complete data loss
> anyway (relying on picking up pieces with data recovery tools is not a good
> plan), seems like using md-raid0 here instead would have no downsides but
> likely improve performance by a lot.

A downside is that adding a disk to a RAID0 array requires a reshape.
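Something like this, if I remember the invocation correctly (device names are placeholders; please check mdadm(8) before relying on it):

```shell
# Growing a two-disk RAID0 to three disks is a full reshape; mdadm
# restripes all existing data (internally converting through RAID4).
mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdd1
cat /proc/mdstat    # watch the reshape progress; it can take hours

# For comparison, on an md-linear array the same --grow --add simply
# appended the new disk at the end, with no data movement at all.
mdadm --grow /dev/md1 --add /dev/sdd1
```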

> Aside from all that, the "industry" way to do the task of md-linear currently
> would be a large LVM LV over a set of multiple PVs in a volume group.

I fully agree. LVM adds some complexity but also provides much more flexibility: you cannot hot-swap, resize, or remove a disk in an md-linear array.
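Roughly, the LVM equivalent plus the operations md-linear cannot do would look like this (device and volume names are placeholders):

```shell
# Concatenate two disks into one large linear LV, like md-linear:
pvcreate /dev/sdb /dev/sdc
vgcreate datavg /dev/sdb /dev/sdc
lvcreate -n data -l 100%FREE datavg
mkfs.ext4 /dev/datavg/data

# Grow online later by adding a disk:
vgextend datavg /dev/sdd
lvextend -r -l +100%FREE datavg/data   # -r also resizes the filesystem

# Evacuate and remove a disk (needs enough free extents elsewhere):
pvmove /dev/sdb
vgreduce datavg /dev/sdb
```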
