Re: RAID5 with two drive sizes question

On Tue, 05 Jun 2012 19:27:53 +0200
"Joachim Otahal (privat)" <Jou@xxxxxxx> wrote:

> Hi,
> Debian 6.0.4 / superblock 1.2
> sdc1 = 1.5 TB
> sdd1 = 1.5 TB (cannot be used during --create, still contains data)
> sde1 = 1 TB
> sdf1 = 1 TB
> sdg1 = 1 TB
> 
> Target: RAID5 with 4.5 TB capacity.
> 
> The normal case would be:
> mdadm -C /dev/md3 --bitmap=internal -l 5 -n 5 /dev/sdc1 /dev/sdd1 
> /dev/sde1 /dev/sdf1 /dev/sdg1
> What I expect: since the first and the second drive are 1.5 TB in size,
> the third, fourth and fifth drives are treated like 2*1.5 TB, creating a
> 4.5 TB RAID.

Lolwhat.

> What would really be created? I know there are people here who know and
> don't guess :).

A 5x1TB RAID5, i.e. about 4 TB usable. The smallest device size across
all RAID members is what gets used on every member, so the extra 0.5 TB
on each 1.5 TB drive would simply go unused by that array.

But what you do after that is also create a separate 2x0.5 TB RAID1 from
the 1.5 TB drives' "tails", and join both arrays into a single larger
volume using LVM; a rough sketch follows.
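
Something like this (only a sketch: it assumes you repartition the 1.5 TB
drives into a 1 TB partition plus a 0.5 TB "tail" partition; sdc2/sdd2,
md4 and the vg/lv names below are just placeholders, adjust to taste):

# 5 x 1 TB RAID5 across all five drives (~4 TB usable).
# Since sdd1 still holds data, you can write "missing" in its slot here
# and --add the real partition once the data has been copied off.
mdadm -C /dev/md3 --bitmap=internal -l 5 -n 5 \
    /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

# 2 x 0.5 TB RAID1 from the tails of the 1.5 TB drives (~0.5 TB usable).
mdadm -C /dev/md4 --bitmap=internal -l 1 -n 2 /dev/sdc2 /dev/sdd2

# Join both arrays into one LVM volume group and logical volume.
pvcreate /dev/md3 /dev/md4
vgcreate vg_big /dev/md3 /dev/md4
lvcreate -l 100%FREE -n lv_big vg_big
mkfs.ext4 /dev/vg_big/lv_big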

The result: 4.5 TB of usable space, with one-drive-loss tolerance (provided by
RAID5 in the first 4 TB, and by RAID1 in the 0.5TB "tail").

-- 
With respect,
Roman

~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Stallman had a printer,
with code he could not see.
So he began to tinker,
and set the software free."


