Re: >16TB RAID0

On Thu, Jul 16, 2009 at 8:59 PM, NeilBrown <neilb@xxxxxxx> wrote:
> I'm not 100% sure, but a quick look at the code suggests that
> 16TB is the upper limit for normal read/write operations on
> a block device like /dev/md0 on a 32bit host.  This is because it uses
> the page cache, and that uses 4K pages with a 32bit index, hence
> 16TB.

Yep, you're right.  Sorry, I should have looked into that more closely
first.  I was assuming LBD took care of that somehow, because the
kernel docs just say that LBD allows block devices to go over 2TB and
don't mention the 16TB upper limit.
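
For reference, here's a minimal sketch of that arithmetic (plain C,
just to spell out the numbers; the variable names are mine, not
kernel symbols): a 32-bit page index at 4K per page works out to 16TB.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* 32-bit host: the page cache index is a 32-bit unsigned long */
	uint64_t max_pages = (uint64_t)1 << 32;
	uint64_t page_size = 4096;		/* 4K pages */
	uint64_t max_bytes = max_pages * page_size;

	printf("page cache can address %llu bytes (%llu TiB)\n",
	       (unsigned long long)max_bytes,
	       (unsigned long long)(max_bytes >> 40));
	return 0;
}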

> However this doesn't explain why it seems to work for RAID5.  If I
> am right, RAID5 should fail in the same way as RAID0.
> But I would certainly expect RAID0 to work if anything does.
>

And again you're correct.  It eventually failed the comparison on
RAID5 too, though for some reason it failed in a very different way
than it did on RAID0.  The same disk set on an x86_64 machine appears
to be working fine.  Sorry for the false alarm.  So I guess now my
question is: should the kernel and/or mdadm refuse to create or run a
>16TB array on a 32-bit kernel? :)
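
To make that question concrete, something like the following check at
create/assemble time is roughly what I have in mind (purely a
hypothetical sketch in C, not actual mdadm or kernel code; the names
are made up):

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

#define PAGE_SHIFT 12	/* assuming 4K pages */

/* Hypothetical: return nonzero if the array is larger than the page
 * cache can address on this host (2^32 pages * 4K = 16TB on 32-bit). */
static int array_exceeds_pagecache_limit(uint64_t array_bytes)
{
	if (sizeof(unsigned long) * CHAR_BIT != 32)
		return 0;	/* 64-bit host, no 16TB limit */
	return array_bytes > ((uint64_t)1 << (32 + PAGE_SHIFT));
}

int main(void)
{
	uint64_t array_bytes = (uint64_t)20 << 40;	/* e.g. a 20 TiB array */

	if (array_exceeds_pagecache_limit(array_bytes))
		fprintf(stderr, "warning: >16TB array on a 32-bit host\n");
	return 0;
}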

Thanks much for your help!
-Justin
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
