Re: [Linux-cluster] GFS 2Tb limit

On Wed, Sep 01, 2004 at 12:31:41PM +0100, Stephen Willey wrote:
> There was a post a while back asking about 2Tb limits and the consensus 
> was that with 2.6 you should be able to exceed the 2Tb limit with GFS.  
> I've been trying several ways to get GFS working including using 
> software raidtabs and LVM (separately :) ) and every time I try to use 
> mkfs.gfs on a block device larger than 2Tb I get the following:
> Command: mkfs.gfs -p lock_dlm -t cluster1:gfs1 -j 8 /dev/md0
> Result: mkfs.gfs: can't determine size of /dev/md0: File too large
> (/dev/md0 is obviously something different when using LVM or direct 
> block device access)
> Does anyone have a working GFS filesystem larger than 2Tb (or know how 
> to make one)?
> Without being able to scale past 2Tb, GFS becomes pretty useless for us...
> Thanks for any help,

Either your utility is not opening the file with O_LARGEFILE or an
O_LARGEFILE check has been incorrectly processed by the kernel. Please
strace the utility and include the compressed results as a MIME
attachment. Remember to compress the results, as most MTAs will reject
messages of excessive size, in particular, mine.


-- wli

