Kenneth Goodwin wrote:
>> ... kernel couldn't recognize a HW-RAID volume of large
>> 1.6TB size...
> You were probably dealing with a 32-bit kernel with 512-byte
> blocks and signed 32-bit integers for storing block numbers;
> the maximum filesystem size would be around a terabyte.
> You could obviously pop it up to a larger block size (2048)
> to get up to around 4 TB, but you would still have a 2 GB
> filesize limit.
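(The device-size arithmetic there checks out: 2^31 block numbers times the block size. A quick, purely illustrative sketch of the numbers, assuming a signed 32-bit block counter:)

    #include <stdio.h>

    int main(void)
    {
        /* 2^31 - 1: the largest block number a signed 32-bit counter can hold */
        long long max_blocks = 2147483647LL;
        long long block_sizes[] = { 512, 1024, 2048, 4096 };

        for (int i = 0; i < 4; i++) {
            long long bytes = max_blocks * block_sizes[i];
            printf("block size %4lld -> max device/filesystem ~%lld GB\n",
                   block_sizes[i], bytes / (1024LL * 1024 * 1024));
        }
        return 0;
    }

That gives roughly 1 TB at 512-byte blocks and roughly 4 TB at 2048, which matches the figures above.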
Really? Right now with ext3fs on RHAS-2.1 I don't have a
2 GB filesize limit; I can create 10 or 15 GB files without any
problems.
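(For anyone who wants to check that on their own filesystem, this is roughly how I'd test it: build with large-file support and write past the 2 GB mark. Just a sketch; the path is made up.)

    #define _FILE_OFFSET_BITS 64   /* ask glibc for a 64-bit off_t */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* path is made up -- point it at the filesystem you want to test */
        int fd = open("/mnt/bigfs/testfile", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* seek 3 GB in and write one byte; this fails (EFBIG/EOVERFLOW)
           if the kernel, libc, or filesystem caps files at 2 GB */
        off_t target = 3LL * 1024 * 1024 * 1024;
        if (lseek(fd, target, SEEK_SET) == (off_t)-1 || write(fd, "x", 1) != 1) {
            perror("large-file test");
            close(fd);
            return 1;
        }
        printf("wrote past 2 GB without complaint\n");
        close(fd);
        return 0;
    }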
But I did have the approx. 1 TB DEVICE limit.
It wasn't a matter of creating a large filesystem; it was
a matter of even seeing the device as being that big.
Before I could run mkfs I had to have a device, and the
kernel didn't even see the SCSI device.
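(For what it's worth, one way to see exactly what size the kernel is reporting for a block device is to ask it directly with the BLKGETSIZE64 ioctl. A rough sketch; /dev/sdb is just a placeholder for the RAID volume's device node.)

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>     /* BLKGETSIZE64 */

    int main(void)
    {
        /* placeholder device node -- substitute your RAID volume */
        int fd = open("/dev/sdb", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        uint64_t bytes = 0;
        if (ioctl(fd, BLKGETSIZE64, &bytes) < 0) {
            perror("BLKGETSIZE64");
            close(fd);
            return 1;
        }
        printf("kernel sees %llu bytes (~%llu GB)\n",
               (unsigned long long)bytes,
               (unsigned long long)(bytes >> 30));
        close(fd);
        return 0;
    }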
> or upgrade to a 64-bit version of the kernel, etc.
Do you have successful experience with 64-bit versions of Linux
and creating a large disk volume or filesystem? I could set up
an Opteron-based system with 64-bit RHEL. That is possible, but
I wouldn't want to bother doing it unless I had some hope that
it would solve some of these problems.
> In any case, you should take, IMHO, a different tactic here.
> You are being too complex in your approach to the problem.
> It actually leans more towards a circular ring-buffer design,
> which would be simpler to manage.
Yeah, I have already thought about that; I am trying to avoid it.
That's why I was asking if anyone had experience with large
filesystems and also compressed filesystems.
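(For anyone following along, this is roughly what I take the circular-buffer suggestion to mean: a fixed-size file, or raw device, treated as a wrap-around log, so no huge filesystem is needed in the first place. A minimal sketch; the file name, capacity, and record format are all made up.)

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define RING_BYTES (64LL * 1024 * 1024)  /* made-up fixed capacity */

    /* Append a record to a fixed-size file, wrapping to offset 0 when the
       end is reached.  Assumes each record is smaller than the buffer; in
       real use 'pos' would be persisted so it survives restarts. */
    static off_t ring_append(int fd, off_t pos, const char *buf, size_t len)
    {
        size_t room = (size_t)(RING_BYTES - pos);   /* space before the wrap */
        size_t first = len < room ? len : room;

        if (pwrite(fd, buf, first, pos) < 0)
            perror("pwrite");
        if (first < len && pwrite(fd, buf + first, len - first, 0) < 0)
            perror("pwrite (wrap)");

        return (off_t)((pos + (off_t)len) % RING_BYTES);
    }

    int main(void)
    {
        int fd = open("ringlog.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        ftruncate(fd, RING_BYTES);                  /* pre-size the buffer */

        off_t pos = 0;
        const char *rec = "sample record\n";
        for (int i = 0; i < 5; i++)
            pos = ring_append(fd, pos, rec, strlen(rec));

        close(fd);
        return 0;
    }

The appeal is that the on-disk footprint is bounded up front, which sidesteps the >1 TB device problem entirely, but it does mean managing your own record layout instead of leaning on the filesystem.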