Big Filesystem and Compressed Filesystem?

(see below for the long background story)

The last time I created a large hardware RAID5 volume (1.6 TB), the kernel was unable to see all of it. If I create several smaller block devices (around 400 GB each), can LVM bind them together into a single larger filesystem? (I am aiming for 4-6 TB.)
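Something like the following is what I have in mind (a rough sketch only; the device names and the billing_vg / archive_lv names are placeholders, and I haven't tested this on hardware this size):

```shell
# Mark each ~400 GB block device as an LVM physical volume
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Bind them into one volume group
vgcreate billing_vg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Carve one logical volume spanning all free space in the group
lvcreate -l 100%FREE -n archive_lv billing_vg

# Put a single large filesystem on top
mkfs.ext3 /dev/billing_vg/archive_lv
```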

Is there any way to create a BIG, robust, random read/write filesystem that is transparently compressed and supports large files (up to several gigabytes each)?

I have looked at cramfs and squashfs; both have sub-2GB file size limits and don't allow random read/write access, so they don't fit my needs. Please don't respond saying "just gzip each file" -- I already thought of that.

Background info:.....................................
My employer has large amounts (10 GB/day) of telecommunications billing data that in its raw form consists of BIG (1-2 GB each) flat ASCII text files with fixed record formats and carriage returns. We do incremental tape backups every night and weekly full backups that go offsite. We currently keep 3 months of data online, and this is fine for any urgent issues.

Unfortunately, every few months a customer (or the IRS or a local tax agency) will challenge some accounting information and we have to dig up several months of data from tape. This is very time consuming. The raw billing data will *always* continue to be backed up to tape and sent offsite. However, because restoring old tapes from the offsite archive is such a resource drain, I have asked if we could keep an "online" archive locally. The answer, of course, is yes, but only if it is CHEAP.

This raw billing data compresses at a rate of 20:1 or 30:1; it is just fixed records, most of which are ASCII spaces or the digits 0-9. So if I could find a compressed filesystem solution that would handle this, it would be great: I could get an IDE or SATA disk array with hardware RAID 5, have a huge (4 or 6 TB) filesystem, and with transparent compression (even at only 5:1) that would hold several years' worth of data. Even if disk I/O were very slow, it would still be faster than offsite tape. And it would be immensely easier!
..................................
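To spell out the capacity math (using my own figures above, and assuming a pessimistic 5:1 ratio and a 6 TB array; 1 TB taken as 1000 GB):

```shell
# Assumed inputs: 10 GB/day raw data, 5:1 compression, 6 TB array
RAW_GB_PER_DAY=10
RATIO=5
ARRAY_TB=6

COMPRESSED_GB_PER_DAY=$(( RAW_GB_PER_DAY / RATIO ))          # 2 GB/day after compression
DAYS=$(( (ARRAY_TB * 1000) / COMPRESSED_GB_PER_DAY ))        # 6000 GB / 2 GB/day = 3000 days
YEARS=$(( DAYS / 365 ))                                      # roughly 8 years of data online
echo "$YEARS"
```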

--
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list
