Dan Carl wrote:
I know a RAID 0 is a stripe.
It's my swap partition.
Why would I need fault tolerance on my swap?
Things can get weird if swap is on a failed device, or if the device it's on
fails while the system is running. I went the opposite route:
[root@fraud ~]# df
Filesystem    1K-blocks      Used  Available Use% Mounted on
/dev/md5        2877756    184776    2546796   7% /
/dev/md3         248783     12014     223925   6% /boot
none             387852         0     387852   0% /dev/shm
/dev/md2       37057024   2122108   33052508   7% /home
/dev/md0       10317752   3161284    6632356  33% /usr
/dev/md1       10317752   1177868    8615772  13% /var
/dev/hdh1       9621848        24    9133048   1% /stage
/dev/hdh2     147945308  60761372   79668732  44% /share
/dev/md4 is swap. /stage is my Amanda staging area, and /share is for
large, replaceable files (movies, ISO images, etc.). They are mounted
from rc.local, so even if that drive fails, the system still comes up;
roughly like the sketch below.
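Something along these lines, a sketch only: mounting here instead of in
/etc/fstab means a dead /dev/hdh can't hang or abort the boot, and the
logger tags are just illustrative.

# /etc/rc.d/rc.local (excerpt) -- sketch, using the /dev/hdh partitions above.
# A failed mount is only logged; boot continues without /stage or /share.
mount /dev/hdh1 /stage || logger -t rc.local "/stage mount failed, skipping"
mount /dev/hdh2 /share || logger -t rc.local "/share mount failed, skipping"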
I tested by shutting the box down, pulling a ribbon cable off of a drive,
and then booting the system. Lather, rinse, repeat. Any one drive can
die and the system still runs. The system has also survived a real drive
failure: I got the e-mail from mdadm and just had to figure out which
drive in the RAID-1 pair had died.
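For anyone in the same spot, the cleanup looks roughly like this; the
array and partition names below are hypothetical, substitute whatever
mdadm's mail and /proc/mdstat actually point at:

# See which array is degraded and which member is marked (F)ailed
cat /proc/mdstat
mdadm --detail /dev/md1

# Drop the failed member, swap the disk, then add the new partition back
mdadm /dev/md1 --remove /dev/hdg2   # device name hypothetical
mdadm /dev/md1 --add /dev/hdg2
# Watch /proc/mdstat while the mirror resyncs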
Cheers,
Dave
--
Politics, n. Strife of interests masquerading as a contest of principles.
-- Ambrose Bierce