Re: Suggestions for partition




[Repost - for some reason my reply from earlier this morning did not go through]

Thanks everyone for all your suggestions/comments.

>Ask yourself this question: Does the company lose money when the build system
>is down for restore?  How much? How long does a restore take?

No, no money lost.  If I keep a spare drive, it should take less than an hour to restore the system.

> Mirroring disks is not a replacement for backup. It is a way to improve
> availability of a system (no downtime when a disc dies), so it might even be
> interesting when there is no important data on the machine. If this is
> important for you use RAID-1 for the entire discs.

That would waste the most disk space, but it is certainly a possibility.

> If decreased availability is not a problem for you (you can easily afford a
> day of downtime when a disc dies) use RAID-0 for the entire discs. It will
> give you a nice performance boost. Especially on a build host people will
> love the extra performance of the disc array.

But if either disk dies, the whole system is unusable.  I don't think I will use this option.

> A combination of RAID-0 and RAID-1 may also be an option: Make a small RAID-1
> partition for the operating system (say 20GB) and a big RAID-0 partition for
> the data. This way you will get maximum performance on the data partition,
> but when a disc dies you do not need to reinstall the operating system. Just
> put in a new disc, let the RAID-1 rebuild itself in the background and
> restore your data. This can reduce the downtime (and the amount of work for
> you) when a disc dies considerably.

Hmm, this sounds like a possibility.  I have to figure out how to do this (I haven't used HW RAID before).
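For the software-RAID equivalent of this layout, a minimal sketch with mdadm might look like the following; the device names (/dev/sda, /dev/sdb) and the ~20GB split are assumptions, not part of the original suggestion:

```shell
# Assumes two identical discs, /dev/sda and /dev/sdb, each already
# partitioned with a ~20GB first partition and the rest in a second one.

# RAID-1 mirror for the operating system (survives a single disc failure)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID-0 stripe for the build data (fast, but lost if either disc dies)
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# Filesystems: / on the mirror, the build area on the stripe
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
```

These commands need root and real discs, so treat them as a sketch to adapt rather than something to paste in; with a hardware controller the same split would instead be configured in the controller's BIOS utility.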

> HW vs SW RAID: Kind of a religious question. HW has some advantages when using
> RAID-5 or RAID-6 (less CPU load). When using RAID-0 or RAID-1 there should
> not be any difference performance wise. HW RAID gives you some advantages in
> terms of handling, i.e. hotplugging of discs, nice administration console,
> RAID-10 during install ;-), etc. It's up to you to decide whether it is worth
> the money. Plus you need to find a controller that is well supported in
> Linux.

Does anyone know if the RAID controller that comes in an IBM x3550 is supported on CentOS 4 & 5?  I assume that it is.

> P.s. Putting lots of RAM into the machine (for the buffer cache) has more
> impact than RAID-0 in my experience. Of course that depends on your
> filesystem usage pattern.

The system has 4GB.

> P.p.s. Creating one swap partition on each disc is correct, because swapping
> to RAID-0 is useless. Only if you decide to use RAID-1 for the whole disc
> should you also swap to RAID-1.

Will do.
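For reference, giving both swap partitions the same priority in /etc/fstab makes the kernel interleave pages across them, which is why a separate RAID-0 for swap buys nothing; the partition names here are assumptions:

```shell
# /etc/fstab fragment: equal pri= values make the kernel stripe
# swap pages across both discs (assumed partitions sda3 and sdb3)
/dev/sda3  swap  swap  defaults,pri=1  0 0
/dev/sdb3  swap  swap  defaults,pri=1  0 0
```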

> Three raid1 sets:
> 
> raid1 #1 = /
> raid1 #2 = swap
> raid1 #3 = rest of disk on /home
> 
> for the simple fact that a dead disk won't bring down your system and halt your
> builds until you rebuild the machine.

Yes, I like that.
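That three-mirror layout could be sketched with mdadm as below; again the disc and partition names are assumptions:

```shell
# Three RAID-1 mirrors across /dev/sda and /dev/sdb (assumed names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # /home

mkswap /dev/md1
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md2
```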

> But if you really only care about max speed and are not worried about crashes &
> their consequences, then replace the raid1 with raid0.

I like the earlier suggestions on combining RAID0 and RAID1.

> I have no reason for using LVM on boot/OS/system partitions. If I have something
> that fills the disk that much, I move it to another storage device. In your case,
> striped LVM could be used instead of raid0.

That's why I can't decide what the best approach is.  So many different ways to skin this cat.
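For completeness, the striped-LVM alternative mentioned above might look like this; the partition names, volume group name, and stripe size are all assumptions:

```shell
# Striped LVM in place of RAID-0 (assumed data partitions sda2/sdb2)
pvcreate /dev/sda2 /dev/sdb2
vgcreate buildvg /dev/sda2 /dev/sdb2

# -i 2 stripes the volume across both physical volumes,
# -I 64 uses a 64KB stripe size; fill all remaining space
lvcreate -i 2 -I 64 -l 100%FREE -n home buildvg
mkfs.ext3 /dev/buildvg/home
```

The practical difference from md RAID-0 is mainly management: LVM lets you resize or add discs later, at the cost of one more layer to understand when something breaks.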

Thanks,
Alfred

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
