All, thank you for the responses. When I wrote the initial email I left out detail to keep things short, so let me address a few of the concerns and suggestions.

The construction of the "Disk Farm" will be as needed, starting with two 200G drives in RAID 1 that will be advertised over NFS and Samba, and two 160G drives in RAID 1 that will be advertised separately. I picked these up yesterday. In the future, RAID 1 pairs will be added as required and offered as individual resources that I will manage. Other members of my family tend to use disk space, and I need to control whose computer can consume what. I am by far the biggest culprit.

One other point, which I will test: in the event of a server or disk failure, I can disconnect the good drive, connect it to another system, and manually mount it to retrieve the data, instead of replacing the failed drive and rebuilding the RAID. This is why I am using RAID 1 and not RAID 5. Any comments on this are appreciated.

160G ATA drives can be found around here for $39.99 on sale, and 200G for $89 any day of the week. Each drive in a pair will be connected through separate USB ports on the server and USB hubs. This pushes the single point of failure back to the server on one end and the power supply in the disk farm on the other. I am using USB-to-IDE cables without individual power supplies. All of the drives will be mounted in a 2U case with a PC Power and Cooling supply, with an opto-isolator controlling supply turn-on slaved to the server's 5 volt supply. Unused outputs on the PC Power and Cooling supply that require a minimum load will be terminated with resistors. As I add disks, the spin-up load on the supply will be measured to stay in spec. The nice thing about the USB connection is that I can start another farm in another case, within reason. The number 14 for the total disk count is limited by the case size and, more importantly, by the thermal management inside the case.
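The single-surviving-drive recovery idea above might look something like the following sketch. This is only an outline, not a tested procedure: the device names (/dev/sdb1, /dev/md0) and the mount point are hypothetical examples, and the exact steps depend on the md superblock version and how the array was created.

```shell
# Sketch: recover data from the one good half of a RAID 1 pair after
# moving it to another machine. Device names below are hypothetical.

# Assemble the array in degraded mode from the single surviving member.
mdadm --assemble --run /dev/md0 /dev/sdb1

# Mount it read-only and copy the data off before rebuilding anything.
mkdir -p /mnt/rescue
mount -o ro /dev/md0 /mnt/rescue
```

With the old 0.90 superblock (stored at the end of the device), the member filesystem can often even be mounted directly, but assembling the array degraded is the safer habit.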
A serial-connected temperature probe located near the air exit port will be monitored by the server.

Professionally I use Sun Solaris on high-end machines and am switching from UnixWare to Red Hat for the low end. I run Oracle on both. Personally I use CentOS, which is where the disk farm will be attached. A second CentOS system will take over my Oracle database from UnixWare. I picked CentOS because I only have time to port from UnixWare to one other OS, and CentOS is Red Hat. Red Hat was selected purely on the basis of support availability for the OS/Oracle combination.

Further suggestions or discussion are welcome.

Bill Hess
bhess@xxxxxxxxxxxx

On Thu Jan 12 12:20 , ross@xxxxxxxxxxxxxxxxx (Ross Vandegrift) sent:

On Thu, Jan 12, 2006 at 11:16:36AM +0000, David Greaves wrote:
> ok, first off: a 14 device raid1 is 14 times more likely to lose *all*
> your data than a single device.

No, this is completely incorrect. Let A denote the event that a single disk has failed, and let A_i denote the event that i disks have failed. Suppose P(A) = x. Then, by the multiplication rule, the probability that an n-disk RAID1 will lose all of your data is:

n_1 = P(A_1) = x
n_2 = P(A_2) = P(A) * P(A_1 | A) = x^2
n_3 = P(A_3) = P(A) * P(A_2 | A) = x^3
...
n_i = P(A_i) = P(A) * P(A_{i-1} | A) = x^i

i.e., RAID1 becomes exponentially more reliable as you add extra disks! This assumes that disk failures are independent - i.e., that you configure the disks correctly (don't use master and slave on the same IDE channel!) and replace failed disks as soon as they fail. This is why adding more disks to a RAID1 is rare - x^2 is already going to be a really low probability! It will be far, far more common for operator error to break a RAID than for both devices to honestly fail.

--
Ross Vandegrift
ross@xxxxxxxxxxxx

"The good Christian should beware of mathematicians, and all those who make empty prophecies. The danger already exists that the mathematicians have made a covenant with the devil to darken the spirit and to confine man in the bonds of Hell."
	--St.
Augustine, De Genesi ad Litteram, Book II, xviii, 37
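Ross's x^i figures are easy to check numerically. The sketch below assumes an illustrative per-drive failure probability of x = 0.03 over the window before a failed disk is replaced (that value is my assumption, not anything from the thread), and prints the probability that every mirror in an n-way RAID 1 fails:

```shell
# Probability that all n independent mirrors fail, each with probability x.
# x = 0.03 is an assumed illustrative value, not a measured failure rate.
x=0.03
for n in 1 2 3 4; do
    awk -v x="$x" -v n="$n" 'BEGIN { printf "n=%d: P(all fail) = %g\n", n, x^n }'
done
```

Even the two-way mirror drops the probability from 0.03 to 0.0009, which is Ross's point about why mirrors wider than two disks are rarely worth it.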