Your requirements are contradictory. You want to span all your devices with a single storage system, but you do not want to use any of them for redundancy, and you still expect the file system on them to remain consistent should any device fail. That is simply impossible for file systems, which are what block-device aggregation such as mdadm is designed to support. Were you to lose any one device out of the four, portions of the filesystem metadata as well as your actual data would be missing.

That may be tolerable for special cases (regularly sampled data, such as sensor output, comes to mind, where you don't -require- the data but merely want to have it), but those cases are all application-specific, not a general solution. One typical way a specific application might use four devices is a round-robin scheme: keep a list of the currently online devices and store each cohesive unit on the next device in the list. Should a device be added, the list grows; should a device fail (be removed), it is taken out of the list. (A minimal sketch of this is at the end of this message.)

You have four choices then (plus one I'm not really counting):

1) What I described above.

2) A RAID 0, which gives you 100% of the storage, but it's all devices working or none.

3) A RAID 1+0 or 10 (same idea, different drivers); you're already trying that and disliking it, though.

4) RAID 5: you spend more CPU, but you only give up one device's worth of capacity for recovery data, so you can tolerate a single failure.

5) Technically there is also RAID 6, but I'm not counting it because you're already complaining about losing 50% of your capacity, and it has the additional drawback of being slower (BUT it survives the loss of -literally- any 2 devices, instead of any 1 device of the correct set).
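For completeness, here is a minimal sketch of the round-robin placement from option 1. It is not a real implementation; the device paths and the write callback are stand-ins you would replace with whatever your application actually does (writing to mount points, raw devices, whatever):

    class RoundRobinStore:
        """Spread cohesive units (files, samples, ...) across whichever
        devices are currently online. Losing a device loses only the
        units that happened to land on it -- there is no redundancy."""

        def __init__(self, devices):
            self.devices = list(devices)   # currently online devices
            self._next = 0                 # index of the device to use next

        def add_device(self, dev):
            """A new device simply joins the rotation."""
            self.devices.append(dev)

        def remove_device(self, dev):
            """A failed device is dropped; data already on it is gone."""
            self.devices.remove(dev)
            self._next %= max(len(self.devices), 1)

        def store(self, unit_id, data, write):
            """Write one cohesive unit to the next device in the list.
            `write(dev, unit_id, data)` is an application-supplied callback."""
            if not self.devices:
                raise RuntimeError("no devices online")
            dev = self.devices[self._next]
            self._next = (self._next + 1) % len(self.devices)
            write(dev, unit_id, data)
            return dev


    if __name__ == "__main__":
        placed = {}
        writer = lambda d, i, b: placed.setdefault(d, []).append(i)
        rr = RoundRobinStore(["/mnt/d0", "/mnt/d1", "/mnt/d2", "/mnt/d3"])
        for i in range(8):
            dev = rr.store(i, b"sensor sample", writer)
            print("unit %d -> %s" % (i, dev))
        rr.remove_device("/mnt/d2")        # simulate a device failure
        rr.store(8, b"sensor sample", writer)

The point is that all the policy (what counts as a "cohesive unit", what happens when a device drops out, whether lost units matter) lives in your application; none of it is something mdadm or a general-purpose filesystem can do for you.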