----- Message from pg_lxra@xxxxxxxxxxxxxxxxxx ---------
Date: Mon, 18 Feb 2008 19:05:02 +0000
From: Peter Grandi <pg_lxra@xxxxxxxxxxxxxxxxxx>
Reply-To: Peter Grandi <pg_lxra@xxxxxxxxxxxxxxxxxx>
Subject: Re: RAID5 to RAID6 reshape?
To: Linux RAID <linux-raid@xxxxxxxxxxxxxxx>
On Sun, 17 Feb 2008 07:45:26 -0700, "Conway S. Smith" <beolach@xxxxxxxxx> said:
Consider for example the answers to these questions:

* Suppose you have a 2+1 array which is full. Now you add a disk, which means that almost all free space is on a single disk. The MD subsystem has two options as to where to add that lump of space; consider why neither is very pleasant.
No, only one: at the end of the md device, and the "free space" will be evenly distributed among the drives.
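To illustrate, here is a small Python sketch (my own illustration of the left-symmetric layout md uses by default, not md code): after re-striping the data chunks of a full 2+1 array over 4 disks they occupy only the first few stripes, so the freed space shows up at the end of the md device while being physically spread over all drives.

# Toy sketch of left-symmetric RAID5 chunk placement (an illustration,
# not taken from the md source).

def chunk_location(logical_chunk, n_disks):
    """Map a logical data chunk to (stripe, disk) for left-symmetric RAID5."""
    data_per_stripe = n_disks - 1
    stripe = logical_chunk // data_per_stripe
    pos = logical_chunk % data_per_stripe
    parity_disk = (n_disks - 1) - (stripe % n_disks)   # parity rotates backwards
    disk = (parity_disk + 1 + pos) % n_disks           # data starts after parity
    return stripe, disk

# A full 2+1 array holds, say, 8 data chunks in 4 stripes. After growing to
# 3+1 and reshaping, the same 8 chunks fit into 3 stripes; everything beyond
# that is free space at the end of the device, striped across all 4 disks.
for chunk in range(8):
    print("chunk", chunk, "-> (stripe, disk)", chunk_location(chunk, 4))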
* How fast are unaligned writes with a 13+1 or a 12+2 stripe? How often is that going to happen, especially on an array that started as a 2+1?
They are all the same speed with RAID5, no matter what you started with: you read two blocks and you write two blocks (not even chunks, mind you).
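A quick Python sketch of why (toy data, not a benchmark): because parity can be updated incrementally as P_new = P_old XOR D_old XOR D_new, a small write touches only the target data block and the parity block, no matter how wide the stripe is.

import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

CHUNK = 16  # toy chunk size in bytes

# A 13+1 stripe: 13 data chunks plus one parity chunk.
data = [os.urandom(CHUNK) for _ in range(13)]
parity = data[0]
for d in data[1:]:
    parity = xor(parity, d)

# Rewrite a single chunk: read old data + old parity (2 reads),
# write new data + new parity (2 writes) -- the other 12 disks are untouched.
new_d = os.urandom(CHUNK)
new_parity = xor(xor(parity, data[3]), new_d)
data[3] = new_d

# Cross-check against recomputing parity from the whole stripe.
check = data[0]
for d in data[1:]:
    check = xor(check, d)
assert check == new_parity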
* How long does it take to rebuild parity with a 13+1 array or a 12+2 array in case of a single disk failure? What happens if a disk fails during rebuild?
That depends on how much data the controller can push, but at least with my hpt2320 the limiting factor is the disk speed, and that doesn't change whether I have 2 disks or 12.
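As a back-of-the-envelope estimate (assumed figures, not measurements from my box): rebuild time is roughly per-disk capacity divided by the per-disk rebuild rate, so it does not grow with the number of members as long as the controller and bus keep up.

# Rough rebuild-time estimate with assumed numbers.
disk_size_gb = 500        # assumed per-disk capacity
rebuild_rate_mb_s = 60    # assumed sustained per-disk rebuild speed

hours = disk_size_gb * 1024 / rebuild_rate_mb_s / 3600
print(f"roughly {hours:.1f} hours per rebuild")   # ~2.4 h with these numbers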
* When you have 13 drives and you add the 14th, how long does that take? What happens if a disk fails during rebuild?
Again, pretty much the same as adding a fourth drive to a three-drive RAID5; if a disk fails, the array will just continue in degraded mode, nothing special.
beolach> Well, I was reading that LVM2 had a 20%-50% performance
beolach> penalty, which in my mind is a really big penalty. But I
beolach> think those numbers were from some time ago; has the
beolach> situation improved?

LVM2 relies on DM, which is not much slower than, say, 'loop', so it is almost insignificant for most people.
I agree.
But even if the overhead is very low, DM/LVM2/EVMS seem to me to have very limited usefulness (e.g. Oracle tablespaces, and there are contrary opinions on that too). For your stated applications it is hard to see why you'd want to split your arrays into very many block devices, or why you'd want to resize them.
I think the idea is to be able to have more than just one device to put a filesystem on. For example, a / filesystem, swap, and maybe something like /storage come to mind. Yes, one could do that with partitioning, but LVM was made for this, so why not use it?
The situation looks different with RAID6: there the write penalty grows with the number of disks, which is not the case with RAID5.
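To put rough numbers on that (my own sketch, under the assumption that md handles a small RAID6 write by re-reading the rest of the stripe to recompute P and Q, while RAID5 can do an incremental read-modify-write):

def raid5_rmw_ios(n_disks):
    # Read old data + old parity, write new data + new parity: constant cost.
    return {"reads": 2, "writes": 2}

def raid6_reconstruct_ios(n_disks):
    # Assumed reconstruct-write: read the other data chunks in the stripe,
    # then write the new data chunk plus both parities (P and Q).
    data_disks = n_disks - 2
    return {"reads": data_disks - 1, "writes": 3}

for n in (4, 8, 14):
    print(n, "disks: RAID5", raid5_rmw_ios(n), " RAID6", raid6_reconstruct_ios(n))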
Regards,
Alex.

----- End message from pg_lxra@xxxxxxxxxxxxxxxxxx -----

--
Alexander Kuehn
Cell phone: +49 (0)177 6461165
Cell fax: +49 (0)177 6468001
Tel @Home: +49 (0)711 6336140
Mail: mailto:Alexander.Kuehn@xxxxxxxxxx
----------------------------------------------------------------
cakebox.homeunix.net - all the machine one needs..