FYI: I just:
- removed the bitmap
- installed a new bitmap with a larger chunk size
on 4 arrays, on each of two machines (redundant high-availability
cluster setup)
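For anyone wanting to do the same: both steps are just mdadm --grow operations and can be done on a live array. A rough sketch for one of the arrays (assuming /dev/md3; note that --bitmap-chunk is in KB):

mdadm --grow --bitmap=none /dev/md3
mdadm --grow --bitmap=internal --bitmap-chunk=131072 /dev/md3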
Took me all of about 5 minutes, most of which was spent waiting for
virtual machines to migrate from one machine to the other, and then
back.
All seems to be working, and performance feels just a little snappier - but
who can really tell until the next time an array rebuilds.
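To verify the change took, the new chunk size shows up in /proc/mdstat, and mdadm -X (--examine-bitmap) on a member device prints the full bitmap details - e.g., for one of my members:

cat /proc/mdstat
mdadm -X /dev/sda4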
Thanks all (and Roman in particular) for your guidance.
Miles
Roman Mamedov wrote:
On Fri, 11 Jun 2010 00:46:47 -0400
Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx> wrote:
Looks like my original --bitmap internal creation set a very large chunk
size initially
md3 : active raid6 sda4[0] sdd4[3] sdc4[2] sdb4[1]
947417088 blocks level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 6/226 pages [24KB], 1024KB chunk
unless that --bitmap-chunk=131072 recommendation translates to
131072KB (if so, are you really running 131MB chunks?)
Yes, this is correct.
This only means that after an unclean shutdown, areas of the array at least
128MB in size will be invalidated for a resync, rather than smaller areas at
1MB granularity as on yours currently. 128 megabytes is just about 1
second of read throughput on modern drives, so I am okay with that. Several
128MB windows here and there are still faster to resync than the whole array.
And this had an extremely good effect on write performance for me (increased it
by more than 1.5x) compared to a small chunk. Test for yourself, first without
the bitmap, then with various chunk sizes of it (ensure there's no other load
on the array, and note the speeds):
dd if=/dev/zero of=/your-raid/zerofile bs=1M count=2048 conv=notrunc,fdatasync
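For example, a full comparison pass could look like this (assuming the array is /dev/md0 and mounted at /your-raid; the bitmap changes themselves are non-destructive):

mdadm --grow --bitmap=none /dev/md0
dd if=/dev/zero of=/your-raid/zerofile bs=1M count=2048 conv=notrunc,fdatasync
mdadm --grow --bitmap=internal --bitmap-chunk=131072 /dev/md0
dd if=/dev/zero of=/your-raid/zerofile bs=1M count=2048 conv=notrunc,fdatasync
rm /your-raid/zerofile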
--
In theory, there is no difference between theory and practice.
In <fnord> practice, there is. .... Yogi Berra