On Thu, 18 Dec 2014 10:14:57 -0600 Goldwyn Rodrigues <rgoldwyn@xxxxxxx> wrote:

> Hello,
>
> This is an attempt to make MD-RAID cluster-aware. The advantage of
> redundancy can help highly available systems to improve uptime.
> Currently the implementation is limited to RAID1, but with further
> work (and some positive feedback) we could extend this to other
> compatible RAID scenarios.
>
> The design document (first patch) gives a fairly complete description
> of how md has been made cluster-aware and how DLM is used to safeguard
> data and communication.
>
> This work requires some patches to the mdadm tool [1].
>
> A quick howto:
>
> 1. With your corosync/pacemaker based cluster running, execute:
> # mdadm --create md0 --bitmap=clustered --raid-devices=2 --level=mirror --assume-clean /dev/sda /dev/sdb
>
> 2. On the other nodes, issue:
> # mdadm --assemble md0 /dev/sda /dev/sdb
>
> References:
> [1] mdadm tool changes: https://github.com/goldwynr/mdadm branch: cluster-md
> [2] Patches against stable 3.14: https://github.com/goldwynr/linux branch: cluster-md-devel
>
> Regards,

hi Goldwyn,
 thanks for these - and sorry for the long delay.  Lots of leave over
the southern summer, and then lots of email etc. to deal with.

This patch set is very close and I am tempted to just apply it and then
fix things up with subsequent patches.  In order to allow that, could
you please:

 - rebase against current upstream
 - fix the checkpatch.pl errors and warnings.  The "WARNING: line over
   80 characters" ones are often a judgement call, so I'm not
   particularly worried about those.  Most, if not all, of the others
   should be followed, just to keep the layout consistent.

Then I'll queue them up for 3.21, providing I don't find anything that
would hurt non-cluster usage...

On that topic: why initialise rv to -EINVAL in "metadata_update sends
message..."?  That looks wrong - it invites returning -EINVAL from a
path on which nothing actually failed.  See the sketch below.
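Something like this made-up fragment shows the hazard (illustration
only, not the code from your patch - send_metadata_update() and
do_send() are invented names):

	static int send_metadata_update(struct mddev *mddev, bool need_send)
	{
		int rv = -EINVAL;	/* suspicious default */

		if (need_send)
			rv = do_send(mddev);

		/* If need_send was false, nothing failed, yet the
		 * caller sees -EINVAL.
		 */
		return rv;
	}

Initialising rv to 0, or returning early in the no-op case, avoids
that.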
I noticed that a number of times a patch will revert something that a
previous patch added.  It would be much nicer to fold these changes
back into the original patch.  Often this is just extra blank lines,
but occasionally variable names are changed (md -> mddev).  A variable
should be given its final name when it is introduced.  Every chunk in
every patch should be directly relevant to that patch.

Some other issues, which could possibly be fixed up afterwards:

 - Is a clustername 64 bytes or 63 bytes?  I would have thought 64,
   but the use of strlcpy makes it 63 plus a nul.  Is that really what
   is wanted?  (See the sketch below.)
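   Just to spell out the strlcpy semantics (this is simply how strlcpy
   always behaves, nothing specific to your patches):

	char clustername[64];

	/* strlcpy(dst, src, size) copies at most size-1 characters
	 * and always nul-terminates, so this 64-byte buffer holds a
	 * 63-character name at most; a 64th character is silently
	 * dropped.
	 */
	strlcpy(clustername, name, sizeof(clustername));  /* name: caller-supplied */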
 - Based on https://lkml.org/lkml/2012/10/23/580 it might be good to
   add "default n" to the Kconfig entry, and possibly add a WARN() if
   anyone tries to use the code.  Something like the sketch below.
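   (Sketch only - I haven't checked what your module init function is
   actually called, so the name here is made up:)

	/* Make it very obvious that the clustering code is
	 * experimental the moment anyone loads it.
	 */
	static int __init cluster_init(void)
	{
		WARN_ONCE(1, "md-cluster is EXPERIMENTAL - use with caution\n");
		return 0;
	}
	module_init(cluster_init);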
 - I'm a bit concerned about the behaviour on node failure.  When a
   node fails, two things must happen w.r.t. the bits in that node's
   bitmap:

   1/ The corresponding regions of the array need to be resynced.  You
      do have code to do this.

   2/ Other nodes must avoid read-balancing in those regions until the
      resync has completed.

   You do have code for this second bit, but it looks wrong.  It avoids
   read-balancing if ->area_resyncing().  That isn't sufficient.  The
   area being resynced is always (I assume) a relatively small region
   of the array which will be completely resynced quite quickly.  It
   must be, because writes are blocked to this area.  However, the
   region in which we must disable read-balancing can be much larger.
   It covers *all* bits that are set in any unsynced bitmap.  So it
   isn't just the area that is currently being synced, but all areas
   that will be synced.  Schematically, the test needs a second clause,
   as in the sketch below.
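   (Sketch only - the helper names and signatures here are invented to
   show the shape of the check; they don't exist in md:)

	static bool can_read_balance(struct mddev *mddev, sector_t sector)
	{
		/* Not sufficient on its own: this only covers the
		 * small window that is being resynced right now.
		 */
		if (area_resyncing(mddev, sector))
			return false;

		/* Also needed: refuse while *any* bit covering this
		 * sector is still set in any node's unsynced bitmap,
		 * i.e. every region that will be resynced, not just
		 * the one currently in progress.
		 */
		if (any_unsynced_bitmap_bit_set(mddev, sector))
			return false;

		return true;
	}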
 - I think md_reload_sb() might be too simple.  It probably should
   check that nothing serious has changed.  The "mddev->raid_disks = 0"
   looks suspicious.  I'll have to think about this a bit more.

That's all I can see for now.  I'll have another look once I have it
all in my tree.

Thanks,
NeilBrown