Re: Improving dm-mirror as a final year project

On Feb 15, 2011, at 6:52 AM, Miklos Vajna wrote:

On Mon, Feb 14, 2011 at 03:31:00PM -0600, Jonathan Brassow <jbrassow@xxxxxxxxxx> wrote:
Thanks for the patches.  I've seen the first one before (slightly
different) - I'll discuss with others whether to include it in rhel5.
There is no read-balancing in rhel6/upstream.

Oh, do I read the code correctly that rhel6/upstream always reads from
the first mirror and switches only if there is a read failure?

yes
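For reference, the behaviour being discussed follows from how a mirror table is defined: the first device listed is the one reads are served from. A minimal sketch (device names, sizes and the 'mymirror' name are made up for illustration):

```shell
# Sketch only - requires root and real block devices.
# dm-mirror table format:
#   start len mirror <log_type> <#log_args> <log_args...>
#   <#mirrors> <dev> <offset> <dev> <offset> ...
# "core 1 64" = in-core log, one argument (region size of 64 sectors).
# With no read balancing, reads come from the first listed leg
# (/dev/sdb here) unless a read error forces a switch.
echo "0 1048576 mirror core 1 64 2 /dev/sdb 0 /dev/sdc 0" | \
    dmsetup create mymirror
```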


The second patch addresses which device should be primary.  This can
be done when creating the mirror. I'm not sure how much benefit there
is to doing this additional step.  Most people will access dm mirrors
through LVM - not through the dm message interface. Though, if it makes
sense upstream - and you can argue for it - I would consider it.

Is there such a messaging interface for LVM as well? I chose this
approach because it did not require altering the metadata.

There is no msg interface via LVM (although LVM could use the message interface for some things).

One useful use case I can imagine is when both data legs of the mirror
are provided by iSCSI and the administrator does not realise which leg
is faster; the bad choice is only discovered after there is already
data on the mirror.

Perhaps, but if you don't encode this in the LVM metadata, you will have to perform the action every time you reboot. Instead, you could reorder the devices in userspace and reload the table.
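The reorder-and-reload step mentioned above might look roughly like this (a sketch; device names, sizes and the 'mymirror' name are illustrative, and the legs are assumed to already be in sync, which is why nosync is used to suppress a resync):

```shell
# Sketch only - requires root and an existing 'mymirror' device.
# Load a new table with the legs swapped, then swap it in by
# suspending and resuming the device.
dmsetup suspend mymirror
dmsetup load mymirror --table \
    "0 1048576 mirror core 2 64 nosync 2 /dev/sdc 0 /dev/sdb 0"
dmsetup resume mymirror
# Reads are now served from /dev/sdc, the new first leg.
```

The catch, as noted, is that nothing records this in the LVM metadata, so the reorder has to be repeated after every reboot.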

My patch allows one to simply set the first mirror in that case,
without saving the data, recreating the mirror and restoring the data.
(Unless I missed some other neat trick for doing so.)

Whether changes are going into rhel5 or rhel6, we still like it when
they go upstream first.  We generally don't like feature inversion.

Sure - I was not aware at all that the round robin part of the code is
RHEL5-specific.

If you have any interest in dm-raid, these are some of the things that
need to be done:

Thanks for the list - I must admit that some of the points are Greek
to me; I'm not that familiar with the codebase, just with the basic LVM
commands and concepts.

1) Definition of new MD superblock: Some of this is started, and I've
got a working version, but I'm sure there are pieces missing related
to offsets that must be tracked for RAID type conversion, etc.
2) Bitmap work:  The bitmap keeps track of which areas of the array
are being written.  Right now, I take all the bitmap code "as-is".
There are a number of things in this area to be improved. Firstly, we
don't necessarily need all the fields in the bitmap superblock -
perhaps this could be streamlined and added to the new MD superblock.
Secondly, things are way too slow.  I get a 10x slowdown when using a
bitmap with RAID1 through device-mapper.  This could be due to the
region size chosen, the bitmap being at a poor offset, or something
else.  This problem could be solved by trial-and-error or through
profiling and reasoning... seems like a great small project.
3) Conversion code:  New device-mapper targets (very simple small
ones) must be written to engage the MD RAID conversion code (e.g. when
you change RAID4 to RAID5).
4) Failure testing
5) LVM code: to handle creation of RAID devices
6) dmeventd code: to handle device failures

Before choosing from this list:

I first have to evaluate the current status of dm-raid so that my
mentors and I can decide whether the topic of my thesis should be
dm-mirror or dm-raid (i.e. whether dm-raid is mature enough to write a
thesis about it). Where is the newest version of dm-raid.c? I saw the
upstream kernel has a single commit from this January, but I guess the
rawhide / rhel kernel contained this earlier - maybe there is a newer
version than upstream somewhere?

The basic component that covers RAID456 is available upstream, as you saw. I have an additional set of ~12 (reasonably small) patches that add RAID1 and superblock/bitmap support. These patches are not yet upstream nor are they in any RHEL product.

Also, is there any documentation on dm-raid? Google found
http://www.linux-archive.org/device-mapper-development/454656-dm-raid-wrapper-target-md-raid456.html
but maybe there is now a better way to create raid4 than using
gime_raid.pl?

Yes, I have a script called 'gime_raid.pl' that creates the device-mapper tables for dm-raid. Eventually, this will be pushed into LVM, but it was much easier (for testing purposes) to start with a Perl script.
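For the curious, a dm-raid table for a raid4 mapping (the kind of thing such a script would emit) looks roughly like this - a sketch only; device names, lengths and chunk size are illustrative, and '-' in a metadata-device slot means no separate metadata device:

```shell
# Sketch only - requires root and real block devices.
# dm-raid table format:
#   start len raid <raid_type> <#raid_params> <raid_params...>
#   <#raid_devs> <meta_dev dev> <meta_dev dev> ...
# Here: raid4, one parameter (chunk size of 2048 sectors),
# three devices (two data legs plus parity), no metadata devices.
dmsetup create myraid4 --table \
    "0 2097152 raid raid4 1 2048 3 - /dev/sdb - /dev/sdc - /dev/sdd"
```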

And a last question: is support for raid1 a planned feature? I think
that would be interesting as well. (If dm-raid is going to replace
dm-mirror in the long run.)

Yes, RAID1 is planned, and it already works to a large extent.

 brassow

For convenience, I've attached the patches I'm working on (quilt directory) and the latest gime_raid.pl script.

Attachment: dm-raid-patches.tgz
Description: Binary data

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
