On Friday, January 16, 2009 you wrote:
> On Thu, Jan 15, 2009 at 2:51 PM, Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>> On Mon, Dec 8, 2008 at 2:57 PM, Yuri Tikhonov <yur@xxxxxxxxxxx> wrote:
>> What's the reasoning behind changing the logic here, i.e. removing
>> must_compute and such?  I'd feel more comfortable seeing copy and
>> paste where possible with cleanups separated out into their own patch.
>>
> Ok, I now see why this change was made.  Please make this changelog
> more descriptive than "Rewrite handle_stripe_dirtying6 function to
> work asynchronously."

 Sure, how about the following:

"
md: rewrite handle_stripe_dirtying6 in an asynchronous way

Processing stripe dirtying asynchronously requires some changes to the
handle_stripe_dirtying6() algorithm.

In the synchronous implementation, dirtying of a degraded stripe (one
with partially changed strip(s) located on the failed drive(s)) was
handled entirely inside a single handle_stripe_dirtying6() call:
- we computed the missed strips from the old parities, thus obtaining a
  fully up-to-date stripe, and then
- we performed reconstruction using the new data to write.

In the asynchronous case, handle_stripe_dirtying6() does not process
anything directly (since we are under the lock); it only schedules the
necessary operations by setting flags.  Thus, when
handle_stripe_dirtying6() runs on top of a degraded array, we should
schedule the reconstruction operation once the failed strips have been
marked (by the previously called fetch_block6()) as to-be-computed
(with the R5_Wantcompute flag) and all the other strips of the stripe
are UPTODATE.  The schedule_reconstruction() function will then set the
STRIPE_OP_POSTXOR flag [for the new parity calculation], which is
handled in raid_run_ops() after the STRIPE_OP_COMPUTE_BLK one [which
computes the missed data].
"

 Regards, Yuri

--
Yuri Tikhonov, Senior Software Engineer
Emcraft Systems, www.emcraft.com
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html