Followup to:  <6.0.1.1.2.20040217232659.0211de48@mail.athenet.net>
By author:    Nathan Lewis <nathapl@cs.okstate.edu>
In newsgroup: linux.dev.raid
>
> I've decided to base my rs-raid work on 2.6, and thus raid5.c and
> raid6main.c.  I'm pretty sure I can use all the stripe buffer code
> pretty much verbatim, and most of the rework will need to be done to
> handle_stripe().  I've dug through most of it, updating things to
> correspond to m parity disks instead of 1 or 2.  However, I've
> encountered something strange.  From what I can tell, after a call to
> compute_block_1 or compute_block_2, all the data in the stripe
> (including parity) should be valid.  However, around line 1270 in
> raid6main.c, compute_block_1 or _2 may be called, and then,
> immediately after the PRINTK, compute_parity() is called as well.
> Isn't this redundant somehow?  The logic that sets must_compute is
> also really complicated - can anyone explain this section to me?
>

must_compute counts the drives whose data (a) isn't already in memory,
(b) can't be obtained by I/O, and (c) is actually needed.

You're correct that invoking compute_parity() there in the
(must_compute > 0) case is redundant.  In fact, so is using (failed)
rather than (must_compute) in the switch statement, since
must_compute <= failed.  None of this is significant for performance,
however, and it made the already complex bookkeeping slightly easier.
Once I'm more convinced the code is actually stable, I will try to
clean up stuff like this.

	-hpa
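
To make the bookkeeping concrete, below is a minimal, self-contained
sketch in plain C of the must_compute tally described above.  It is not
the actual raid6main.c code: the struct, field names, and flags are
simplified stand-ins chosen purely for illustration, and the real
driver tracks this state in its stripe/buffer structures.

	/*
	 * Sketch only: count the blocks in a stripe that are needed
	 * but can neither be found in memory nor read from disk, and
	 * so must be reconstructed from parity.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	#define NDISKS 6   /* data + parity drives (illustrative) */

	struct dev_state {
		bool uptodate;  /* (a) data already in memory          */
		bool failed;    /* (b) drive failed, no I/O possible   */
		bool wanted;    /* (c) a pending request needs this    */
	};

	static int count_must_compute(const struct dev_state dev[],
				      int ndisks)
	{
		int must_compute = 0;

		for (int i = 0; i < ndisks; i++) {
			if (!dev[i].uptodate && dev[i].failed &&
			    dev[i].wanted)
				must_compute++;
		}
		return must_compute;
	}

	int main(void)
	{
		struct dev_state stripe[NDISKS] = {
			/* uptodate, failed, wanted */
			{ true,  false, true  },  /* cached, fine        */
			{ false, false, true  },  /* readable: just read */
			{ false, true,  true  },  /* failed + needed:
						     must compute        */
			{ false, true,  false },  /* failed, not needed  */
			{ true,  false, false },
			{ false, false, false },
		};

		/*
		 * must_compute can never exceed the number of failed
		 * drives, which is why switching on (failed) instead of
		 * (must_compute) is redundant rather than wrong.
		 */
		printf("must_compute = %d\n",
		       count_must_compute(stripe, NDISKS));
		return 0;
	}

Running the sketch prints "must_compute = 1" for the example stripe:
only the drive that is failed, not up to date, and actually wanted
contributes to the count.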