On Tue, 28 May 2013 15:46:33 +0300 Alexander Lyakas <alex.bolshoy@xxxxxxxxx> wrote:

> Hello Neil,
> we continue testing last-drive RAID1 failure cases. We see the
> following issue:
>
> # RAID1 with drives A and B; drive B was freshly added and is
>   rebuilding.
> # Drive A fails.
> # A WRITE request arrives at the array. It is failed by drive A, so
>   the r1_bio is marked as R1BIO_WriteError, but the rebuilding drive B
>   succeeds in writing it, so the same r1_bio is also marked as
>   R1BIO_Uptodate.
> # The r1_bio arrives at handle_write_finished(); badblocks are
>   disabled, and md_error()->error() does nothing because we don't fail
>   the last drive of a raid1.
> # raid_end_bio_io() calls call_bio_endio().
> # As a result, in call_bio_endio():
> 	if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
> 		clear_bit(BIO_UPTODATE, &bio->bi_flags);
>   this code does not clear the BIO_UPTODATE flag, and the whole master
>   WRITE succeeds, back to the upper layer.
>
> # This keeps happening until the rebuild aborts and drive B is ejected
>   from the array[1]. After that, there is only one drive (A), so once
>   it fails a WRITE, the master WRITE also fails.
>
> It should be noted that the WRITE I am testing is well ahead of the
> recovery_offset of drive B. So after such a WRITE fails, a subsequent
> READ of the same location would also fail: drive A will fail it, and
> drive B cannot be read from there (the rebuild has not reached that
> point yet).
>
> My concrete suggestion is that this behavior is not reasonable, and we
> should only count a WRITE as successful if it completed on a drive
> that is marked as InSync. Please let me know what you think.

Sounds reasonable.
Could you make and test a patch?  Then I'll apply it.

Thanks,
NeilBrown
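
For concreteness, a minimal sketch of the suggested check, assuming a
3.x-era drivers/md/raid1.c where the per-device write completion
handler raid1_end_write_request() is the place that sets
R1BIO_Uptodate on the master bio (untested, illustrative only, not the
final patch):

	/*
	 * Sketch: in raid1_end_write_request(), rather than
	 * unconditionally marking the master bio up-to-date whenever
	 * any mirror completes the write, count the write as
	 * successful only if the completing device is In_sync and not
	 * Faulty. A write that only reached a still-rebuilding drive
	 * may sit beyond its recovery_offset and cannot be read back.
	 */
	struct md_rdev *rdev = conf->mirrors[mirror].rdev;

	if (test_bit(In_sync, &rdev->flags) &&
	    !test_bit(Faulty, &rdev->flags))
		set_bit(R1BIO_Uptodate, &r1_bio->state);

With such a check, a write acknowledged only by the rebuilding drive B
would leave R1BIO_Uptodate clear, so call_bio_endio() would clear
BIO_UPTODATE and fail the master WRITE instead of reporting success to
the upper layer.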