Hello,

I've hit a weird problem with creating raid1 bitmaps. Setting an internal bitmap while creating a raid1

  mdadm --create /dev/mdX --level=1 --raid-disks=2 --bitmap=internal /dev/vg0/a /dev/vg0/b

or growing a raid1 with an internal bitmap

  mdadm --grow /dev/mdX --bitmap=internal

fails for specific device sizes of /dev/vg0/a and /dev/vg0/b. The sizes I get errors with are 7680MB, 15GB, 30GB, 60GB, 120GB and 240GB. I get no errors with 10GB, 16GB, 17GB, 20GB, 25GB, 29GB, 31GB, 38GB, 40GB, 50GB, 59GB, 70GB, 75GB and 90GB. It doesn't matter whether I set the bitmap while creating the raid1 or after the raid1 is created and synced.

I'm using Debian lenny with mdadm 2.6.7 (2.6.7.1-1), and the problem occurs with kernel 2.6.26-1-xen-amd64 as well as with kernel 2.6.18 from xensource.

The mdadm message (during create) is:

  mdadm: RUN_ARRAY failed: Input/output error
  mdadm: stopped /dev/md9

The kernel message is:

  md: bind<dm-1>
  md: bind<dm-2>
  md: md9: raid array is not clean -- starting background reconstruction
  raid1: raid set md9 active with 2 out of 2 mirrors
  md9: bitmap file is out of date (0 < 1) -- forcing full recovery
  md9: bitmap file is out of date, doing full recovery
  attempt to access beyond end of device
  dm-2: rw=16, want=31457288, limit=31457280
  attempt to access beyond end of device
  dm-1: rw=16, want=31457288, limit=31457280
  md9: bitmap initialized from disk: read 15/16 pages, set 489472 bits, status: -5
  md9: failed to create bitmap (-5)
  md: pers->run() failed ...
  md: md9 stopped.
  md: unbind<dm-2>
  md: export_rdev(dm-2)
  md: unbind<dm-1>
  md: export_rdev(dm-1)

The important message is probably "dm-2: rw=16, want=31457288, limit=31457280": the last bitmap page is read 8 sectors (one 4 KiB page) past the end of the device, which looks like some sort of rounding error to me. Is this a known bug, or has it perhaps already been fixed? I didn't find anything similar on this list.

For now I'll avoid these sizes for raid1.

--
greetings
eMHa
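
P.S.: To make the suspected rounding a bit more concrete, here is a quick back-of-the-envelope calculation for the failing 15GB case. The layout constants (a 4 KiB bitmap superblock plus roughly 60 KiB reserved for the bitmap bits, one bit per chunk, read in 4 KiB pages) are only my assumptions about how mdadm 2.6.7 lays out an internal bitmap; I haven't checked them against the source:

  # back-of-the-envelope check of the failing 15GB case; the layout
  # constants below are assumptions, not taken from the md/mdadm source
  DEVICE_SECTORS = 31457280        # 15 GiB device, from the kernel log above
  SECTOR = 512
  PAGE = 4096
  BITMAP_SB = 4096                 # assumed size of the on-disk bitmap superblock
  BITMAP_SPACE = 60 * 1024         # assumed space reserved for the bitmap bits

  # pick the smallest power-of-two chunk whose bitmap still fits into BITMAP_SPACE
  chunk = SECTOR
  while (DEVICE_SECTORS * SECTOR) // chunk > BITMAP_SPACE * 8:
      chunk *= 2

  bits = (DEVICE_SECTORS * SECTOR + chunk - 1) // chunk
  total_bytes = BITMAP_SB + (bits + 7) // 8
  pages = (total_bytes + PAGE - 1) // PAGE

  print("chunk: %d KiB, bits: %d" % (chunk // 1024, bits))                      # 32 KiB, 491520
  print("bitmap incl. superblock: %d bytes = %d pages" % (total_bytes, pages))  # 65536 bytes, 16 pages

Under these assumptions the bitmap plus its superblock comes out to exactly 16 pages (64 KiB) at 15GB, and the kernel log shows the read of the 16th page landing exactly one page past the end of the device. The same "exact fit" falls out for 7680MB, 30GB, 60GB, 120GB and 240GB (only the chunk size halves or doubles), while the sizes that work for me don't fill the reserved space exactly. That's just my guess from the numbers, of course.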