Re: mdadm failed to create internal bitmap

Hi,

About this issue: it is the rdev size set during the grow that leads to an
integer overflow while adding the internal bitmap.

Besides fixing the integer overflow, I was wondering whether there are any
concerns with always setting the rdev size to the maximum while doing an
expansion?
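
For reference, here is a minimal sketch (not the actual mdadm source) of how
the "room" value computed in add_internal_bitmap1() can wrap once it is held
in a 32-bit signed int. The sector numbers come from the --examine output
quoted below; data_offset = 0 is my assumption for 1.0 metadata, where the
data starts at offset 0 and the superblock sits at the end of the device.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Sketch only: values in 512-byte sectors, taken from the quoted
     * --examine output.  data_offset = 0 is assumed for 1.0 metadata.
     */
    struct {
        const char *dev;
        uint64_t super_offset, data_offset, data_size;
    } d[] = {
        { "/dev/sdb3", 7810899368ULL, 0ULL, 3903889408ULL },
        { "/dev/sda3", 3903891368ULL, 0ULL, 3903889408ULL },
    };

    for (int i = 0; i < 2; i++) {
        uint64_t room64 = d[i].super_offset - d[i].data_offset - d[i].data_size;
        int room32 = (int)room64;  /* what a 32-bit signed 'int' would hold */
        printf("%s: room = %llu sectors (64-bit), %d sectors (32-bit int)\n",
               d[i].dev, (unsigned long long)room64, room32);
    }
    return 0;
}

On the numbers above, /dev/sda3 still fits (room = 1960 sectors), but
/dev/sdb3 comes out as 3907009960 sectors, which wraps to -387957336 when
stored in a 32-bit int. A negative "room" makes every candidate bitmap chunk
look too big, which would match the "chunksize problem" failure.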

Thanks,


------- Original Message -------

> Hi all,
>
> I got a raid1 composed of 2 disks
> /dev/sda -- 2T
> /dev/sdb -- 4T
>
> mdadm version is 3.3 and md metadata version is 1.0
>
> At first, I was only using 1T of each disk,
>
> then I recently grew the array with the command
>
> mdadm --grow /dev/md1 --size=1951944704K
>
> I also tried to add an internal bitmap after the expansion finished
>
> mdadm --grow /dev/md1 --bitmap=internal
>
> But I got the following message
>
> mdadm: failed to create internal bitmap - chunksize problem.
>
> I found that the Avail Dev Size in the superblock --examine output of the two disks
>
> is the same, and matches the value I set when I expanded the array (1951944704K).
>
> Then I found that in the mdadm bitmap chunksize calculation,
>
> in function add_internal_bitmap1 (super1.c),
>
> the variables "room" and "max_bits" seem to overflow in this situation:
>
> /dev/sdb3:
> Magic : a92b4efc
> Version : 1.0
> Feature Map : 0x0
> Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
> Name : 1
> Creation Time : Sun Apr 8 09:54:47 2018
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
> Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
> Super Offset : 7810899368 sectors
> Unused Space : before=0 sectors, after=3907009952 sectors
> State : clean
> Device UUID : 3546fab2:3bfd9a17:39d78059:3d1eb830
>
> Update Time : Sun May 17 10:24:33 2020
> Bad Block Log : 512 entries available at offset -8 sectors
> Checksum : cf552c50 - correct
> Events : 93088
>
> Device Role : Active device 0
> Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>
> /dev/sda3:
> Magic : a92b4efc
> Version : 1.0
> Feature Map : 0x0
> Array UUID : 8d7b8858:e0e93d83:7c87e6e0:bd1628b8
> Name : 1
> Creation Time : Sun Apr 8 09:54:47 2018
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 3903889408 (1861.52 GiB 1998.79 GB)
> Array Size : 1951944704 (1861.52 GiB 1998.79 GB)
> Super Offset : 3903891368 sectors
> Unused Space : before=0 sectors, after=1952 sectors
> State : clean
> Device UUID : 980038ac:99f4e8c6:39d91851:bdf6ed6d
>
> Update Time : Sun May 17 10:24:33 2020
> Bad Block Log : 512 entries available at offset -8 sectors
> Checksum : c3ce8290 - correct
> Events : 93088
>
> Device Role : Active device 1
> Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>
> I was wondering whether this is because mdadm sets the size of the rdevs in the array
>
> before doing the expansion (in function Grow_reshape),
>
> which causes sb->data_size to not equal the actual raw device size
>
> and consequently leads to the bitmap chunksize calculation error,
>
> or whether it is simply a data type issue.
>
> Thanks,





