On 03/29/2018 06:47 PM, Fisher wrote:
Hi,
I was using kernel 4.14 and the latest version of mdadm cloned from github.

This situation can only happen with *super 1.0*. Also, the bitmap is not used; that is why I could not reproduce it before. Just curious why you did not choose 1.2 metadata.
The initial value of rdev->sectors (1933615488) was assigned in the kernel function super_1_load at creation time, from the value of sb->data_size, which the comment in super1.c of the mdadm source describes as "sectors in this device that can be used for data". So I think the value is correct and does not represent 5G capacity.
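For reference, the assignment in super_1_load looks roughly like this (quoted from memory, so treat it as a sketch rather than the exact current code):

        /* drivers/md/md.c, super_1_load(): the usable size is taken
         * straight from the on-disk superblock */
        rdev->sectors = le64_to_cpu(sb->data_size);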
Hmm, this number represents the whole space of the disk.

With the current code, max_sectors should equal rdev->sectors in the second grow:
        /* minor version 0; superblock after data */
        sector_t sb_start;
        sb_start = (i_size_read(rdev->bdev->bd_inode) >> 9) - 8*2;
        sb_start &= ~(sector_t)(4*2 - 1);
        max_sectors = rdev->sectors + sb_start - rdev->sb_start;
        if (!num_sectors || num_sectors > max_sectors)
                num_sectors = max_sectors;
        rdev->sb_start = sb_start;
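Plugging in the numbers from this thread (all values in 512-byte sectors; this is just how I read the code, so please double-check):

        sb_start    == rdev->sb_start     /* for super 1.0, both are derived from the same device size */
        max_sectors  = rdev->sectors + sb_start - rdev->sb_start
                     = rdev->sectors
                     = 20971520           /* left over from the first grow, i.e. 10G */
        num_sectors  = 41943040           /* the requested 20G, which no longer fits */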
Since the position of metadata 1.0 is "At least 8K, but less than 12K, from end of device", maybe set it like below; just my 2 cents.
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 0ff1bbf6c90e..403998f549c9 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -1946,7 +1946,8 @@ super_1_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
         sector_t sb_start;
         sb_start = (i_size_read(rdev->bdev->bd_inode) >> 9) - 8*2;
         sb_start &= ~(sector_t)(4*2 - 1);
-        max_sectors = rdev->sectors + sb_start - rdev->sb_start;
+        /* the first 8 to ensure metadata less than 12k, the second 8 for bad block log */
+        max_sectors = sb_start - 8 - 8;
         if (!num_sectors || num_sectors > max_sectors)
                 num_sectors = max_sectors;
         rdev->sb_start = sb_start;
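With that change, max_sectors would depend only on the device size and the 1.0 layout, instead of on an rdev->sectors value that an earlier grow already shrank. A rough sanity check, assuming the 931GB member is about 1953525168 sectors (that exact count is only my assumption, for illustration):

        sb_start    = (1953525168 - 16) & ~7  = 1953525152
        max_sectors = 1953525152 - 8 - 8      = 1953525136   /* plenty of room for 41943040 (20G) */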
Regards,
Guoqing
Thanks,
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On March 29, 2018 4:57 PM, Guoqing Jiang <gqjiang@xxxxxxxx> wrote:
On 03/29/2018 09:45 AM, Fisher wrote:
Hi,
I have a raid1 array composed of two 931GB disks.
At first, I created this array with --size=5G and --metadata=1.0.
After that I grew the array to --size=10G and it succeeded.
Then I grew the array again to --size=20G,
but it failed this time with the following message:
mdadm: Cannot set device size for /dev/md0: No space left on device
After tracing the code, I found it was because of the following code:
int Grow_reshape(char *devname, int fd, int quiet, char *backup_file, /* skip */

        for (mdi = sra->devs; mdi; mdi = mdi->next) {
                sysfs_set_num(sra, mdi, "size", s->size == MAX_SIZE ? 0 : s->size);
This changed the value of rdev->sectors from 1933615488 to 20971520 at the first grow,
but the second grow was limited by the kernel when mdadm tried to set 41943040 to "size".
20971520 and 41943040 match 10G and 20G well, but how could 1933615488 represent 5G capacity?
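Just for scale (512-byte sectors, my own arithmetic):

        1933615488 sectors * 512 B ~= 922 GiB   /* roughly the whole 931GB member */
          20971520 sectors * 512 B  = 10 GiB
          41943040 sectors * 512 B  = 20 GiB
        (5G would only be 10485760 sectors)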
static unsigned long long
super_1_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
/* skip */
        max_sectors = rdev->sectors + sb_start - rdev->sb_start;
                      ^^^^^^^^^^^^^
                      20971520
        if (!num_sectors || num_sectors > max_sectors)
                num_sectors = max_sectors;
It seems like there's no way this array can be grown again.
Which kernel version are you using? I can't reproduce it with 4.16.0-rc1.
Thanks,
Guoqing
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html