Currently, if the metadata requires more than 1 MB, the data offset is rounded down to the closest megabyte. This is incorrect, since less space than required is reserved. Always round the data offset up to a multiple of 1 MB.

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@xxxxxxxxx>
---
 super1.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/super1.c b/super1.c
index 86ec850d..b15a1c7a 100644
--- a/super1.c
+++ b/super1.c
@@ -2796,8 +2796,7 @@ static int validate_geometry1(struct supertype *st, int level,
 			headroom >>= 1;
 		data_offset = 12*2 + bmspace + headroom;
 #define ONE_MEG (2*1024)
-		if (data_offset > ONE_MEG)
-			data_offset = (data_offset / ONE_MEG) * ONE_MEG;
+		data_offset = ROUND_UP(data_offset, ONE_MEG);
 		break;
 	}
 	if (st->data_offset == INVALID_SECTORS)
-- 
2.13.0