File system corruption when setting new size (native/external metadata) after expansion

Hi,

I've run into the array_size change problem again. A while ago I reported that file system corruption occurs during array expansion.
The conclusion back then was that my tests were flawed (the arrays were too small; on bigger arrays it appeared to be fine).
I can now reproduce it in our lab on fairly large arrays (250G) as well as on my small test arrays.

This problem affects array expansion with both native and external metadata.

The problem is that after expansion the file system on the array is corrupted.
The operation that corrupts the file system is the array size change via sysfs (for native metadata this happens during reshape finalization in md).
In the logs I see:
	VFS: busy inodes on changed media or resized disks
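For reference, the sysfs write I'm talking about looks roughly like this (a sketch; the md device name is specific to my setup, where the imsm volume gets assembled as md126, and the size value is just an example):

```shell
# check the current size override ("default" means size comes from metadata)
cat /sys/block/md126/md/array_size

# set an explicit array size (value in KiB, as far as I can tell);
# on a mounted array this write is followed by the "busy inodes" warning
echo 157286 > /sys/block/md126/md/array_size

# revert to the size computed from the metadata
echo default > /sys/block/md126/md/array_size
```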

My investigation led me from md into block_dev.c/inode.c and flush_disk() (called from check_disk_size_change()), where I found the message quoted above (about busy inodes).
My traces show that in invalidate_inodes(), inode->i_count == 2 and the inode is locked, which made me look at how invalidate_inodes() is called.
Before invalidate_inodes() runs, the dentry cache is pruned by shrink_dcache_sb().
If I comment out shrink_dcache_sb(), the file system seems to be safe. The same holds when invalidate_inodes() is commented out (shrink_dcache_sb() called alone).
If I swap the call order in __invalidate_device() (block_dev.c), calling invalidate_inodes() first and shrink_dcache_sb() second, there is no corruption either.
Corruption occurs only when invalidate_inodes() is called on busy inodes after shrink_dcache_sb(); calling it earlier does no harm.
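For clarity, the reordering experiment in __invalidate_device() is just the following (sketched from memory of the code in fs/block_dev.c in this area; this is the experiment I ran, not a proposed fix):

```diff
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ int __invalidate_device(struct block_device *bdev)
 	if (sb) {
-		shrink_dcache_sb(sb);
-		res = invalidate_inodes(sb);
+		/* experiment: invalidate inodes before pruning the
+		 * dentry cache; with this order no corruption is seen */
+		res = invalidate_inodes(sb);
+		shrink_dcache_sb(sb);
 		drop_super(sb);
 	}
```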

This problem can be reproduced on mounted arrays only; if the array is not mounted, the corruption disappears.
This suggests that the expansion process itself is correct.
In some (rare) cases the inodes on the mounted device are not locked, and then no FS corruption occurs.


I'd like to know your opinion.
Do you think the problem is in releasing the cache while some inodes remain busy, or in the order of the calls...
... or is changing the size of a mounted array simply a mistake (with the current code)?

BR
Adam


---------------------------------
My test commands:	

#create container
mdadm -C /dev/md/imsm0 -amd -e imsm -n 3 /dev/sdb /dev/sdc /dev/sde -R

#create volume
mdadm -C /dev/md/raid5vol_0 -amd -l 5 --chunk 64 --size 104857 -n 3 /dev/sdb /dev/sdc /dev/sde -R
mkfs /dev/md/raid5vol_0
mount /dev/md/raid5vol_0 /mnt/vol

#copy some files from current directory
cp * /mnt/vol

#add spare
mdadm --add /dev/md/imsm0 /dev/sdd

#start reshape
mdadm --grow /dev/md/imsm0 --raid-devices 4 --backup-file=/backup.bak
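To observe the corruption after the reshape finishes, I check the kernel log and then the file system (a sketch; device and mount point names as in the commands above):

```shell
# wait for the reshape to finish
cat /proc/mdstat

# the warning shows up in the kernel log while the array is mounted
dmesg | grep -i "busy inodes"

# unmount and check the file system read-only
umount /mnt/vol
fsck -n /dev/md/raid5vol_0
```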


-------------------------------------------------------------
Intel Technology Poland Sp. z o.o.
email: adam.kwolek@xxxxxxxxx    
phone: +48 58 766 1773



