On 03/02/2016 03:13 PM, Eric Sandeen wrote:
On 3/1/16 2:45 PM, Eric Sandeen wrote:
On 8/5/15 2:13 PM, Eric Sandeen wrote:
The BLKRRPART ioctl already fails today if any partition under
the device is mounted. However, if we mkfs a whole disk and mount
it, BLKRRPART happily proceeds down the invalidation path, which
seems like a bad idea.
Check whether the whole device is mounted by checking bd_super,
and return -EBUSY if so.
Signed-off-by: Eric Sandeen <sandeen@xxxxxxxxxx>
---
I don't know for sure if this is the right approach, but figure
I'll ask in the form of a patch. ;)
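For context, the check in question is roughly the following shape, modeled from memory on drop_partitions() in block/partition-generic.c from kernels of that era; the placement and the surrounding code here are assumptions, and the real patch hunk may differ:

static int drop_partitions(struct gendisk *disk, struct block_device *bdev)
{
        struct disk_part_iter piter;
        struct hd_struct *part;
        int res;

        /* A mounted partition already makes the re-read fail today... */
        if (bdev->bd_part_count)
                return -EBUSY;

        /*
         * ...and the proposed check makes a filesystem mounted on the
         * whole device (bd_super is set by mount) fail the same way,
         * instead of letting the rescan tear down partitions underneath
         * a live filesystem.
         */
        if (bdev->bd_super)
                return -EBUSY;

        res = invalidate_partition(disk, 0);
        if (res)
                return res;

        disk_part_iter_init(&piter, disk, DISK_PITER_INCL_EMPTY);
        while ((part = disk_part_iter_next(&piter)))
                delete_partition(disk, part->partno);
        disk_part_iter_exit(&piter);

        return 0;
}

invalidate_partition() is the step that eventually reaches invalidate_inodes(), which is the data-loss path discussed further down.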
I'm now thinking that this is not the right approach. :( I got a
bug report stating that during some md raid1 testing that involved
replacing failed disks, filesystems were losing data. I haven't
reproduced that part yet, but...
It's hitting the "bd_super" case added in the patch below, and returning
-EBUSY to md when mdadm tries to remove a disk:
# mdadm /dev/md0 -r /dev/loop0
mdadm: hot remove failed for /dev/loop0: Device or resource busy
FWIW, just ignore me, I was being an idiot. a) The patch *prevents*
the corruption; it does not cause it. Without the EBUSY, drop_partitions
will get all the way to invalidate_inodes() etc. under the mounted
filesystem, and no wonder data is lost. And b) the above EBUSY is
because I forgot to fail the disk first. :/
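In other words, the sequence that avoids the EBUSY is to fail the member before removing it; a typical invocation for the devices in the example above (not taken verbatim from the original report) would be:

# mdadm /dev/md0 --fail /dev/loop0
# mdadm /dev/md0 --remove /dev/loop0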
Nothing to see here, move along, sorry!
Still beats a regression :-)
--
Jens Axboe