On 09/10/2015 10:21 PM, Dušan Čolić wrote:
Does dm snapshot return EIO when filled?
From Documentation/device-mapper/snapshot.txt :
" <COW device> will often be
smaller than the origin and if it fills up the snapshot will become
useless and be disabled, returning errors. So it is important to monitor
the amount of free space and expand the <COW device> before it fills up."
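The documented monitoring can be sketched roughly as follows. This is only a sketch: the `dmsetup status` line below is made-up example output (the snapshot target reports `<sectors_used>/<sectors_total> <metadata_sectors>`), and the device/volume names are hypothetical.

```shell
# Sketch only: parse the fill level from a sample "dmsetup status" line.
# A real monitor would run:  dmsetup status <snapshot-device>
status="0 8192 snapshot 7936/8192 32"   # example output, not live data

fraction=$(echo "$status" | awk '{print $4}')  # e.g. "7936/8192"
used=${fraction%/*}
total=${fraction#*/}
pct=$(( used * 100 / total ))
echo "snapshot ${pct}% full"

# Before this reaches 100%, grow the COW device, e.g. (LVM, names hypothetical):
#   lvextend -L +64M vg0/snap
```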
OK, I agree :)
On Thu, Sep 10, 2015 at 10:10 PM, Edward Shishkin
<edward.shishkin@xxxxxxxxx> wrote:
On 09/10/2015 09:42 PM, Dušan Čolić wrote:
On Thu, Sep 10, 2015 at 9:20 PM, Edward Shishkin
<edward.shishkin@xxxxxxxxx> wrote:
Actually, this is not a regression.
The attached patch prevents the panic.
By default, on IO errors the reiser4 partition will be remounted
read-only.
No idea why accessing /dev/dm-X causes an IO error.
Reiser4 is not the culprit ;) You can narrow this down, if you're interested.
I think that something is wrong with LVM settings...
I looked more into this: the test fills up the dm snapshot (it writes 5MB
to a 4MB snapshot), so we get an IO error because it's full.
An IO error because of no space is very strange,
especially when reading the device (as in my case):
# dd if=/dev/dm-3 of=/dev/null bs=4096 count=1
It doesn't trigger
on ccreg40 because ccreg40 compresses data, so 5MB of compressible data
fits in the 4MB of space ;)
xfstests uses compressible data for disk-full tests, so on ccreg40 they
give a false positive. Are the ccreg40 and reg40 code paths for the
disk-full situation the same?
I'll look into fixing xfstests.
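One possible fix is to make the test write incompressible data (e.g. from /dev/urandom) so a compressing plugin can't dodge the full condition. The asymmetry is easy to demonstrate without any dm setup; the /tmp paths below are just for illustration:

```shell
# 5 MiB of zeros (the kind of data a naive disk-full test writes)
# compresses to almost nothing, while 5 MiB of random bytes stays ~5 MiB.
dd if=/dev/zero    of=/tmp/zeros.img  bs=1M count=5 2>/dev/null
dd if=/dev/urandom of=/tmp/random.img bs=1M count=5 2>/dev/null
gzip -kf /tmp/zeros.img    # keeps the original, overwrites old .gz
gzip -kf /tmp/random.img
ls -l /tmp/zeros.img.gz /tmp/random.img.gz
```

The zeros shrink far below the 4MB snapshot size; the random data does not, so it would still fill the snapshot even through a compressing filesystem.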
Thanks,
Edward.
On 09/04/2015 09:35 AM, Dušan Čolić wrote:
Kernel: 4.1.6
R4 patch: 4.1.5
Test: xfstests/generic/081 "Test I/O error path by fully filling an dm
snapshot."
Test passes cleanly with ccreg40 but kernel panics with reg40 plugin.
A picture of the panic is attached, as I had no other means of
capturing it.
xfstests local.config section:
[r4Hybrid]
MKFS_OPTIONS="-o create=reg40"
MOUNT_OPTIONS="-o noatime"
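For context, a fuller local.config section would look something like the sketch below. Only the MKFS_OPTIONS and MOUNT_OPTIONS lines come from the report; the FSTYP value and all device/mountpoint paths are placeholders that depend on the local setup:

```
[r4Hybrid]
FSTYP=reiser4                  # assumes a reiser4-patched xfstests
TEST_DEV=/dev/sdb1             # placeholder
TEST_DIR=/mnt/test             # placeholder
SCRATCH_DEV=/dev/sdb2          # placeholder
SCRATCH_MNT=/mnt/scratch      # placeholder
MKFS_OPTIONS="-o create=reg40"
MOUNT_OPTIONS="-o noatime"
```

The test itself is then run from the xfstests tree with `./check generic/081`.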
Have a nice day.
Dushan
--
To unsubscribe from this list: send the line "unsubscribe reiserfs-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html