Jan,
Thanks! This also solves the issue, and it's easier, since I can issue
that ioctl from user mode.
Alex.

On Mon, Mar 19, 2012 at 11:34 AM, Jan Kara <jack@xxxxxxx> wrote:
> Hi,
>
> On Sun 18-03-12 17:35:45, Alexander Lyakas wrote:
>> Jan,
>> thank you for your hint. I tried to look at this path and some other
>> code, and saw some places in which the PageError() macro is called
>> and, based on that, -EIO may be returned.
>> To solve the issue I close the "struct file" handle and re-open. This
>> seems to get rid of stale cache entries (though, of course, I may be
>> wrong, but this solves the issue). It would be good if the VFS
>> provided such an API without closing the "struct file".
> Ah, I had to think for a while about why that works. It's because when
> the last file reference to a device is closed, the whole device cache
> is evicted. So, in particular, closing the device won't solve your
> problem if someone else has the device open as well. But what should
> be more reliable is calling the BLKFLSBUF ioctl on the device to flush
> its caches.
>
> Honza
>
>> On Fri, Mar 16, 2012 at 11:44 AM, Jan Kara <jack@xxxxxxx> wrote:
>> > On Tue 13-03-12 22:09:22, Alexander Lyakas wrote:
>> >> Greetings all,
>> >> I apologize if my question should not have been posted to this
>> >> list.
>> >>
>> >> I am working with code that issues vfs_writev() to an fd that was
>> >> opened using filp_open(). The pathname that was opened is a
>> >> DeviceMapper devnode (like /dev/dm-1), which is a linear
>> >> DeviceMapper target mapped to a local drive.
>> >>
>> >> At some point, I switch the DeviceMapper to an "error" table
>> >> (using "dmsetup reload" and then "dmsetup resume"). As expected,
>> >> vfs_writev() starts returning -EIO.
>> >>
>> >> Later, I switch the DeviceMapper back to a "linear" table mapped
>> >> to the same local drive. However, vfs_writev() still returns -EIO
>> >> several times before it starts completing successfully. If I do
>> >> direct IO to the DM device at this point (like dd if=/dev/urandom
>> >> of=/dev/dm-1 oflag=direct), I don't hit any IO errors. I also
>> >> added some prints to the dm-linear code and verified that it does
>> >> not return any IO errors at this point. So it seems that the VFS
>> >> layer somehow "remembers" that there were previously IO errors
>> >> from that device.
>> >>
>> >> I started digging in the kernel code to get some clue about this,
>> >> but so far I have only seen functions like make_bad_inode() and
>> >> is_bad_inode(), which may be relevant somehow; I was not able to
>> >> trace where the -EIO is returned from.
>> > Hmm, the only significant difference I can think of is that your
>> > buffered writes (vfs_writev()) would go through
>> > blkdev_write_begin() -> block_write_begin(), which could return
>> > -EIO if it's not able to read in the rest of the page (if you are
>> > not writing full page-sized blocks). So I'd have a look at
>> > block_write_begin() and see what it returns...
>> >
>> >> Can someone please point me to the code I should look at to debug
>> >> this issue? I am running kernel 2.6.38-8 (stock Ubuntu Natty). Any
>> >> clue is appreciated.
>> >
>> > Honza
>> > --
>> > Jan Kara <jack@xxxxxxx>
>> > SUSE Labs, CR
> --
> Jan Kara <jack@xxxxxxx>
> SUSE Labs, CR
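
For reference, a minimal user-space sketch of the BLKFLSBUF ioctl Jan
suggests might look like this (the device path /dev/dm-1 is taken from
the thread; the ioctl normally requires root/CAP_SYS_ADMIN):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKFLSBUF */

    int main(void)
    {
            int fd = open("/dev/dm-1", O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Flush the block device's cached buffers without having
             * to close other references to the device. */
            if (ioctl(fd, BLKFLSBUF, 0) < 0) {
                    perror("ioctl(BLKFLSBUF)");
                    close(fd);
                    return 1;
            }

            close(fd);
            return 0;
    }

The same ioctl is what "blockdev --flushbufs /dev/dm-1" issues, so the
effect can also be tested from a shell without writing any code.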
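To illustrate Jan's earlier point about block_write_begin(): a buffered
write smaller than a page forces the kernel to read the rest of the page
into the cache first, and that read is where a stale error can surface.
A hypothetical user-space analogue of the kernel-side vfs_writev() call
(device path and sizes are assumptions for illustration, not from the
original code):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/uio.h>

    int main(void)
    {
            /* Buffered (no O_DIRECT), so the write goes through the
             * page cache and blkdev_write_begin() ->
             * block_write_begin(). */
            int fd = open("/dev/dm-1", O_WRONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            char buf[512];
            memset(buf, 0xab, sizeof(buf));
            struct iovec iov = { .iov_base = buf,
                                 .iov_len  = sizeof(buf) };

            /* 512 bytes is less than a (typically 4096-byte) page, so
             * the kernel must read the remainder of the page in before
             * the write can proceed; that read may report -EIO, which
             * seems consistent with the behavior described above. */
            if (writev(fd, &iov, 1) < 0)
                    perror("writev");

            close(fd);
            return 0;
    }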