Hi, Anand:
Another question on "writeback": I observe a large disk performance boost (about 40x, measured with IOMeter in the VM) when setting cache=writeback, in comparison with cache=none or cache=writethrough.
Do you have any ideas about that? :-)
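For reference, the only thing I change between runs is the cache= setting on the virtio drive, roughly like this (the image path here is just the one from Bharata's command line below, not my exact setup):

  -drive file=/mnt/F17,if=virtio,cache=none           # baseline
  -drive file=/mnt/F17,if=virtio,cache=writethrough   # roughly the same as cache=none
  -drive file=/mnt/F17,if=virtio,cache=writeback      # ~40x faster in IOMeter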
Best Regards.
Jules Wang
At 2012-08-07 15:10:30, "Anand Avati" <anand.avati@xxxxxxxxx> wrote:
cache=writeback should not be necessary (including disabling write-behind). This is clearly a bug. We are looking into it.
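If you want to rule write-behind in or out while we investigate, it should be possible to turn it off per volume with something like the following (using the volume name "test" from the report below):

  gluster volume set test performance.write-behind off

That would show whether the FSYNC/EBADFD errors come from the write-behind translator, without having to rely on cache=writeback.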
On Mon, Aug 6, 2012 at 11:51 PM, Jules Wang <lancelotds@xxxxxxx> wrote:

Bharata: Alternatively, you could add ",cache=writeback" after "if=virtio".

Good Luck.
Best Regards.
Jules Wang
At 2012-08-07 14:29:38, "Bharata B Rao" <bharata.rao@xxxxxxxxx> wrote:
>Hi,
>
>With latest QEMU and latest gluster git, I observe guest root filesystem corruption when the VM boots.
>
>QEMU command line I am using is this:
>qemu-system-x86_64 --enable-kvm --nographic -m 1024 -smp 4 -drive file=/mnt/F17,if=virtio -net nic,model=virtio -net user -redir tcp:2000::22
>
>Gluster volume is mounted in this manner:
>glusterfs -s bharata --volfile-id=test -L DEBUG -l glusterfs.log /mnt
>
>I see IO errors like this when VM boots:
>[ 1.698583] end_request: I/O error, dev vda, sector 9680896
>[ 1.699328] Buffer I/O error on device vda3, logical block 1081344
>[ 1.699328] lost page write due to I/O error on vda3
>[ 1.706644] end_request: I/O error, dev vda, sector 1030144
>[ 1.707630] Buffer I/O error on device vda3, logical block 0
>[ 1.707630] lost page write due to I/O error on vda3
>[ 1.718671] dracut: /dev/disk/by-uuid/d29b972f-3568-4db6-bf96-d2702ec83ab6: clean, 21999/623392 files, 796916/2492672 blocks
>[ 1.723455] dracut: Remounting /dev/disk/by-uuid/d29b972f-3568-4db6-bf96-d2702ec83ab6 with -o ro
>
>VM eventually comes up with RO rootfs. Shutdown path sees these kinds of error:
>
>[ 16.034271] EXT4-fs (vda3): previous I/O error to superblock detected
>[ 16.041699] end_request: I/O error, dev vda, sector 1030144
>[ 16.042679] EXT4-fs error (device vda3): ext4_remount:4418: Abort forced by user
>[ 16.046465] EXT4-fs (vda3): re-mounted. Opts: (null)
>
>Full glusterfs log is too big to go with this mail. I can see this kind of errors:
>
>[2012-08-07 05:56:29.598636] T [io-cache.c:128:ioc_inode_flush] 0-test-io-cache: locked inode(0xd2d2c0)
>[2012-08-07 05:56:29.598642] T [io-cache.c:132:ioc_inode_flush] 0-test-io-cache: unlocked inode(0xd2d2c0)
>[2012-08-07 05:56:29.598651] T [fuse-bridge.c:2113:fuse_writev_cbk] 0-glusterfs-fuse: 319: WRITE => 4096/4096,4956618752/10737418240
>[2012-08-07 05:56:29.598749] T [fuse-bridge.c:2293:fuse_fsync_resume] 0-glusterfs-fuse: 320: FSYNC 0xd6f060
>[2012-08-07 05:56:29.598845] W [write-behind.c:2809:wb_fsync] 0-test-write-behind: write behind wb_inode pointer is not stored in context of inode(0x7f9cb1eab0c0), returning EBADFD
>[2012-08-07 05:56:29.598858] W [fuse-bridge.c:1063:fuse_err_cbk] 0-glusterfs-fuse: 320: FSYNC() ERR => -1 (File descriptor in bad state)
>[2012-08-07 05:56:29.605831] T [fuse-bridge.c:2154:fuse_write_resume] 0-glusterfs-fuse: 322: WRITE (0xd6f060, size=4096, offset=527433728)
>
>Just to clarify, I am not using GlusterFS block backend in QEMU (via libgfapi) here but instead using it with normal FUSE mount.
>
>Regards,
>Bharata.
>--
>http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
https://lists.nongnu.org/mailman/listinfo/gluster-devel