I see similar failures when installing a distribution (and not
necessarily running FIO) on a VM image present on the BD backend too...

On Mon, Oct 29, 2012 at 3:16 PM, Bharata B Rao <bharata.rao@xxxxxxxxx> wrote:
> Hi,
>
> With the GlusterFS block backend in QEMU, running FIO from inside the VM
> causes a segfault like this...
>
> (gdb) bt
> #0  0x00007ffff5ff03e8 in __memcpy_ssse3_back () from /lib64/libc.so.6
> #1  0x00007fffeb95cd5e in iov_unload (buf=0x7fffeaeec000 "",
>     vector=0x7fff24001610, count=1)
>     at ../../../../libglusterfs/src/common-utils.h:343
> #2  0x00007fffeb95f1c1 in __wb_collapse_small_writes (holder=0x7fff08001c40,
>     req=0x7fff24001c70) at write-behind.c:903
> #3  0x00007fffeb95f427 in __wb_preprocess_winds (wb_inode=0x7fff14000a10)
>     at write-behind.c:979
> #4  0x00007fffeb95f6e5 in wb_process_queue (wb_inode=0x7fff14000a10)
>     at write-behind.c:1064
> #5  0x00007fffeb95fbbe in wb_writev (frame=0x5555565cdac4, this=0x7fffd4003e80,
>     fd=0x555556b55d6c, vector=0x555559e15180, count=1, offset=2493120512,
>     flags=0, iobref=0x7fff240014c0, xdata=0x0) at write-behind.c:1160
> #6  0x00007fffeb75394b in ra_writev (frame=0x5555565d2cbc, this=0x7fffd40049e0,
>     fd=0x555556b55d6c, vector=0x555559e15180, count=1, offset=2493120512,
>     flags=0, iobref=0x7fff240014c0, xdata=0x0) at read-ahead.c:682
> #7  0x00007fffeb5429c9 in ioc_writev (frame=0x5555565cd564, this=0x7fffd4005380,
>     fd=0x555556b55d6c, vector=0x555559e15180, count=1, offset=2493120512,
>     flags=0, iobref=0x7fff240014c0, xdata=0x0) at io-cache.c:1250
> #8  0x00007fffeb328c78 in qr_writev (frame=0x5555565cdecc, this=0x7fffd4005e30,
>     fd=0x555556b55d6c, vector=0x555559e15180, count=1, off=2493120512,
>     wr_flags=0, iobref=0x7fff240014c0, xdata=0x0) at quick-read.c:1525
> #9  0x00007fffeb11925f in mdc_writev (frame=0x5555565ce17c, this=0x7fffd4006880,
>     fd=0x555556b55d6c, vector=0x555559e15180, count=1, offset=2493120512,
>     flags=0, iobref=0x7fff240014c0, xdata=0x0) at md-cache.c:1420
> #10 0x00007fffeaf0789b in io_stats_writev (frame=0x5555565cd96c,
>     this=0x7fffd40072d0, fd=0x555556b55d6c, vector=0x555559e15180, count=1,
>     offset=2493120512, flags=0, iobref=0x7fff240014c0, xdata=0x0)
>     at io-stats.c:2091
> #11 0x00007ffff5c79ed0 in syncop_writev (subvol=0x7fffd40072d0,
>     fd=0x555556b55d6c, vector=0x555559e15180, count=1, offset=2493120512,
>     iobref=0x7fff240014c0, flags=0) at syncop.c:1096
> #12 0x00007ffff72ac51e in glfs_pwritev (glfd=0x555556b6ef50,
>     iovec=0x5555565691e0, iovcnt=6, offset=2493120512, flags=0)
>     at glfs-fops.c:543
> #13 0x00007ffff72ac04c in glfs_io_async_task (data=0x555556a1e660)
>     at glfs-fops.c:396
> #14 0x00007ffff5c732ab in synctask_wrap (old_task=0x5555571145e0) at syncop.c:125
> #15 0x00007ffff5ef0360 in ?? () from /lib64/libc.so.6
> #16 0x0000000000000000 in ?? ()
>
> git bisect points to
>
> commit c903de38da917239fe905fc6efa1f413d120fc04
> Author: Anand Avati <avati@xxxxxxxxxx>
> Date:   Thu Sep 13 22:26:59 2012 -0700
>
>     write-behind: implement causal ordering and other cleanup
>
> I hadn't seen this earlier because I had resorted to the
> performance.write-behind=off setting for my test volume.
>
> Regards,
> Bharata.
> --
> http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/

--
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
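For anyone reading the backtrace: frames #0-#2 show write-behind's __wb_collapse_small_writes merging two adjacent small write requests by memcpy'ing (via iov_unload) the scattered iovec payload of one request into the buffer of another, and the crash is inside that memcpy. The sketch below is only an illustration of what that collapse step conceptually does and of the size check that must hold for the copy to be safe; flatten_iov and iov_total are hypothetical names, not GlusterFS's actual API (the real iov_unload lives in libglusterfs/src/common-utils.h):

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>

/* Total payload size of an iovec array. */
static size_t iov_total(const struct iovec *vector, int count)
{
    size_t len = 0;
    for (int i = 0; i < count; i++)
        len += vector[i].iov_len;
    return len;
}

/* Copy the scattered iovec payload into one contiguous buffer.
 * Returns the number of bytes copied, or 0 if the destination is
 * too small -- the invariant that must hold for the memcpy to be
 * safe. A segfault like the one in frame #0 typically means the
 * destination buffer was smaller than the data being unloaded. */
static size_t flatten_iov(char *buf, size_t bufsize,
                          const struct iovec *vector, int count)
{
    if (iov_total(vector, count) > bufsize)
        return 0; /* would overflow the destination */

    size_t off = 0;
    for (int i = 0; i < count; i++) {
        memcpy(buf + off, vector[i].iov_base, vector[i].iov_len);
        off += vector[i].iov_len;
    }
    return off;
}

int main(void)
{
    char a[] = "hello ", b[] = "world";
    struct iovec v[2] = {
        { .iov_base = a, .iov_len = 6 },
        { .iov_base = b, .iov_len = 5 },
    };
    char buf[16] = { 0 };

    assert(flatten_iov(buf, sizeof(buf), v, 2) == 11);
    assert(memcmp(buf, "hello world", 11) == 0);

    char tiny[4];
    assert(flatten_iov(tiny, sizeof(tiny), v, 2) == 0); /* rejected */
    return 0;
}
```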
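The workaround Bharata mentions (performance.write-behind=off) can be applied per volume with the gluster CLI; "testvol" below is a placeholder for the affected volume's name. This is a configuration command, shown here only as a stopgap until the bisected commit is fixed:

```shell
# Disable the write-behind translator on the affected volume.
gluster volume set testvol performance.write-behind off

# Confirm the option took effect.
gluster volume info testvol | grep write-behind
```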