On 06/08/2012 01:06 PM, Boaz Harrosh wrote:
>
> Non-RPC layout drivers call nfs_writeback_done() as part
> of their IO completion (through pnfs_ld_write_done()).
>
> Inside nfs_writeback_done() there is code that does:
>
> 	else if (resp->count < argp->count) {
> 		...
>
> 		/* This a short write! */
> 		nfs_inc_stats(inode, NFSIOS_SHORTWRITE);
>
> 		... /* Prepare the remainder */
>
> 		rpc_restart_call_prepare(task);
> 	}
>
> But for non-RPC LDs (objects, blocks) there is no task->tk_ops,
> and this code will crash.
>
<snip>

Hi Trond,

Sorry for the late response; I was sick (in hospital and away).

I must push these fixes to Linus ASAP; I want to push them tomorrow.
They are for 3.5-rc7 and CCed to stable@. I would love to push the
objlayout patches as one push as well. Please give me your blessing
and ACK so I can do this.

I have done a small rebase over 3.5-rc5 and a few cleanups, mainly
Peng's comment about ZERO_PAGE. You can see the pending push request
here:

http://git.open-osd.org/gitweb.cgi?p=linux-open-osd.git;a=shortlog;h=refs/heads/for-linus

This is the list of patches:

  ore: Fix NFS crash by supporting any unaligned RAID IO
  ore: Remove support of partial IO request (NFS crash)
  ore: Unlock r4w pages in exact reverse order of locking

The three ORE patches above actually fix the NFS crash.

  pnfs-obj: don't leak objio_state if ore_write/read ...

The patch above fixes an important memory leak in the error case.

  pnfs-obj: Fix __r4w_get_page when offset is beyond ...
  NFS41: add pg_layout_private to nfs_pageio_descriptor
  pnfs-obj: Better IO pattern in case of unaligned offset

I would also love to push these three; please advise.

Please look into this ASAP, as we are already so late because of my
absence. If you inspect them carefully you will see that, other than
the fix, they are low risk, and I have tested them extensively. (I
pushed them to linux-next and will let them cook for 48 hours.)

Thanks for your help,
Boaz
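
For reference, a minimal, self-contained sketch of the failure mode
described in the quote above. Every name below (call_ops,
restart_call_prepare, writeback_done) is an illustrative stand-in,
not the real NFS/RPC code, and the tk_ops guard only demonstrates the
crash condition; the actual fix is in the ORE patches listed above:

	/*
	 * Illustrative stand-ins only -- not the real NFS/RPC code.
	 * struct task mimics the relevant part of struct rpc_task:
	 * non-RPC layout drivers (objects, blocks) leave tk_ops NULL.
	 */
	#include <stdio.h>

	struct call_ops {
		void (*rpc_call_prepare)(void *task);
	};

	struct task {
		const struct call_ops *tk_ops;	/* NULL for non-RPC LDs */
	};

	static void restart_call_prepare(struct task *t)
	{
		/* Like rpc_restart_call_prepare(): assumes tk_ops is valid */
		t->tk_ops->rpc_call_prepare(t);	/* NULL deref if tk_ops == NULL */
	}

	static void writeback_done(struct task *t, int count, int expected)
	{
		if (count < expected) {		/* a short write */
			if (t->tk_ops)		/* guard shown for illustration only */
				restart_call_prepare(t);
			else
				printf("short write on non-RPC task: cannot restart\n");
		}
	}

	int main(void)
	{
		struct task ld_task = { .tk_ops = NULL };

		/* Unguarded, this path would dereference NULL and crash */
		writeback_done(&ld_task, 100, 200);
		return 0;
	}

Without the guard, the short-write path dereferences a NULL tk_ops
exactly as the quoted nfs_writeback_done() code does when reached
through pnfs_ld_write_done() from an objects or blocks layout driver.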