On Tue, 6 Jan 2015 14:39:49 -0500 "J. Bruce Fields" <bfields@xxxxxxxxxxxx> wrote:

> On Tue, Jan 06, 2015 at 06:39:57PM +0100, Christoph Hellwig wrote:
> > On Tue, Jan 06, 2015 at 12:16:58PM -0500, J. Bruce Fields wrote:
> > > > +file system must sit on shared storage (typically iSCSI) that is accessible
> > > > +to the clients as well as the server. The file system needs to either sit
> > > > +directly on the exported volume, or on a RAID 0 using the MD software RAID
> > > > +driver with the version 1 superblock format. If the filesystem uses sits
> > > > +on a RAID 0 device the clients will automatically stripe their I/O over
> > > > +multiple LUNs.
> > > > +
> > > > +On the server pNFS block volume support is automatically if the file system
> > >
> > > s/automatically/automatically enabled/.
> > >
> > > So there's no server-side configuration required at all?
> >
> > The only required configuration is the fencing helper script if you
> > want to be able to fence a non-responding client. For simple test setups
> > everything will just work out of the box.
>
> I think we want at a minimum some kind of server-side "off" switch.
>
> If nothing else it'd be handy for troubleshooting. ("Server crashing?
> Could you turn off pnfs blocks and try again?")
>
> --b.

Or maybe an "on" switch? We have some patches (not posted currently)
that add a "pnfs" export option. Maybe we should add that and only
enable pnfs on exports that have that option present?

-- 
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
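
For reference, a minimal sketch of the RAID 0 arrangement the quoted
documentation describes; the device names (/dev/sdb, /dev/sdc), the mount
point, and the choice of XFS are placeholders for illustration, not taken
from the patch series:

    # Create a striped MD RAID 0 array with a version-1.x superblock.
    # Both LUNs must be reachable from the clients as well as the server.
    mdadm --create /dev/md0 --level=0 --metadata=1.2 \
          --raid-devices=2 /dev/sdb /dev/sdc

    # Put the exported filesystem directly on the MD device, so clients
    # stripe their I/O over both LUNs.
    mkfs.xfs /dev/md0
    mount /dev/md0 /export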
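
Christoph's fencing helper is the one piece of required configuration. A
rough sketch of such a script follows; the calling convention (client
address in $1, device name without the /dev/ prefix in $2) and the
sg_inq-based serial lookup are assumptions here, not the interface from
the posted patches:

    #!/bin/sh
    # Assumed interface: invoked by the server with the non-responding
    # client's IP address ($1) and the device to fence ($2, no /dev/).
    CLIENT="$1"
    DEV="/dev/$2"

    # Look up the LUN serial (SCSI EVPD page 0x80) and log the event.
    # A real script would revoke the client's access to the LUN here,
    # e.g. by updating the iSCSI target's ACLs.
    SERIAL=$(sg_inq --page=0x80 ${DEV} | \
             awk -F ': ' '/Unit serial number/ {print $2}')
    echo "fencing client ${CLIENT} (serial ${SERIAL})" \
         >> /var/log/pnfsd-fence.log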
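
If the per-export "on" switch were adopted, usage might look like the
following /etc/exports entries; the "pnfs" option name comes from the
unposted patches mentioned above, so treat the exact syntax as an
assumption:

    # pNFS block layouts handed out only where "pnfs" is present
    /export/shared  *.example.com(rw,sync,pnfs,no_subtree_check)
    /export/plain   *.example.com(rw,sync,no_subtree_check)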