Shehjar Tikoo wrote:
The answer to that lies in another question: "why would anyone use a standardized NFS over GlusterFS?" Here are three points from pnfs.com on why:
1. Ensures interoperability among vendor solutions
2. Allows choice of best-of-breed products
3. Eliminates risks of deploying proprietary technology
Argument 3 is largely shot down by the fact that "proprietary technology" is still deployed on the server side. In my experience with large, change-resistant entities, the difficulty is in getting something approved for use in the first place. Once you get the back end approved, the front end isn't nearly as difficult.
And besides, the risk is all down to the implementation and the maturity thereof, not the protocol itself. Deploying a new, immature NFS server is no more risky than deploying a different, equally immature protocol stack.
I'll add a fourth one: familiarity of the protocol. This is very important, especially in the storage world, where conservatism is preferred over fancy technology. NFS has been tried and tested for over two decades.
But not backed by a proprietary file system that has only been around for a short time. These arguments leave out the full context. You are, in effect, saying that the NFS client itself won't have to be debugged, when in fact you are adding an additional layer of complexity to debug exactly where it is most counter-productive. The glusterfs client still has to be debugged underneath on the server side; it is merely glazed over by the NFS export translator. The conservatism you mention exists for one reason alone - stability - and I don't see how it can possibly be argued that adding another layer of complexity would somehow aid stability.
I accept that there is no counter-argument against "paying customers absolutely, explicitly want this feature" - I'm merely questioning the purely technical aspect of the approach.
Gordan