On 12/5/06, Rob Ross <rross@xxxxxxxxxxx> wrote:
Hi, I agree that it is not feasible to add new system calls every time somebody has a problem, and we don't take adding system calls lightly. However, in this case we're talking about an entire *community* of people (high-end computing), not just one or two people. Of course it may still be the case that that community is not important enough to justify the addition of system calls; that's obviously not my call to make!
I have the feeling that the openg stuff is being rushed without looking into all the solutions that don't require changes to the current interface. I don't see any numbers showing where exactly the time is spent. Is opening too slow because of the number of requests the file server suddenly has to respond to? Would an operation that looks up multiple names instead of a single name be good enough? How much time is spent on opening the file once you have resolved the name?
I'm sure that you meant more than just to rename openg() to lookup(), but I don't understand what you are proposing. We still need a second call to take the results of the lookup (by whatever name) and convert that into a file descriptor. That's all the openfh() (previously named sutoc()) is for.
The idea is that lookup doesn't open the file, it just does the name resolution. The actual opening is done by openfh (or whatever you call it next :). I don't think it is a good idea to introduce another way of addressing files on the file system at all, but if you still decide to do it, it makes more sense to separate the name resolution from the operations you want to do on the file (at the moment only the open operation, but who knows what somebody will think of next ;).
I think the subject line might be a little misleading; we're not just talking about NFS here. There are a number of different file systems that might benefit from these enhancements (e.g. GPFS, Lustre, PVFS, PanFS, etc.).
I think that the main problem is that all these file systems resolve a path name one directory at a time, bringing the server to its knees with the huge number of requests. I would like to see what the performance is if you a) cache the last few hundred lookups on the server side, and b) modify the VFS and the file systems to support multi-name lookups. Just assume for a moment that there is no way to get these new operations in (which is probably going to be true anyway :). What other solutions can you think of? :)

Thanks,
Lucho