On Fri, 2011-03-25 at 12:31 +0100, Bart Van Assche wrote:
> On Fri, Mar 25, 2011 at 1:18 AM, James Bottomley
> <James.Bottomley@xxxxxxx> wrote:
> > OK, so what about an upcall to userspace to create the necessary
> > directories?  That could be driven by the kernel and still not
> > require any implementation in configfs.
>
> Hi James,
>
> I might have missed something, but which upcall mechanism are you
> referring to?  Personally I'm not fond of the upcall concept because,
> as far as I can see, any synchronous upcall mechanism can potentially
> be used to trigger lock inversions not detectable by the PROVE_LOCKING
> mechanism.

Any of them ... we have relay, netlink, uevent, etc.  I don't see how
there could be a lock inversion: upcalls are by definition
asynchronous.

The use case for ibmvscsi, which seems the most pressing, is simply
that on load the driver unpacks the config information, does the
upcall and exits.  The daemon creates the necessary directories from
that information, and the vscsi interface is functional once
everything is set up.  There are no locking problems in that use case.

> Regarding kernel-space driven directory creation in configfs: I have
> been wondering whether it is possible to implement any configuration
> filesystem such that directories can be created synchronously from
> kernel space without triggering lock inversion.  I don't see this as
> a configfs limitation but as an inherent limitation of a
> configuration filesystem.  In a similar way, a self-declarative
> kernel interface like sysfs has the limitation that it is not
> possible to add configfs-style configuration functionality without
> triggering lock inversion.  Both sysfs and configfs have important
> advantages over mechanisms like ioctl() and netlink, but there are
> some disadvantages too.

Explain first what lock inversion problems you see ... those usually
only happen if you have an in-kernel upcall thread waiting for
completion.

James
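
For concreteness, here is a minimal sketch of what the kernel side of
such a fire-and-forget upcall could look like, assuming a uevent-based
mechanism is the one chosen.  The kobject name, event payload, and
environment keys below are invented for illustration; this is not the
actual ibmvscsi code:

	#include <linux/kobject.h>
	#include <linux/module.h>

	/*
	 * Illustrative only: on module load, unpack the config and hand
	 * it to userspace via a uevent.  The daemon receiving the event
	 * is expected to mkdir the corresponding configfs directories.
	 */
	static struct kobject *vscsi_kobj;	/* hypothetical kobject */

	static int __init vscsi_example_init(void)
	{
		/* Strings the daemon will see; the keys are made up. */
		char *envp[] = {
			"VSCSI_ACTION=create",
			"VSCSI_TARGET=target0",
			NULL
		};

		vscsi_kobj = kobject_create_and_add("vscsi_example",
						    kernel_kobj);
		if (!vscsi_kobj)
			return -ENOMEM;

		/* Asynchronous by design: fire the event and return. */
		kobject_uevent_env(vscsi_kobj, KOBJ_CHANGE, envp);
		return 0;
	}

	static void __exit vscsi_example_exit(void)
	{
		kobject_put(vscsi_kobj);
	}

	module_init(vscsi_example_init);
	module_exit(vscsi_example_exit);
	MODULE_LICENSE("GPL");

The daemon side is then ordinary userspace: it receives the event (via
udev or a netlink uevent listener) and issues plain mkdir(2) calls
under the driver's configfs mount point, with no kernel lock held
anywhere while it does so.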
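
To pin down the synchronous-versus-asynchronous distinction being
argued about: with call_usermodehelper() the same helper can be invoked
either way, and only the blocking variant can participate in an
inversion.  A sketch, with a made-up helper path and arguments:

	#include <linux/kmod.h>

	static int run_config_helper(void)
	{
		char *argv[] = { "/sbin/vscsi-config", "create", NULL };
		char *envp[] = { "HOME=/", "PATH=/sbin:/bin", NULL };

		/*
		 * Asynchronous: fire and forget.  The caller cannot
		 * deadlock against the helper because it never waits
		 * for it.
		 *
		 * Passing UMH_WAIT_PROC instead would block this kernel
		 * thread until the helper exits; if the helper then
		 * touches a configfs entry whose creation needs a lock
		 * we currently hold, that is the inversion being
		 * worried about.
		 */
		return call_usermodehelper(argv[0], argv, envp,
					   UMH_NO_WAIT);
	}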