On Mon, 2011-02-07 at 14:44 -0800, Joel Becker wrote:
> On Mon, Feb 07, 2011 at 10:38:45PM +0100, Emmanuel Florac wrote:
> > On Mon, 07 Feb 2011 12:09:39 -0800 you wrote:
> >
> > The configuration files are actually programs, because even a simple
> > bash script is a program. I'm getting uneasy. These are not your
> > regular, non-executable config files.
>
> This is independent of configfs vs other methods, and probably wants
> to be visited on its own. More on this below.
>
> > > Having python code walk this layout and output the exact running
> > > configuration is how we save -> reinstate the running config, and
> > > it's really quite trivial with configfs.
> >
> > Please understand that at this point, anybody lacking a deep
> > understanding of kernel innards, like me or 99.9% of Linux system
> > admins or users, really doesn't give a fsck. I actually understand the
>
> Nor should they. That's kind of the point. Again, see below.
>
> > Aaaaargh. Are you trying to convince me that you've done the Windows
> > registry right this time? I don't want no stinking object-oriented
> > library to manage my SCSI targets.
> >
> > But finally, OK, I see the point for infrastructure. Now tell me,
> > could we avoid more virtual fs creep in the future?
>
> Configfs is not a registry.
>
> Let me try to restate the issue. The kernel doesn't persist state;
> whether DM maps, iSCSI target configs, or network interface addresses,
> some userspace entity has to tell the kernel about it on each boot. We
> have multiple methods of doing this: proc/sys, sysfs, configfs,
> ioctl(2)s, etc. So if you want to have your system advertise sdb1 as an
> iSCSI LUN, you have to tell the target code.
>
> configfs was designed for exactly this case. You want to create a
> kernel object (eg a "target" or a "LUN mapping"), and userspace drives
> this. You might have multiple objects that need to be connected
> ("targets" and "LUNs advertised by this target"), and configfs handles
> this too.
>
> You could do this with an ioctl(2), but you'd still have to have some
> persistent configuration in userspace so that the ioctl(2) gets called
> on each reboot to rebuild the mapping. This is exactly analogous to
> "mdadm --assemble" running on each boot. mdadm(8) uses an ioctl(2), but
> it still needs /etc/mdadm.conf to know what to send down the ioctl(2).
> Similarly, /etc/sysctl.conf tells sysctl(8) what to store on boot.
>
> So any of these services have to provide some userspace program that
> reads a configuration and knows how to send it down to the kernel.
> They should be isolating the "how I send it down to the kernel" from
> your average admin. Whether that config is a simple text file or a
> complex scripty mess is independent of this. One would hope that, like
> mdadm(8), you have a simple human-readable text file that a program
> turns into the proper kernel invocation.

Completely agreed. I prefer having a /bin/sh walkable layout of target
kernel data structures using struct config_group references, and using
easily maintainable interpreted code running in userspace to handle
changes to that layout. IOCTLs are fine for nuts-and-bolts pieces like
mdadm, but I think trying to provide an easily expandable control
interface across multiple Linux kernel modules w/ internal data
structure reference counting is a pretty painful exercise when it comes
to actually adding new fabric module kernel code and new <cough>
flexible userspace code.
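
To make the /bin/sh point concrete, here is a rough sketch of creating
and connecting target objects through configfs. The directory and
attribute names are illustrative of the v4 layout rather than exact,
and the IQN is made up:

    #!/bin/sh
    CFS=/sys/kernel/config/target

    # 1) Create a backstore object. mkdir instantiates the kernel's
    #    struct config_group; echo fills in its attributes.
    mkdir -p $CFS/core/iblock_0/my_dev
    echo "udev_path=/dev/sdb" > $CFS/core/iblock_0/my_dev/control
    echo 1 > $CFS/core/iblock_0/my_dev/enable

    # 2) Create an iSCSI target, TPG, and LUN under the fabric module.
    IQN=iqn.2011-02.org.example:target0   # made-up example IQN
    mkdir -p $CFS/iscsi/$IQN/tpgt_1/lun/lun_0

    # 3) Connect the two objects: a configfs symlink expresses
    #    "this LUN exports that backstore".
    ln -s $CFS/core/iblock_0/my_dev $CFS/iscsi/$IQN/tpgt_1/lun/lun_0/

Everything here is plain mkdir/echo/ln -s, which is exactly why a
simple interpreter walking the tree can both drive and read back the
whole configuration.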
> It's great that Nick has fancy tools to fly around the target
> configfs space, but that's all fluff for people writing and debugging
> the tools. Same with mdadm(8) and the MD ioctl(2)s. You don't want to
> know the details of the ioctl(2) call; you just want mdadm(8) to work.

So wrt the last point, I would like to clarify one item about the RTS
userspace tools for the LIO target mode v4 kernel code.

We (RisingTideSystems & Linux-iSCSI.org) will be publicly releasing our
single-node high-level shell+CLI (rtsadmin) and object-oriented python
library (rtslib) to drive real-time management of 'for-38' mainline
/sys/kernel/config/target/ configfs code. These are the userspace
packages that RTS/LIO customers+partners+friends have been using in
production with iscsi_target_mod and tcm_loop in v3.x
lio-core-backports.git before the mainline adoption of TCM v4 code.

The public documentation for both of these is available here:

http://www.risingtidesystems.com/doc/rtsadmin/html/rtsadmin_reference.html
http://www.risingtidesystems.com/doc/rtslib/html/

Our code will also include generic support for all v4-compatible
target_core_fabric_configfs.c fabric modules, and will be released in
git trees as the 'for-39' merge window opens later this spring.

We are currently offering rtsadmin-frozen packages for v4.0-rc and
v3.x LIO kernel code on a case-by-case basis to a number of folks, and
we would be happy to provide new forward-looking TCM fabric module
developers with this high-level shell. It can be used directly during
their own fabric port I/O bring-up efforts, driving the generic
fabric-level struct config_group backends under
/sys/kernel/config/target/core/$HBA/$DEV/ on for-38 code.

Best Regards,

--nab
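
P.S. For anyone who wants to see how walkable the layout is before the
tools land publicly, here is a minimal sketch of the save side of the
save -> reinstate cycle, again with no guarantee that it matches the
final for-38 attribute names:

    #!/bin/sh
    # Walk the target configfs tree and print every readable attribute
    # as a "path = value" pair. Replaying such a dump (mkdir the
    # groups, echo the values back) is essentially the reinstate half.
    TARGET=/sys/kernel/config/target
    find "$TARGET" -type f | while read -r attr; do
        # Skip write-only or otherwise unreadable attributes.
        val=$(cat "$attr" 2>/dev/null) || continue
        printf '%s = %s\n' "$attr" "$val"
    done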