Hi,

On Mon, 2022-01-31 at 17:33 -0800, Luis Chamberlain wrote:
> It would seem we keep tacking on things with ioctls for the block
> layer and filesystems. Even for new trendy things like io_uring [0].
> For a few years I have found this odd, and have slowly started
> asking folks why we don't consider alternatives like a generic
> netlink family. I've at least been told that this is desirable
> but no one has worked on it. *If* we do want this I think we just
> not only need to commit to do this, but also provide a target. LSFMM
> seems like a good place to do this.
>
> Possible issues? Kernels without CONFIG_NET. Is that a deal breaker?
> We already have a few filesystems with their own generic netlink
> families, so not sure if this is a good argument against this.
>
> mcgrof@fulton ~/linux-next (git::master)$ git grep genl_register_family fs
> fs/cifs/netlink.c:        ret = genl_register_family(&cifs_genl_family);
> fs/dlm/netlink.c:         return genl_register_family(&family);
> fs/ksmbd/transport_ipc.c: ret = genl_register_family(&ksmbd_genl_family);
> fs/quota/netlink.c:       if (genl_register_family(&quota_genl_family) != 0)
> mcgrof@fulton ~/linux-next (git::master)$ git grep genl_register_family drivers/block
> drivers/block/nbd.c:      if (genl_register_family(&nbd_genl_family)) {
>
> Are there other reasons to *not* use generic netlink for new features?
> For folks with experience using generic netlink on the block layer and
> their own fs, any issues or pain points observed so far?
>
> [0] https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/commit/?h=nvme-passthru-wip.2&id=d11e20acbd93fbbcdaf87e73615cdac53b814eca
>
> Luis

I think it depends very much on what the interface is, as to which of the available APIs (or even creating a new one) is the most appropriate option. Netlink was investigated a little while back as a potential interface for filesystem notifications.
The main reason for this is that it solves one of the main issues there, which is the potentially unbounded number of notifications that might be issued into a queue of finite capacity. Netlink was originally designed for network routing messages, which have a similar issue. As such, a mechanism was built in to allow dropping of messages when the queue overflows, but in a way that makes it known that this has happened, so one can then resync from the kernel's information. For things such as mount notifications, which can be numerous in various container scenarios, this is an important requirement.

However, it is also clear that netlink has some disadvantages too. The first of these is that it is aligned to the network subsystem in terms of namespaces. Since the kernel has no concept of a container per se, the fact that netlink is scoped to the network namespace rather than a filesystem namespace makes using it with filesystems more difficult. Another issue is that netlink has gained a number of additional features and layers over the years, leading to some overhead that is perhaps not needed in applications on the filesystem side.

That is why, having carefully considered the options, David Howells created a new interface for the notifications project. It solves the problems mentioned above, while still retaining the advantage of being able to deal with producer/consumer problems.

I'm not sure from the original posting, though, exactly which interfaces you had in mind when proposing this topic. Depending on what they are, it is possible that another solution may be more appropriate. I've included the above mostly as a way to explain what has already been considered in terms of netlink pros/cons for one particular application.

Steve.