Hi

On Wed, Mar 25, 2015 at 7:12 PM, Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
> On Wed, Mar 25, 2015 at 10:29 AM, David Herrmann <dh.herrmann@xxxxxxxxx> wrote:

[...]

>>> I could be wrong about the lack of use cases. If so, please enlighten me.
>>
>> We have several dbus APIs that allow clients to register as a special
>> handler/controller/etc. (e.g., see systemd-logind TakeControl()). The
>> API provider checks the privileges of a client on registration and
>> then just tracks the client ID. This way, the client can be privileged
>> when asking for special access, then drop privileges and still use the
>> interface. You cannot re-connect in between, as the API provider
>> tracks your bus ID. Without message metadata, all your (other) calls
>> on this bus would always be treated as privileged. We *really* want to
>> avoid this.
>
> Connect twice?

The remote peer might cache your connection ID to track you. You have to
use the same connection to talk to that peer in this scenario. That's how
D-Bus1 is used today, and we have to follow its semantics.

> You *already* have to reconnect or connect twice because you have
> per-connection metadata. That's part of my problem with this scheme
> -- you support *both styles*, which seems like it'll give you most of
> the downsides of both without the upsides.

Not necessarily. Connection metadata describes the state at the time you
connected to the bus. If someone asks for this information, they will get
exactly that. In this model, you cannot drop privileges if you need to be
privileged during setup.

If someone asks for per-message metadata, they should not also ask for
per-connection creds. It is not the information they are looking for, so it
will not match the data at the time the message was sent. There is no
immediate need to make both match.

For security decisions, we mandate per-message creds. Per-connection creds
are for backwards-compatibility with dbus1 and for passive introspection of
bus connections.
>> Another example is logging, where we want exact data at the time a
>> message is logged. Otherwise, the data is useless.
>
> Why?
>
> No, really, why is exact data at the time of logging so important? It
> sounds nice, but I really don't see it.

Example: without message metadata, you don't know which thread sent a log
message. In a multi-threaded application, that's incredibly useful
information. After all, logging is all about correct data. Logging creds
that were not effective at the time the message was sent is futile.

>> kdbus is a _bus_, not a p2p channel. Thus, a peer may talk to multiple
>> destinations, and it may want to look different to each of them. DBus
>> method-calls allow 'syscall'-ish behavior when calling into other
>> processes. We *want* to be able to drop privileges after doing process
>> setup. We want further bus-calls to no longer be treated as privileged.
>
> You could have an IOCTL that re-captures your connection metadata.

I don't see how this makes the model easier or more predictable. On the
contrary, per-connection metadata would no longer be compatible with
UDS / dbus1, nor would the per-message metadata concept be reliable.

>> Furthermore, DBus was designed to allow peers to track other peers
>> (which is why it always had the NameOwnerChanged signal). This is an
>> essential feature that simplifies access-management considerably, as
>> you can cache it together with the unique name of a peer. We only open
>> a single connection to a bus. glib, libdbus, efl, ell, qt, sd-bus, and
>> others use cached bus-connections that are shared by all code of a
>> single thread. Hence, the bus connection is kind of part of the process
>> itself, like stdin/stdout. Without message metadata, it is impossible
>> to ever drop privileges on a bus without losing all state.
>
> See above about an IOCTL that re-captures your connection metadata.
> Again, you seem to be arguing that per-connection metadata is bad, but
> you still have an implementation of per-connection metadata, so you
> still have all these problems.

I don't see why we get the problems of per-connection metadata. Just
because you _can_ use it doesn't mean you should use it for all imaginable
use-cases. The same goes for reading information from /proc: there are
valid use-cases for doing so, but also a lot of cases where it will not
provide the information you want.

> I'm actually okay with per-message metadata in principle, but I'd like
> to see evidence (with numbers, please) that a send+recv of per-message
> metadata is *not* significantly slower than a recv of already-captured
> per-connection metadata. If this is in fact the case, then maybe you
> should trash per-connection metadata instead and the legacy
> compatibility code can figure out a way to deal with it. IMO that
> would be a pretty nice outcome, since you would never have to worry
> whether your connection to the bus is inadvertently privileged.

Per-message metadata makes SEND about 25% slower if you transmit the full
set of all possible information, but only about 3% slower if you transmit
just PIDs+UIDs. The expensive items are the cgroup path and the exe path.
If a service needs that information, however, and it is not guaranteed to
be up-to-date, the service _will_ go and look it up in /proc or somewhere
else, which is certainly a whole lot more expensive than the code in kdbus.

In general, there seem to be a number of misconceptions in this thread
about what kdbus is supposed to be. We're not inventing something new here
with a clean slate; we're moving parts of an existing implementation that
has tons of users into the kernel, in order to fix issues that cannot be
fixed otherwise in userspace (most notably, the race gaps that exist when
retrieving per-message metadata). Therefore, we have to keep existing
semantics stable, otherwise the exercise is somewhat pointless.
Thanks
David
--
To unsubscribe from this list: send the line "unsubscribe linux-api" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html