On Sat, Jul 24, 2021 at 2:51 PM Florian Westphal <fw@xxxxxxxxx> wrote:
> Paul Moore <paul@xxxxxxxxxxxxxx> wrote:
> > > Two main drivers on my side:
> > > - there are use cases/deployments that do not use them.
> > > - moving them around was doable in terms of required changes.
> > >
> > > There are no "slow-path" implications on my side. For example, the
> > > vlan_* fields are very critical performance-wise if the traffic is
> > > tagged. But surely there are busy servers not using tagged traffic
> > > which will enjoy the reduced cacheline footprint, and this changeset
> > > will not impact the first case negatively.
> > >
> > > WRT the vlan example, secmark and nfct require an extra conditional
> > > to fetch the data. My understanding is that such an additional
> > > conditional is not measurable performance-wise when benchmarking the
> > > security modules (or conntrack), because they have to do much more
> > > interesting things after fetching a few bytes from an already hot
> > > cacheline.
> > >
> > > Not sure if the above somehow clarifies my statements.
> > >
> > > As for expanding secmark to 64 bits, I guess that could be an
> > > interesting follow-up discussion :)
> >
> > The intersection between netdev and the LSM has a long and somewhat
> > tortured past, with each party making sacrifices along the way to get
> > where we are today. It is far from perfect, at least from an LSM
> > perspective, but it is what we've got, and since performance is
> > usually used as a club to beat back any changes proposed by the LSM
> > side, I would like to object to these changes negatively impacting
> > LSM performance without some concession in return. It has been a
> > while since Casey and I have spoken about this, but I think the
> > preferred option would be to exchange the current __u32
> > "sk_buff.secmark" field for a void* "sk_buff.security" field, like so
> > many other kernel-level objects.
> > Previous objections have eventually boiled down to the additional
> > space in the sk_buff for the extra bits (there is some additional
> > editorializing that could be done here, but I'll refrain), but based
> > on the comments thus far in this thread it sounds like perhaps we can
> > now make a deal here: move the LSM field down to a "colder" cacheline
> > in exchange for converting the LSM field to a proper pointer.
> >
> > Thoughts?
>
> Is there a summary discussion somewhere wrt. what exactly LSMs need?

My network access is limited for the next week, so I don't have the
ability to dig through the list archives, but if you look through the
netdev/LSM lists over the past decade (maybe go back ~15 years?) you
will see multiple instances where we/I've brought up different solutions
with the netdev folks, only to hit a brick wall. The LSM ask for sk_buff
is really the same as for any other kernel object that we want to
control with LSM access controls, e.g. inodes; we basically want a void*
blob with the necessary hooks so that the opaque blob can be managed
through the skb's lifetime.

> There is the skb extension infra, does that work for you?

I was hopeful that when the skb_ext capability was introduced we might
be able to use it for the LSM(s), but when I asked netdev if they would
be willing to accept patches to leverage the skb_ext infrastructure I
was told "no".

-- 
paul moore
www.paul-moore.com