Re: [PATCH bpf-next 0/4] Make inode storage available to tracing prog

Hi Dr. Greg, 

Thanks for your input on this. 

> On Nov 14, 2024, at 8:36 AM, Dr. Greg <greg@xxxxxxxxxxxx> wrote:
> 
> On Wed, Nov 13, 2024 at 10:57:05AM -0800, Song Liu wrote:
> 
> Good morning, I hope the week is going well for everyone.
> 
>> On Wed, Nov 13, 2024 at 10:06 AM Casey Schaufler <casey@xxxxxxxxxxxxxxxx> wrote:
>>> 
>>> On 11/12/2024 5:37 PM, Song Liu wrote:
>> [...]
>>>> Could you provide more information on the definition of "more
>>>> consistent" LSM infrastructure?
>>> 
>>> We're doing several things. The management of security blobs
>>> (e.g. inode->i_security) has been moved out of the individual
>>> modules and into the infrastructure. The use of a u32 secid is
>>> being replaced with a more general lsm_prop structure, except
>>> where networking code won't allow it. A good deal of work has
>>> gone into making the return values of LSM hooks consistent.
>> 
>> Thanks for the information. Unifying per-object memory usage of
>> different LSMs makes sense. However, I don't think we are limiting
>> any LSM to only use memory from the lsm_blobs. The LSMs still
>> have the freedom to use other memory allocators. BPF inode
>> local storage, just like other BPF maps, is a way to manage
>> memory. BPF LSM programs have full access to BPF maps. So
>> I don't think it makes sense to say this BPF map is used by tracing,
>> so we should not allow LSM to use it.
>> 
>> Does this make sense?
> 
> As involved bystanders, some questions and thoughts that may help
> further the discussion.
> 
> With respect to inode specific storage, the currently accepted pattern
> in the LSM world is roughly as follows:
> 
> The LSM initialization code, at boot, computes the total amount of
> storage needed by all of the LSM's that are requesting inode specific
> storage.  A single pointer to that 'blob' of storage is included in
> the inode structure.
> 
> In an include file, an inline function similar to the following is
> declared, whose purpose is to return the location inside of the
> allocated storage or 'LSM inode blob' where a particular LSM's inode
> specific data structure is located:
> 
> static inline struct tsem_inode *tsem_inode(struct inode *inode)
> {
> 	return inode->i_security + tsem_blob_sizes.lbs_inode;
> }
> 
> In an LSM's implementation code, the function gets used in something
> like the following manner:
> 
> static int tsem_inode_alloc_security(struct inode *inode)
> {
> 	struct tsem_inode *tsip = tsem_inode(inode);
> 
> 	/* Do something with the structure pointed to by tsip. */
> }

Yes, I am fully aware of how most LSMs allocate and use this 
inode/task/etc. storage.
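
For completeness, the boot-time half of that pattern, which declares 
the blob size and registers it with the LSM infrastructure, looks 
roughly like this (a sketch modeled on the in-tree lsm_blob_sizes 
machinery; tsem_init is assumed):

/* Sketch: sizes requested here are summed by the infrastructure
 * into the single allocation behind inode->i_security.
 */
struct lsm_blob_sizes tsem_blob_sizes __ro_after_init = {
	.lbs_inode = sizeof(struct tsem_inode),
};

DEFINE_LSM(tsem) = {
	.name = "tsem",
	.blobs = &tsem_blob_sizes,
	.init = tsem_init,
};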

> Christian appears to have already chimed in and indicated that there
> is no appetite to add another pointer member to the inode structure.

If I understand Christian correctly, his concern comes from the 
size of struct inode, and thus the impact on the memory footprint 
and CPU cache usage of every inode in the system. While we have had 
an easier time adding a pointer to other data structures, for 
example struct socket, I personally acknowledge Christian's concern 
and I am motivated to make changes that reduce the size of struct 
inode.

> So, if this were to proceed forward, is it proposed that there will be
> a 'flag day' requirement to have each LSM that uses inode specific
> storage implement a security_inode_alloc() event handler that creates
> an LSM specific BPF map key/value pair for that inode?
> 
> Which, in turn, would require that the accessor functions be converted
> to use a bpf key request to return the LSM specific information for
> that inode?

I never thought about asking other LSMs to make any changes. 
At the moment, none of the BPF maps are available to non-BPF 
code.
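
For illustration, a BPF LSM program manages its own inode-local 
storage entirely on its own, roughly like this (a minimal sketch; 
struct inode_data and the choice of hook are placeholders):

/* Sketch only; struct inode_data and the hook are illustrative. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct inode_data {
	__u64 open_count;
};

struct {
	__uint(type, BPF_MAP_TYPE_INODE_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct inode_data);
} inode_storage_map SEC(".maps");

SEC("lsm/file_open")
int BPF_PROG(count_opens, struct file *file)
{
	struct inode_data *data;

	/* Create or look up this program's storage for the inode. */
	data = bpf_inode_storage_get(&inode_storage_map, file->f_inode,
				     0, BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (data)
		__sync_fetch_and_add(&data->open_count, 1);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

With this series, the same map type would become usable from tracing 
programs as well; no other LSM has to change anything.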

> A flag day event is always somewhat of a concern, but the larger
> concern may be the substitution of simple pointer arithmetic for a
> body of more complex code.  One would assume with something like this,
> that there may be a need for a shake-out period to determine what type
> of potential regressions the more complex implementation may generate,
> with regressions in security sensitive code always a concern.
> 
> In a larger context.  Given that the current implementation works on
> simple pointer arithmetic over a common block of storage, there is not
> much of a safety guarantee that one LSM couldn't interfere with the
> inode storage of another LSM.  However, using a generic BPF construct
> such as a map, would presumably open the level of influence over LSM
> specific inode storage to a much larger audience, presumably any BPF
> program that would be loaded.

To be honest, I think BPF maps provide much better data isolation 
than a common block of storage. The creator of each BPF map has 
_full control_ over who can access the map. The only exception is 
CAP_SYS_ADMIN: the root user can access all BPF maps in the system. 
I don't think this is any more of a security concern than the 
common block of storage, as the root user can easily probe any data 
in the common block of storage via /proc/kcore.
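
As a concrete example, a map's creator can pin it in bpffs and gate 
access with ordinary file permissions (a userspace sketch; the path 
and mode here are arbitrary):

#include <sys/stat.h>
#include <bpf/bpf.h>

/* Sketch: pin a map so only its owner can reopen it later.
 * 'map_fd' is the creator's file descriptor for the map.
 */
static int restrict_map_access(int map_fd)
{
	const char *path = "/sys/fs/bpf/inode_storage_map";

	if (bpf_obj_pin(map_fd, path))
		return -1;
	/* From here on, regular file permissions gate access. */
	return chmod(path, 0600);
}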

> 
> The LSM inode information is obviously security sensitive, which I
> presume would be the motivation for Casey's concern that a 'mistake
> by a BPF programmer could cause the whole system to blow up', which in
> full disclosure is only a rough approximation of his statement.
> 
> We obviously can't speak directly to Casey's concerns.  Casey, any
> specific technical comments on the challenges of using a common inode
> specific storage architecture?
> 
> Song, FWIW going forward.  I don't know how closely you follow LSM
> development, but we believe an unbiased observer would conclude that
> there is some degree of reticence about BPF's involvement with the LSM
> infrastructure by some of the core LSM maintainers, that in turn makes
> these types of conversations technically sensitive.

I think I indeed got much more pushback than I had imagined. 
However, as always, I value everyone's perspective and I am 
willing to make reasonable changes to address valid concerns.

Thanks,
Song




