Re: nfsd vulnerability submit

On Thu, Jan 21, 2021 at 05:19:32PM -0600, Patrick Goetz wrote:
> On 1/21/21 4:04 PM, bfields@xxxxxxxxxxxx wrote:
> >As I said, NFS allows you to look up objects by filehandle (so,
> >basically by inode number), not just by path.
> 
> Except surely this doesn't buy you much if you don't have root
> access to the system?  Is this all only an issue when the
> filesystems are exported with no_root_squash?
> 
> I feel like I must be missing something, but it seems to me that if
> I'm not root, I'm not going to be able to access inodes I don't have
> permissions to access even when directly connected to the exporting
> server.

If an attacker has access to the network (so they can send their own
hand-crafted NFS requests), then filehandle guessing allows them to
bypass the normal process of looking up a file.  So if you were
depending on lookup permissions along that path, or on hiding that path
somehow, you're out of luck.

But it doesn't let them bypass the permissions on the file itself once
they get there.  If the permissions on the file don't allow read, the
server still won't let them read it.
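
To make the guessing concrete: below is a small, purely illustrative C
sketch.  This is not the actual knfsd filehandle encoding (which varies
by fsid and fileid type); it just shows the kind of data a filehandle
carries.  Everything in it is stable, enumerable state; there is no
secret the client must first obtain by walking the directory path.

/* Purely illustrative; not the real knfsd filehandle layout.  The
 * point is that a filehandle is assembled from stable, enumerable
 * identifiers, with no secret component an attacker has to learn by
 * performing LOOKUPs along the path. */
#include <stdint.h>
#include <stdio.h>

struct fake_fh {
	uint32_t fsid;	/* identifies the exported filesystem */
	uint32_t ino;	/* inode number: small and densely allocated */
	uint32_t gen;	/* i_generation, bumped when an inode is reused */
};

int main(void)
{
	uint32_t ino;

	/* With a known (or guessed) fsid, candidate handles can be
	 * produced by simply iterating over inode numbers; no lookup
	 * along the directory path is ever issued. */
	for (ino = 2; ino < 16; ino++) {
		struct fake_fh fh = { .fsid = 0x401, .ino = ino, .gen = 0 };
		printf("candidate fh: fsid=%#x ino=%u gen=%u\n",
		       (unsigned)fh.fsid, (unsigned)fh.ino,
		       (unsigned)fh.gen);
	}
	return 0;
}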

> >>It's not practical to make everything you export its own partition,
> >>although I suppose one could do this with ZFS datasets.
> >
> >I'd be happy to hear about any use cases where that's not practical.
> 
> Sure. The xray example is taken from one of my research groups, which
> collects thousands of very large electron microscopy images, along
> with some xray data. I will certainly design this differently in the
> next iteration (most likely using ZFS), but our current server has a
> 519T attached storage device which presents itself as a single
> device: /dev/sdg.  Different groups need access to different classes
> of data, which I export separately and which are presented on the
> workstations as /xray, /EM, etc.
> 
> Yes, I could partition the storage device, but then I run into the
> usual issues where one partition runs out of space while others are
> barely utilized. This is one good reason to switch to ZFS datasets.
> The other is that, with 450T+ of ever-changing data, rsync backups
> are currently almost impossible.  I'm hoping zfs send/receive is
> going to save me here.
> 
> >As Christoph pointed out, xfs/ext4 project IDs are another option.
> 
> I must have missed this one, but it just leaves me more confused.
> Project IDs are filesystem metadata, yet this affords better
> boundary enforcement than a bind mount?

Right.  The project ID is stored in the inode, so it's easy to look up
from the filehandle.  (Whereas figuring out what paths might lead to
that inode is a little trickier.)

> Also, the only use case for project IDs I was able to find is
> project quotas, so I'm not even sure how this would be implemented
> and used by NFS.

Project IDs were implemented for quotas, but they also have the right
characteristics to work well as NFS export boundaries.
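
As a concrete illustration of why they're a good fit, here's a minimal
userspace sketch that reads a file's project ID via the
FS_IOC_FSGETXATTR ioctl (supported by xfs and ext4).  It only
demonstrates that the ID is cheap per-inode metadata, available once
you have the inode; it is not something nfsd consumes today.

/* Minimal sketch: read a file's project ID with FS_IOC_FSGETXATTR.
 * Because the project ID lives in the inode itself, anything that has
 * resolved a filehandle to an inode can check it without
 * reconstructing a path. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	struct fsxattr fsx;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) {
		perror("FS_IOC_FSGETXATTR");
		close(fd);
		return 1;
	}
	printf("project id: %u\n", (unsigned)fsx.fsx_projid);
	close(fd);
	return 0;
}

An export-boundary check built on this would amount to rejecting any
filehandle whose inode carries a project ID different from the
export's; presumably that's the small amount of code I mean below.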

That said, I think Christoph was suggesting this is something we *could*
support, not something that we now do.  Though looking at it quickly, I
think it shouldn't take much code at all.  I'll put it on my list....

Other options for doing this kind of thing might be btrfs subvolumes or
lvm thin provisioning.  I haven't personally used either, but they
should both work now.

--b.


