Re: [PATCH v2] drm: enable render-nodes by default

On 03/20/2014 10:05 AM, David Herrmann wrote:
> Hi Thomas
>
> On Thu, Mar 20, 2014 at 9:48 AM, Thomas Hellstrom <thellstrom@xxxxxxxxxx> wrote:
>> I'm merely trying to envision the situation where a distro wants to
>> create, for example, a udev rule for the render nodes.
>>
>> How should the distro know that the implementation is not insecure?
>>
>> Historically, DRM has refused to upstream drivers without a proper
>> command validation mechanism in place, but that validation mechanism
>> only needed to make sure no random system memory was ever accessible
>> to an authenticated DRM client.
> I expect all drivers to protect system-memory. All that I am talking
> about is one exec-buffer writing to memory of another _GPU_ buffer
> that this client doesn't have access to. But maybe driver authors can
> give some insights. I'd even expect non-texture/data GPU buffers to be
> protected.
>
>> Now, render nodes are designed to provide also user data isolation. But
>> if we allow insecure implementations, wouldn't that compromise the whole
>> idea?
>> Now that we have a secure API within reach, wouldn't it be reasonable to
>> require implementations to also be secure, following earlier DRM practices?
> I don't understand what this has to do with render-nodes? The API _is_
> secure. What would be the gain of disabling render-nodes for broken
> (even deliberately) drivers? User-space is not going to assume drivers
> are broken and rely on hand-crafted exec-buffers that cross buffer
> boundaries. So yes, we're fooling ourselves with API guarantees that
> drivers cannot deliver, but what's the downside?
>
> Thanks
> David

OK, let me give one example:

A user logs in to a system where DRI clients use render nodes. The
system grants rw permission on the render nodes for the console user.
The user starts editing a secret document and kicks off some GPGPU
structural FEM computations of the Pentagon building, then locks the
screen and goes for lunch.

A malicious user logs in using fast user switching and becomes the owner
of the render node. He tries to map a couple of random offsets, but that
fails due to the security checks. So he crafts a malicious command stream
to dump all GPU memory to a file, steals the first user's secret document
and the intermediate Pentagon data, then logs out and starts data mining.

Now, if we require drivers to block such malicious command streams, this
can never happen, and distros can reliably grant rw access on the render
nodes to the user currently logged in at the console.
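That distro-side policy could be expressed as a udev rule; a minimal
sketch (the group name and mode are illustrative, and the uaccess tag
assumes a logind-style seat manager):

```
# Illustrative rule: let the seat manager grant the active user access
# to render nodes via an ACL.
SUBSYSTEM=="drm", KERNEL=="renderD*", TAG+="uaccess"

# Or, more statically, open render nodes to a dedicated group:
SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="render", MODE="0660"
```

Either variant only makes sense if the driver below actually enforces
isolation between clients, which is exactly the point at issue.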

I guess what I'm basically trying to say is that under the legacy
concept it was OK to access all GPU memory, because an authenticated X
user had the same permissions anyway.

With render nodes we're allowing multiple users into the GPU at the same
time, and it's no longer OK for a client to access another client's GPU
buffers through a malicious command stream.
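The kind of check being asked for can be sketched roughly like this:
before scheduling a command stream, the driver walks every GPU-address
reference it contains and rejects the submission unless each reference
lies inside a buffer object the submitting client owns. The structs and
function names below are hypothetical and driver-neutral, not any real
driver's API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One GPU buffer object as seen by the validator (hypothetical layout). */
struct gpu_bo {
	uint64_t gpu_addr;	/* start of the buffer in GPU address space */
	uint64_t size;		/* length in bytes */
};

/* Per-client handle table: the buffers this DRM file may touch. */
struct drm_client {
	const struct gpu_bo *bos;
	size_t num_bos;
};

/* A single memory reference extracted from the command stream. */
struct cmd_ref {
	uint64_t gpu_addr;
	uint64_t len;
};

/* Return true iff [addr, addr + len) lies inside a buffer the client owns.
 * The subtraction form of the range check avoids 64-bit overflow. */
static bool ref_is_valid(const struct drm_client *client,
			 uint64_t addr, uint64_t len)
{
	for (size_t i = 0; i < client->num_bos; i++) {
		const struct gpu_bo *bo = &client->bos[i];

		if (addr >= bo->gpu_addr &&
		    len <= bo->size &&
		    addr - bo->gpu_addr <= bo->size - len)
			return true;
	}
	return false;
}

/* Reject the whole submission if any reference escapes the client's BOs. */
static bool validate_stream(const struct drm_client *client,
			    const struct cmd_ref *refs, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (!ref_is_valid(client, refs[i].gpu_addr, refs[i].len))
			return false;
	return true;
}
```

A linear scan per reference is obviously too slow for a real driver
(an interval tree over the client's BOs would be the usual choice), but
the invariant is the point: no command stream reaches the hardware with
a reference outside the submitting client's own buffers.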

/Thomas
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel
