[PATCH 09/13] drm/i915: add reset_state for hw_contexts

On 02/28/2013 03:14 AM, Chris Wilson wrote:
> On Wed, Feb 27, 2013 at 05:15:08PM -0800, Ian Romanick wrote:
>> On 02/27/2013 01:13 AM, Chris Wilson wrote:
>>> On Tue, Feb 26, 2013 at 05:47:12PM -0800, Ian Romanick wrote:
>>>> On 02/26/2013 03:05 AM, Mika Kuoppala wrote:
>>>>> For arb-robustness, every context needs to have its own
>>>>> reset state tracking. The default context will be handled in an
>>>>> identical way to the no-context case further down in the patch set.
>>>>> For the no-context case, the reset state will be stored in
>>>>> the file_priv part.
>>>>>
>>>>> v2: handle default context inside get_reset_state
>>>>
>>>> This isn't the interface we want.  I already sent you the patches
>>>> for Mesa, and you seem to have completely ignored them.  Moreover,
>>>> this interface provides no mechanism to query for its existence
>>>> (other than relying on the kernel version), and no method to
>>>> deprecate it.
>>>
>>> Its existence, and the existence of any successor, is the only test you
>>> need to check for the interface. Testing the flag field up front also
>>> lets you know if individual flags are known prior to use.
>>>
>>>> Based on e-mail discussions, I think danvet agrees with me here.
>>>>
>>>> Putting guilty / innocent counting in kernel puts policy decisions
>>>> in the kernel that belong with the user space API that implements
>>>> them. Putting these choices in the kernel significantly decreases
>>>> how "future proof" the interface is.  Since any kernel/user
>>>> interface has to be kept forever, this is just asking for
>>>> maintenance trouble down the road.
>>>
>>> I think you have the policy standpoint reversed.
>>
>> Can you elaborate?  I've been out of the kernel loop for a long
>> time, but I always thought policy and mechanism were supposed to be
>> separated.  In the case of drivers with a user-mode component, the
>> mechanism was in the kernel (where else could it be?), and policy
>> decisions were in user-mode.  This is often at odds with keeping the
>> interfaces between kernel and user thin, so that may be where my
>> misunderstanding is.
>
> Right, the idea is to keep policy out of the kernel. I disagree that
> your suggestion succeeds in doing this.

I was trying to put the mechanism (determining what happened) in the
kernel and the policy (interpreting what those events mean) in
user-mode.

Ignoring some bugs / misimplementation in my patch (which I try to 
address below), I thought I was doing that.  User-mode gets some 
information about the state it was in when the reset occurred.  It then 
uses that state to decide the value to return to the application for 
glGetGraphicsResetStatusARB.
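
To make that concrete, here is roughly the kind of mapping I have in
mind on the Mesa side.  The I915_CONTEXT_RESET_* flags and the way they
reach user-mode are placeholders for whatever interface we end up with,
not anything that exists today:

#include <stdint.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Placeholder flag values -- NOT an existing kernel interface.  They
 * stand in for whatever per-context reset information the kernel ends
 * up exporting.
 */
#define I915_CONTEXT_RESET_BATCH_EXECUTING  (1u << 0)
#define I915_CONTEXT_RESET_BATCH_PENDING    (1u << 1)
#define I915_CONTEXT_RESET_OTHER            (1u << 2)

/* The policy decision stays in user-mode: translate the raw "what
 * happened to this context" bits into the value the application sees
 * from glGetGraphicsResetStatusARB.
 */
static GLenum
intel_reset_status_from_flags(uint32_t flags)
{
   if (flags == 0)
      return GL_NO_ERROR;

   /* A batch from this context was on the GPU when it was reset, so
    * the context is presumed guilty.
    */
   if (flags & I915_CONTEXT_RESET_BATCH_EXECUTING)
      return GL_GUILTY_CONTEXT_RESET_ARB;

   /* Work from this context was lost, but some other context caused
    * the reset.
    */
   if (flags & I915_CONTEXT_RESET_BATCH_PENDING)
      return GL_INNOCENT_CONTEXT_RESET_ARB;

   return GL_UNKNOWN_CONTEXT_RESET_ARB;
}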

After reading the bits below where I talk about other problems in the 
kernel code I posted, maybe you can address specific ways my proposal 
fails.  I want any glaring interface deficiencies fixed before any code 
goes in the kernel... I fully understand the pain, on both sides, of 
shipping a kernel interface only to have to replace it in a year.

> [snip to details]
>
>> There are two requirements in the ARB_robustness extension (and
>> additional layered extensions) that I believe cause problems with
>> all of the proposed counting interfaces.
>>
>> 1. GL contexts that are not affected by a reset (i.e., didn't lose
>> any objects, didn't lose any rendering, etc.) should not observe it.
>> This is the big one that all the browser vendors want.  If you have
>> 5 tabs open, a reset caused by a malicious WebGL app in one tab
>> shouldn't, if possible, force them to rebuild all of their state for
>> all the other tabs.
>>
>> 2. After a GL context observes a reset, it is garbage.  It must be
>> destroyed and a new one created.  This means we only need to know
>> that a reset of any variety affected the context.
>
> For me observes implies the ability for the context to inspect global
> system state, whereas I think you mean if the context and its associated
> state is affected by a reset.

I had thought about explicitly defining what I meant by a couple of terms 
in my previous e-mail, but it was already getting quite long.  I guess I 
should have. :(  Here's what I meant:

A. "Affect the context" means the reset causes the context to lose some 
data.  It may be rendering that was queued (so framebuffer data is 
lost), it may be the contents of a buffer, or it may be something else.
Either way, the state of some data is not what the application expects
it to be because of the reset.

B. "Observe the reset" means the GL context gets some data to know that 
the reset happened.  From the application point of view, this means 
glGetGraphicsResetStatusARB returns a value other than GL_NO_ERROR.

What I was trying to describe in point 1 in my previous message (above) 
is that B should occur if and only if A occurs.  Once B occurs, the 
application has to create a new context... recompile all of its shaders, 
reload all of its textures, reload all of its vertex data, etc.  We 
don't want to make apps do that any more often than absolutely necessary.
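
In code, the application side of B is just a periodic check; once a
non-GL_NO_ERROR status shows up, everything gets rebuilt.  A minimal
sketch, where struct app_state and the create/destroy helpers are
application-specific stand-ins, not anything from our API:

#include <GL/gl.h>
#include <GL/glext.h>

/* Periodic check, as the browser vendors describe doing.  Once a reset
 * is observed the context is garbage: create a new one and re-create
 * shaders, textures, vertex data, and so on.
 */
void check_for_reset(struct app_state *app)
{
   GLenum status = glGetGraphicsResetStatusARB();

   if (status != GL_NO_ERROR) {
      app_destroy_context(app);      /* tear down the dead context */
      app_create_context(app);       /* rebuild shaders, textures,
                                      * vertex data, ...            */
   }
}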

>> In addition, I'm concerned about having one GL context be able to
>> observe, in any manner, resets that did not affect it.  Given that
>> people have figured out how to get encryption keys by observing
>> cache timings, it's not impossible for a malicious program to use
>> information about reset counts to do something.  Leaving a potential
>> gap like this in an extension that's used to improve security
>> seems... like we're just asking for a Phoronix article.  We don't
>> have any requirement to expose this information, so I don't think we
>> should.
>
> So we should not do minor+major pagefault tracking either? I only
> suggested that you could read the global state because I thought you
> implied you wanted it. From the display server POV, global state is the
> most useful as I am more interested in whether I can reliably use the GPU
> and prevent a DoS from a misbehaving set of clients.

In the X model, this would be easy to solve.  We would make the
total-number-of-resets query only available to the DRM master.  As we
move away from that model, it becomes more difficult.

It seems that the usage is different enough that having separate queries 
for the global reset count and the per-context resets feels right.

That still leaves the potential information leakage issue, and I'm 
honestly not sure what the right answer is there.  Perhaps we could add 
that query after we've gotten some feedback from an actual security 
expert.  There may be other options that we haven't considered yet. 
Maybe something like advertising the GPU reset "load average."  Dunno.
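
Just to illustrate the split I'm thinking of, something along these
lines -- entirely hypothetical, none of these names or structs exist:

/* Per-context query: answers "was *this* context affected, and how?"
 * This is all a GL client needs for glGetGraphicsResetStatusARB.
 */
struct drm_i915_context_reset_state {
	__u32 ctx_id;
	__u32 flags;		/* executing / pending / other */
};

/* Global query: answers "how often has the GPU been reset?", which is
 * what a display server or compositor cares about.  Its ioctl could be
 * registered with the existing DRM_MASTER flag (or whatever replaces
 * that model) so arbitrary clients cannot poll it.
 */
struct drm_i915_global_reset_count {
	__u32 reset_count;
	__u32 pad;
};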

>> We also don't have any requirement to expose this functionality on
>> anything pre-Sandy Bridge.  We may want, at some point, to expose it
>> on Ironlake.  Based on that, it seems reasonable to tie the
>> availability of reset notifications to hardware contexts.  Since we
>> won't test it, Mesa certainly isn't going to expose it on 8xx or
>> 915.
>
> From a design standpoint hw contexts are just one function of the context
> object and not the reverse. Ben has been rightfully arguing that we need
> contexts in the kernel for far more than supporting logical hw contexts.

That's a fair point, and I also tend to agree with Ben.

I think it may also be an orthogonal issue.  As things currently stand
(at least as far as I can tell), kernel contexts are tied to logical hw
contexts.  Since we don't have a requirement for reset query support on
older GPUs, it doesn't seem to make sense to put extra code that nobody
will use into the kernel to provide it.  If at some point the linkage
between kernel contexts and hw contexts goes away, and we can get a
bunch of this mostly for free, then it will make sense to add the
support.

Every single time I've written "future" code like that, it has either 
been broken or it turns out to not be what I need when the future 
arrives.  I know I'm not the only one with that experience.

>> Based on this, a simplified interface became obvious:  for each
>> hardware context, track a mask of kinds of resets that have affected
>> that context.  I defined three bits in the mask:
>
> The problem is that you added an atomic-read-and-reset into the ioctl. I
> objected to that policy when Mika presented it earlier as it prevents
> concurrent accesses and so imposes an explicit usage model.

I may have mentioned that my kernel patch was very early work and should 
be taken with a grain of salt. :)  You probably noticed that the patch 
was generated by diff and not by git-format-patch.  I hadn't even 
committed any of that to my local tree.  Given its shabby state, I was 
reluctant to even send it out.  However, if I'm going to be critical of 
another person's work, I have a responsibility to be fully open.

I too came to the conclusion that the atomic-read-and-reset part was a 
bad idea... I had forgotten that I even coded it that way, or I would 
have mentioned that in my previous message.

The other thing in my kernel patch that I think is a bad idea (as you 
note below) is that processes can query the reset state of HW contexts 
associated with different fds.  That coupled with the 
atomic-read-and-reset is completely disastrous.  I think this should be 
tightened up for all the reasons we've both mentioned.
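
For the fd issue, I think the fix is simply to make the ioctl look the
context up through the caller's own file_priv, so contexts belonging to
other fds aren't reachable at all.  Very roughly -- the field names are
from memory and ctx->reset_flags is hypothetical, so treat this as an
assumption rather than a patch:

/* Hypothetical ioctl body: only contexts created on this fd are
 * visible; everything else is -ENOENT.
 */
struct drm_i915_file_private *file_priv = file->driver_priv;
struct i915_hw_context *ctx;

ctx = idr_find(&file_priv->context_idr, args->ctx_id);
if (ctx == NULL)
	return -ENOENT;		/* not your context, or it doesn't exist */

args->reset_flags = ctx->reset_flags;	/* hypothetical field */
return 0;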

>> 1. The hw context had a batch executing when the reset occurred.
>>
>> 2. The hw context had a batch pending when the reset occurred.
>> Hopefully in the future we could make most occurrences of this go
>> away by re-queuing the batch.  It may also be possible to do this in
>> user-mode, but that may be more difficult.
>>
>> 3. "Other."
>>
>> There's also the potential to add other bits to communicate
>> information about lost buffers, textures, etc.  This could be used
>> to create a layered extension that lets applications transfer data
>> from the dead context to the new context.  Browsers may not be too
>> interested in this, but I could imagine compositors or other kinds
>> of applications benefiting.  We may never implement that, but it's
>> easier to communicate this through a set of flags than through a set
>> of counts.
>>
>> The guilty / innocent counts in this patch series lose information.
>
> Disagree. The flags represent a loss of information, as you explained
> earlier.

With the counts in this series, user-mode can't tell the difference 
between a reset that didn't affect the context and one that did.

Once a GL context observes a reset (i.e., when 
glGetGraphicsResetStatusARB returns non-GL_NO_ERROR), it is dead.  The 
only numbers that matter in that case are zero and not-zero.  If we 
assume that applications using this interface will query resets on a 
periodic basis, and both the major Linux browser vendors have told me 
that they do, the count will either be zero or some very small number. 
Given that, there's very little information lost, and the information 
that is lost is not important to the consumers of that information.

Now, this isn't true for the display manager.  That use case clearly
wants counts, and it wants counts over time.  Like I said above, I think
it is different enough that having a separate mechanism may be the right
approach.

>> We can probably reconstruct the current flags from the counts, but
>> we would have to do some refactoring (or additional counting) to get
>> other flags in the future.  All of this would make the kernel code
>> more complex. Why reconstruct the data you want when you could just
>> track that data in the first place?
>
> Because you are constructing a policy decision based on raw information.
>
>> Since the functionality is not available on all hardware, I also
>> added a query to determine the availability.  This has the added
>> benefit that, should this interface prove to be insufficient in the
>> future, we could deprecate it by having the query always return
>> false.
>
> That offers no more information than just probing the ioctl.

There are a couple of other bits of kernel functionality that Mesa checks
for in this way before using them (e.g., GEN7 SOL reset).  Since this was,
as far as I could tell, similarly querying kernel support for some
hardware functionality, I assumed it should have a similar sort of
query.  In the future, what criterion should I use to decide which
approach applies?
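
For reference, the existing check looks roughly like this in Mesa
(modulo names).  I915_PARAM_HAS_GEN7_SOL_RESET and the GETPARAM ioctl do
exist today; a corresponding parameter for the reset-state query is, of
course, the hypothetical part:

#include <string.h>
#include <stdbool.h>
#include <xf86drm.h>
#include <i915_drm.h>

/* Ask the kernel whether a particular piece of functionality is
 * supported, using the existing GETPARAM ioctl.  Mesa does this before
 * relying on features such as GEN7 SOL reset.
 */
static bool
intel_get_boolean_param(int fd, int param)
{
   drm_i915_getparam_t gp;
   int value = 0;

   memset(&gp, 0, sizeof(gp));
   gp.param = param;
   gp.value = &value;

   if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) != 0)
      return false;   /* old kernel: parameter unknown */

   return value != 0;
}

/* e.g.: has_sol_reset =
 *          intel_get_boolean_param(fd, I915_PARAM_HAS_GEN7_SOL_RESET);
 */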

Also, what does "probing the ioctl" mean exactly?  Just call it and see 
whether you get ENOSYS?  Doesn't that run afoul of the requirement to 
never remove interfaces?

>> All software involved has to be able to handle the case
>> where the interface is not available, so I don't think this should
>> run afoul of the rules about kernel/user interface breakage.  Please
>> correct me if I'm wrong.
>
> [snip patch locations]
>
> The biggest change you made was to reduce the counts to a set of flags,
> to make it harder for an attacker to analyse the reset frequency.
> Instead that attacker can just poll the flags and coarsely reconstruct
> the counts... If that is their goal. Instead it transfers userspace
> policy into the kernel, which is what we were seeking to avoid, right?
> -Chris


