Re: [PATCH 2/3] drm: simplify authentication management

Hi

On Mon, May 4, 2015 at 5:25 PM, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> On Mon, May 04, 2015 at 05:14:51PM +0200, David Herrmann wrote:
>> Hi
>>
>> On Mon, May 4, 2015 at 5:11 PM, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>> > On Mon, May 04, 2015 at 05:02:37PM +0200, David Herrmann wrote:
>> >> Hi
>> >>
>> >> On Mon, May 4, 2015 at 4:46 PM, Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>> >> > On Mon, May 04, 2015 at 04:05:13PM +0200, David Herrmann wrote:
>> >> >> The magic auth tokens we have are a simple map from cyclic IDs to drm_file
>> >> >> objects. Remove all the old bulk of code and replace it with a simple,
>> >> >> direct IDR.
>> >> >>
>> >> >> The previous behavior is kept. In particular, calling authmagic
>> >> >> multiple times on the same magic succeeds only on the first call
>> >> >> and returns -EINVAL on all later ones. The only difference in
>> >> >> behavior is that an ID is never handed out twice, even if the old
>> >> >> holder was already authenticated. To trigger that, you would need
>> >> >> 2^31 DRM file descriptions open at the same time, though.
>> >> >>
>> >> >> Diff-stat:
>> >> >>    5 files changed, 45 insertions(+), 157 deletions(-)
>> >> >>
>> >> >> Signed-off-by: David Herrmann <dh.herrmann@xxxxxxxxx>
>> >> >
>> >> > Just one quibble,
>> >> >
>> >> >> @@ -189,10 +80,17 @@ int drm_authmagic(struct drm_device *dev, void *data,
>> >> >>       struct drm_file *file;
>> >> >>
>> >> >>       DRM_DEBUG("%u\n", auth->magic);
>> >> >> -     if ((file = drm_find_file(file_priv->master, auth->magic))) {
>> >> >> +
>> >> >> +     if (auth->magic >= INT_MAX)
>> >> >> +             return -EINVAL;
>> >> >
>> >> > The idr upper bound is INT_MAX [inclusive].
>> >> >
>> >> >> +     mutex_lock(&dev->struct_mutex);
>> >> >> +     file = idr_find(&file_priv->master->magic_map, auth->magic);
>> >> >
>> >> > But given that it will return NULL anyway, do you really want to
>> >> > short-circuit the erroneous lookup, or just leave it to the idr to
>> >> > validate it itself?
>> >>
>> >> Indeed, I can just change it to "auth->magic > INT_MAX". Fixed!
>> >>
>> >> > So... How efficient is the cyclic idr for a long running system that has
>> >> > accumulated several hundred thousand stale magics?
>> >> >
>> >> > Maybe an igt to create a couple of billion shortlived clients (with
>> >> > overlapping lifetimes) and check the behaviour?
>> >>
>> >> I'm not sure this is an interesting benchmark. I mean, you usually
>> >> have a dozen DRM clients max, anyway. So the only interesting detail
>> >> is whether the cyclic behavior causes the IDR tree to degenerate. But
>> >> in that case, we should either fix IDR or see whether we can avoid the
>> >> cyclic behavior.
>> >
>> > I was actually more concerned about what happens if we end up with 2
>> > billion stale clients - do we get 2 billion entries? Can we even
>> > allocate that many? I suspect we end up with a DoS.
>>
>> Ehh, the magic-entries are dropped on close(). So to actually keep 2
>> billion entries, you also need 2 billion "struct file*", "struct
>> drm_file*", ...
>>
>> The idr_alloc_cyclic() does _not_ mark old entries as "used". All it
>> does is try to remember new IDs in a cyclic order. On wrap-around, old
>> IDs will get re-used.
>
> But the layers are only freed if all below them are empty (iiuc).
> Basically, I am not entirely familiar with how idr will cope on a
> long-running system, and would like some elaboration (and a test for
> future reference!).

IDR uses a tree of idr_layer objects (basically a radix tree keyed on
the bits of the ID). Tree depth grows dynamically when a layer runs
full, up to a maximum depth of 4. Each layer manages 256 children
(leaves store the void* data, interior nodes store pointers to
sub-layers). A layer is roughly 2 KiB on 64-bit.

If the ID range is small and densely packed, the storage is used
optimally. But a sparse ID range can occupy up to one layer path per
ID in the worst case. Lookup cost should be negligible, as the depth
is capped at 4 layers. Memory consumption, however, can get huge if
the IDs are spread out evenly. But this really depends on the
allocation scheme.

I'm not sure how to write a benchmark for this. I mean, I can easily
craft one that makes the IDR degenerate, but it requires us to keep
huge numbers of files open.
Still, this makes IDR rather suboptimal for cyclic allocations, so I
switched to plain idr_alloc() now. It guarantees tight, packed ID
ranges, and user-space cannot degenerate the layers anymore. That is,
unless you open more than 256 FDs on a device in parallel, we always
get by with a single IDR layer. This should work fine, right?

Thanks
David
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel
