Re: [PATCH] drm/i915/selftest: allow larger memory allocation

Hi Krzysztof,

[...]

> > > I guess a check for contiguousness does not hurt, but the test behavior
> > > is fundamentally different here. Maybe manually rounding size up to the
> > > nearest larger power of two would be a better idea here?
> > 
> > Before the changes were made to the allocator, we knew there
> > was a corner case with rounding the size to a power of two. Now
> > we know that the allocator will take any appropriate size and
> > give us a valid object (correct me if I'm wrong here) - if that
> > is the case, then this should just be a check that we fail on a
> > size that is too large (unless this is covered by some other
> > test).
> > 
> 
> That's what I meant by rounding up to the power of two - make an
> object intentionally larger than we can accommodate so we can check
> that it gets rejected. I suggested that rounding up so we replicate
> the previous behavior, but I guess it could just as well be size + 1
> or size * 2 or any larger value; the point is to verify that the
> driver doesn't allow allocations larger than expected.

I think what I wanted to convey, but failed to word clearly, is
that I do not think we need to add more complexity to this test
by introducing rounding the size up to the next power of two,
but rather explicitly make the size too big (as you mentioned,
something like <max_available_size> + 1 would work).
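
Roughly something like this (just a sketch, not tested; I'm assuming
a 'total' variable holding the full region size here - the actual
name in the test may differ):

	/* Intentionally oversized: the region cannot hold total + 1. */
	obj = igt_object_create(mem, &objects, total + 1,
				I915_BO_ALLOC_CONTIGUOUS);
	if (!IS_ERR(obj)) {
		pr_err("%s oversized allocation unexpectedly succeeded\n",
		       __func__);
		err = -EINVAL;
		goto out_close;
	}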

[...]

> > >         obj = igt_object_create(mem, &objects, rounddown_pow_of_two(size),
> > >                                 I915_BO_ALLOC_CONTIGUOUS);
> > >         if (IS_ERR(obj)) {
> > >                 pr_err("%s largest possible contiguous allocation failed\n",
> > >                        __func__);
> > >                 err = PTR_ERR(obj);
> > >                 goto out_close;
> > >         }
> > > 
> > > This is the next check - we see if the largest possible allocation
> > > (according to the old logic, it would be 'size' rounded down to the
> > > largest smaller or equal power of two) _does_ go through. The success of
> > > this particular check isn't affected by the allocator changes, but since
> > > rounddown_pow_of_two(size) is not the largest possible allocation
> > > anymore, maybe it's better to change this too (e.g. drop the rounddown
> > > function). This way we keep the intended test behavior here as well.
> > > I suppose this is still in scope of the patch.
> > 
> > Yes, this would be a bit different from the original behavior,
> > but we'd be sure to fail the object allocation. If no other test
> > checks this condition, then this test could do that here.
> > 
> 
> I mean, if this test was already checking that, I don't think we
> should patch it out for no reason.

Oh, I got confused and thought that you wanted to add some more
checks there.
I think that we could have this part of the test ensure that the
max possible allocation is still, well, "possible", so this
would turn out to be more of a sanity check.
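
In other words, basically the hunk you quoted above but with the
rounddown dropped - a rough sketch (untested):

	/* The max possible contiguous allocation should still succeed. */
	obj = igt_object_create(mem, &objects, size,
				I915_BO_ALLOC_CONTIGUOUS);
	if (IS_ERR(obj)) {
		pr_err("%s largest possible contiguous allocation failed\n",
		       __func__);
		err = PTR_ERR(obj);
		goto out_close;
	}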

> IMO we need to tweak those two checks to check for the same thing
> functionally but keep the new allocator behavior in mind. Then
> contiguousness is a good thing to check, but I think it should be added
> as a third check so we don't lose either of the other two.

I agree. We need the following:
 1) Fail check on "too large",
 2) Pass check on "max possible size",
 3) Pass check on contiguous object (rough sketch below).
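
For 3), I imagine something like checking that the returned object
occupies a single sg entry - a sketch only; that obj->mm.pages is
valid (i.e. the pages are already pinned) at this point is my
assumption here:

	/* A contiguous object should end up as a single sg entry. */
	if (obj->mm.pages->nents != 1) {
		pr_err("%s object is not contiguous\n", __func__);
		err = -EINVAL;
		goto out_close;
	}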

@Mikolaj, could you add these tweaks to this patch?

Best Regards,
Krzysztof


