Re: [PATCH 11/41] drm/i915: Introduce an internal allocator for disposable private objects

On 14/10/2016 13:54, Chris Wilson wrote:
> On Fri, Oct 14, 2016 at 01:42:35PM +0100, Tvrtko Ursulin wrote:
>> On 14/10/2016 13:18, Chris Wilson wrote:
>>> +	gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
>>> +	if (IS_CRESTLINE(i915) || IS_BROADWATER(i915)) {
>>> +		/* 965gm cannot relocate objects above 4GiB. */
>>> +		gfp &= ~__GFP_HIGHMEM;
>>> +		gfp |= __GFP_DMA32;
>>> +	}
>>> +
>>> +	do {
>>> +		int order = min(fls(npages) - 1, max_order);
>> I still have reservations about going back to max_order when previous
>> chunks required an order decrease. The size of the object is
>> unbounded, since it is indirectly controlled by userspace, correct?
>> How about decreasing max_order every time a failed allocation forces
>> an order decrease?
>>> +		struct page *page;
>>> +
>>> +		do {
>>> +			page = alloc_pages(gfp | (order ? QUIET : 0), order);
>>> +			if (page)
>>> +				break;
>>> +			if (!order--)
>>> +				goto err;
>> Like:
>> 			/* Limit future allocations as well */
>> 			max_order = order;
>>
>>> +		} while (1);
> We do pass NORETRY | NOWARN for the higher order allocations, so it
> shouldn't be as bad as it seems?

I don't know for sure without looking into the implementation details, but I assumed that even with NORETRY the allocator still does some extra work (a direct reclaim/compaction pass) to try to free up space; it just gives up after that instead of retrying. If that fails and we ask for the same order again for the next chunk, we are doing that extra work for nothing, because within a single object allocation it seems unlikely that memory pressure would change so dramatically that the higher order would suddenly start succeeding.

So yes, permanently limiting max_order sounds safer to me.
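
To be explicit, something like this is what I have in mind (just a sketch against the loop from the patch; the helper name and the chunk bookkeeping are made up, and I am assuming QUIET is __GFP_NORETRY | __GFP_NOWARN as you mention above):

	static int alloc_object_pages(unsigned long npages, gfp_t gfp)
	{
		int max_order = MAX_ORDER - 1;

		do {
			int order = min(fls(npages) - 1, max_order);
			struct page *page;

			do {
				page = alloc_pages(gfp | (order ? QUIET : 0),
						   order);
				if (page)
					break;
				if (!order--)
					return -ENOMEM;

				/*
				 * This order just failed even after a
				 * reclaim pass; never ask for it again
				 * for the rest of this object.
				 */
				max_order = order;
			} while (1);

			/* ...add the 2^order chunk to the sg_table here... */
			npages -= 1UL << order;
		} while (npages);

		return 0;
	}

That way a failed high-order attempt costs us the reclaim pass at most once per order for the whole object, instead of potentially once per chunk.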

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx