On 25/09/2020 08:13, Leon Romanovsky wrote:
On Thu, Sep 24, 2020 at 09:21:20AM +0100, Tvrtko Ursulin wrote:
On 22/09/2020 09:39, Leon Romanovsky wrote:
From: Maor Gottlieb <maorg@xxxxxxxxxxxx>
Extend __sg_alloc_table_from_pages to support dynamic allocation of
SG table from pages. It should be used by drivers that can't supply
all the pages at one time.
This function returns the last populated SGE in the table. Users should
pass it as an argument to the function from the second call onward.
As before, nents will be equal to the number of populated SGEs (chunks).
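For illustration, a caller of the extended interface might look roughly like the sketch below. This is only a sketch: first_pages/n_first/second_pages/n_second are hypothetical names, the argument order follows the call visible in the test diff further down, and the meaning of the trailing "pages still to come" count is my reading of the series rather than something stated in it.

	struct sg_table sgt;
	struct scatterlist *last;

	memset(&sgt, 0, sizeof(sgt));	/* see the zero-init discussion below */

	/* First batch of pages; n_second more pages will be appended later. */
	last = __sg_alloc_table_from_pages(&sgt, first_pages, n_first, 0,
					   n_first * PAGE_SIZE, UINT_MAX,
					   NULL, n_second, GFP_KERNEL);
	if (IS_ERR(last))
		return PTR_ERR(last);

	/* Second batch: continue from the SGE returned by the first call. */
	last = __sg_alloc_table_from_pages(&sgt, second_pages, n_second, 0,
					   n_second * PAGE_SIZE, UINT_MAX,
					   last, 0, GFP_KERNEL);
	if (IS_ERR(last))
		return PTR_ERR(last);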
So it's appending and growing the "list", did I get that right? Sounds handy
indeed. Some comments/questions below.
Yes, we (RDMA) use this function to chain contiguous pages.
I will evaluate whether i915 could start using it. We have some loops which
build tables page by page and coalesce adjacent pages.
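For context, the kind of open-coded build-and-coalesce loop meant here looks roughly like the sketch below (hypothetical, not actual i915 code; it assumes sgt was already allocated with enough entries via sg_alloc_table() and that n_pages > 0):

	struct scatterlist *sg = sgt->sgl;
	unsigned int nents = 0, i;

	for (i = 0; i < n_pages; i++) {
		if (nents &&
		    page_to_pfn(pages[i]) == page_to_pfn(pages[i - 1]) + 1) {
			/* Physically contiguous with the previous page: grow the chunk. */
			sg->length += PAGE_SIZE;
		} else {
			/* Start a new chunk. */
			if (nents)
				sg = sg_next(sg);
			sg_set_page(sg, pages[i], PAGE_SIZE, 0);
			nents++;
		}
	}
	sg_mark_end(sg);
	sgt->nents = sgt->orig_nents = nents;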
[snip]
diff --git a/tools/testing/scatterlist/main.c b/tools/testing/scatterlist/main.c
index 0a1464181226..4899359a31ac 100644
--- a/tools/testing/scatterlist/main.c
+++ b/tools/testing/scatterlist/main.c
@@ -55,14 +55,13 @@ int main(void)
 	for (i = 0, test = tests; test->expected_segments; test++, i++) {
 		struct page *pages[MAX_PAGES];
 		struct sg_table st;
-		int ret;
+		struct scatterlist *sg;
 
 		set_pages(pages, test->pfn, test->num_pages);
 
-		ret = __sg_alloc_table_from_pages(&st, pages, test->num_pages,
-						  0, test->size, test->max_seg,
-						  GFP_KERNEL);
-		assert(ret == test->alloc_ret);
+		sg = __sg_alloc_table_from_pages(&st, pages, test->num_pages, 0,
+				test->size, test->max_seg, NULL, 0, GFP_KERNEL);
+		assert(PTR_ERR_OR_ZERO(sg) == test->alloc_ret);
Some test coverage for this relatively complex code would be very welcome. The
testing framework is already there; even if it has bit-rotted a bit, it
shouldn't be hard to fix.
A few tests to check that append/grow works as expected, in terms of what the
end table looks like given the initial state and different page patterns added
to it. And both scenarios that cross into sg chaining and ones that don't.
This function is fundamental to all RDMA devices, and we are pretty confident
that the old and new flows are tested thoroughly.
We will add a proper test in the next kernel cycle.
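For what it's worth, such a test in tools/testing/scatterlist/main.c could look roughly like the sketch below (hypothetical, not part of the posted patch; the pfn values and the expectation of a single coalesced segment are illustrative assumptions):

	struct page *pages[MAX_PAGES];
	struct sg_table st = {};
	struct scatterlist *sg;

	/* First call: two contiguous pages, with two more still to come. */
	set_pages(pages, (unsigned []){ 0, 1 }, 2);
	sg = __sg_alloc_table_from_pages(&st, pages, 2, 0, 2 * PAGE_SIZE,
					 UINT_MAX, NULL, 2, GFP_KERNEL);
	assert(!IS_ERR(sg));

	/* Second call: append two more contiguous pages after the returned SGE. */
	set_pages(pages, (unsigned []){ 2, 3 }, 2);
	sg = __sg_alloc_table_from_pages(&st, pages, 2, 0, 2 * PAGE_SIZE,
					 UINT_MAX, sg, 0, GFP_KERNEL);
	assert(!IS_ERR(sg));

	/* All four pages are contiguous, so they should coalesce into one SGE. */
	assert(st.nents == 1);
	sg_free_table(&st);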
The patch seems to add a requirement that all callers of
(__)sg_alloc_table_from_pages pass in a zeroed struct sg_table, which
wasn't the case so far.
Have you audited all the callers and/or fixed them? There seem to be
quite a few. Gut feeling says the problem would probably be better solved in
lib/scatterlist.c rather than by making all the callers memset. That should be
possible if you make sure you only read st->nents when prev was passed in?
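Something along these lines inside the helper would probably do; this is only a rough sketch of the idea, not the actual lib/scatterlist.c code, with 'prv' standing for the caller-supplied previous SGE:

	unsigned int prv_nents;

	if (prv) {
		/* Appending: the table was populated by a previous call. */
		prv_nents = sgt->nents;
	} else {
		/* Fresh table: don't trust a stack-allocated, unzeroed sgt. */
		prv_nents = 0;
		sgt->nents = 0;
		sgt->orig_nents = 0;
	}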
I've fixed the unit test, and with this change the existing tests do
pass. But without zeroing it fails on the very first, single-page
test scenario.
You can pull the unit test hacks from
git://people.freedesktop.org/~tursulin/drm-intel sgtest.
Regards,
Tvrtko