Re: [PATCH 2/2] mm/vmalloc: rework the drain logic

Uladzislau Rezki <urezki@xxxxxxxxx> writes:
>> >> >> And I found that the long-latency avoidance logic in
>> >> >> __purge_vmap_area_lazy() appears problematic,
>> >> >> 
>> >> >>          if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
>> >> >>              cond_resched_lock(&free_vmap_area_lock);
>> >> >> 
>> >> >> Shouldn't it be something as follows?
>> >> >> 
>> >> >>          if (i >= BATCH &&
>> >> >>              atomic_long_read(&vmap_lazy_nr) < resched_threshold) {
>> >> >>              cond_resched_lock(&free_vmap_area_lock);
>> >> >>              i = 0;
>> >> >>          } else
>> >> >>              i++;
>> >> >> 
>> >> >> This will accelerate the purging via batching and slow down vmalloc()
>> >> >> via holding free_vmap_area_lock.  If it makes sense, can we try this?
>> >> >> 
>> >> > Probably we can switch to just using a "batch" methodology:
>> >> >
>> >> > <snip>
>> >> >     if (!(i++ % batch_threshold))
>> >> >         cond_resched_lock(&free_vmap_area_lock);
>> >> > <snip>
>> >> 
>> >> That's the typical long latency avoidance method.
>> >> 
>> >> > The question is, which value we should use as a batch_threshold: 100, 1000, etc.
>> >> 
>> >> I think we can do some measurement to determine it?
>> >> 
>> > Hmm.. looking at it one more time I do not see what batching solves.
>> 
>> Without batch protection, we may release the lock and the CPU at any
>> point during the loop whenever "vmap_lazy_nr < resched_threshold".  Too
>> many vmalloc/vfree operations may be done during that time.  So I think
>> we can restrict it.  Batching can improve the performance of purging
>> itself too.
>> 
> In theory:
> I see your point. It is a trade-off, though, between allowing faster vmalloc
> and faster vfree. Batching will make allocation tighter and, yes, speed up
> the draining process by holding a CPU until the batch is drained, while
> introducing latency for other tasks.
>
> In practice:
> I mentioned that already; I think we need to measure the batching approach,
> say with the threshold set to 100, and provide some figures so we have some
> evidence from a practical point of view. For example, run test_vmalloc.sh to
> analyze it. If you see some advantage from a performance point of view, that
> would be great. Just share some data.

Per my understanding, this is a common practice in the kernel to satisfy
both throughput and latency requirements.  But it may not be important
for this specific case.  I am afraid I have no time to work on this now.
Just my 2 cents.  If you don't think that's a good idea, just ignore it.
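
To make the idea concrete, here is a rough sketch of the batched variant
discussed above; the BATCH value and the exact placement inside
__purge_vmap_area_lazy() are only illustrative and would have to be
validated, e.g. with test_vmalloc.sh:

<snip>
    /* BATCH is a placeholder tunable; the real value needs measurement. */
    #define BATCH    100

    unsigned int i = 0;

    /* ... inside the loop that frees the lazily purged areas ... */

    /*
     * Offer to reschedule only once per BATCH freed areas, so the purge
     * makes progress in batches instead of potentially dropping the lock
     * on every iteration.
     */
    if (++i % BATCH == 0 &&
        atomic_long_read(&vmap_lazy_nr) < resched_threshold)
        cond_resched_lock(&free_vmap_area_lock);
<snip>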

>> > Anyway, we need to have some threshold (which we do have) that regulates
>> > the priority between vmalloc()/vfree().
>> >
>> > What more we can do with it:
>> >
>> > - purging could just be performed asynchronously in workqueue context.
>> > Given the fact that we now also merge outstanding areas, the data
>> > structure (rb-tree) will not be so fragmented.
>> 
>> Async works only if there is idle CPU time on other CPUs.  And it may
>> punish other innocent workloads instead of the heavy vmalloc/vfree
>> users.  So we should be careful about that.
>> 
> Yep, scheduling latency will be a side effect of such an approach. The question
> is whether it is negligible or should be considered a risk. I do not think it
> would be a big problem.
>
> I have another issue with it though, which I cannot explain so far. If I do
> the "purge" in a separate worker, I see a memory leak after heavy test
> runs.
>
>> > - lazy_max_pages() can be slightly decreased if there are existing
>> > workloads which suffer from such a large value. It would be good to get
>> > real complaints and evidence.
>> >
>> >> > Apart from that, and in regard to CONFIG_KASAN_VMALLOC, it seems that we
>> >> > are not allowed to drop the free_vmap_area_lock at all, because
>> >> > simultaneous allocations are not allowed within a drain region; they
>> >> > should occur in disjoint regions. But I need to double-check it.
>> >> >
>> >> >>
>> >> >> And, can we reduce lazy_max_pages() to control the length of the
>> >> >> purging list?  It could be > 8K if the vmalloc/vfree size is small.
>> >> >>
>> >> > We can adjust it for sure. But it will influence the number of global
>> >> > TLB flushes that must be performed.
>> >> 
>> >> Em...  For example, if we set it to 100, then the number of TLB
>> >> flushes can already be reduced to 1% of the un-optimized implementation.
>> >> Do you think so?
>> >> 
>> > If we set lazy_max_pages() to a value as low as 100, the performance
>> > will just be destroyed.
>> 
>> Sorry, my original words weren't clear enough.  What I really want to
>> suggest is to control the length of the purging list instead of reducing
>> lazy_max_pages() directly.  That is, we can have an "atomic_t
>> nr_purge_item" to record the length of the purging list and start
>> purging if (vmap_lazy_nr > lazy_max_pages && nr_purge_item >
>> max_purge_item).  vmap_lazy_nr is to control the virtual address space,
>> nr_purge_item is to control the batched purging latency.  "100" is just
>> an example; the real value should be determined according to the test
>> results.
>> 
> OK. Now I see what you meant. Please note, the merging is in place, so
> the list size gets reduced.

Yes.  In theory, even with merging, the purging list may still become too
long in some cases.  And the code/algorithm changes needed to control the
length of the purging list are much smaller than those needed for merging.
So I suggest doing the length control first, then merging.  Again, just my
2 cents.
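
To illustrate what I mean, a rough sketch; the MAX_PURGE_ITEM value and the
need_purge() helper are hypothetical, and the trigger condition is simply the
one suggested above, so the real threshold would have to come from test
results:

<snip>
/* Placeholder threshold; the real value should come from test results. */
#define MAX_PURGE_ITEM    100

/*
 * Length of the purging list: incremented when an area is queued for lazy
 * purging, decremented when the area is actually purged.
 */
static atomic_t nr_purge_item = ATOMIC_INIT(0);

/* Hypothetical helper: start purging only when both limits are exceeded. */
static bool need_purge(void)
{
    return atomic_long_read(&vmap_lazy_nr) > lazy_max_pages() &&
           atomic_read(&nr_purge_item) > MAX_PURGE_ITEM;
}
<snip>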

Best Regards,
Huang, Ying



