I also considered 4), which I did not put on the slide because it is
less effective than 3):
4) migrating the swap entries, which requires scanning the page table
entries.
I briefly mentioned it during the session.
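
For reference, the scan that 4) needs would look roughly like the
sketch below. It is untested and heavily simplified: the pr_debug()
stands in for the actual "re-allocate the entry and rewrite the PTE"
step, and the caller must hold mmap_read_lock(mm).

#include <linux/mm.h>
#include <linux/pagewalk.h>
#include <linux/swap.h>
#include <linux/swapops.h>

/* Visit every swap PTE in an mm that points at one swap device.
 * A real 4) would allocate a new entry at a better offset and
 * rewrite the PTE right here instead of just logging it. */
static int swap_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	pte_t ptent = ptep_get(pte);
	unsigned int type = (unsigned long)walk->private;
	swp_entry_t entry;

	if (!is_swap_pte(ptent))
		return 0;
	entry = pte_to_swp_entry(ptent);
	/* skip migration/hwpoison entries and other swap devices */
	if (non_swap_entry(entry) || swp_type(entry) != type)
		return 0;

	pr_debug("swap pte at %lx -> offset %lu\n",
		 addr, swp_offset(entry));
	return 0;
}

static const struct mm_walk_ops swap_scan_ops = {
	.pte_entry	= swap_pte_entry,
};

Driving that with walk_page_range() for every mm in the system is
exactly the cost that makes 4) less attractive than 3).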

3) might qualify as your transparent solution. It is just much
harder to implement.
Even when we have 3), having some form of 1) can be beneficial as
well (lower IO count, no indirect layer of swap offsets).

>
> I haven't thought about them thoroughly, but at least we may think about
>
> - promoting a low order non-full cluster when we find free high order
>   swap entries.
>
> - stealing a low order non-full cluster with a low usage count for
>   high-order allocation.

Now we are talking.
These two ideas fall well within 2), the buddy allocator.
But the buddy allocator will not be able to address all fragmentation
issues, because the allocator does not control the life cycle of the
swap entries.
I can already see that it will not help Barry's zsmalloc use case much,
because Android likes to keep the swapfile full.
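
To make those two heuristics concrete, here is a minimal userspace toy
model. The cluster layout, field names and the STEAL_MAX_USE cut-off
are all invented for illustration, not taken from the kernel. It also
shows the Android problem: on a swapfile kept full, no cluster passes
the steal cut-off and heuristic 2 never fires.

#include <stdio.h>
#include <stddef.h>

#define NR_CLUSTERS	8
#define STEAL_MAX_USE	4	/* "low usage count" cut-off, invented */

struct cluster {
	int order;	/* order this cluster currently serves */
	int usage;	/* live swap entries still inside it */
};

/* Heuristic 1: a lower-order cluster that has drained to zero usage
 * can simply be promoted to serve a high-order allocation again. */
static struct cluster *find_promotable(struct cluster *c, size_t n,
				       int order)
{
	for (size_t i = 0; i < n; i++)
		if (c[i].order < order && c[i].usage == 0)
			return &c[i];
	return NULL;
}

/* Heuristic 2: steal the least-used lower-order cluster, but only if
 * its usage is low enough that vacating the survivors is cheap. */
static struct cluster *find_stealable(struct cluster *c, size_t n,
				      int order)
{
	struct cluster *victim = NULL;

	for (size_t i = 0; i < n; i++) {
		if (c[i].order >= order || c[i].usage == 0 ||
		    c[i].usage > STEAL_MAX_USE)
			continue;	/* empty ones go via promotion */
		if (!victim || c[i].usage < victim->usage)
			victim = &c[i];
	}
	return victim;
}

int main(void)
{
	struct cluster clusters[NR_CLUSTERS] = {
		{ .order = 0, .usage = 0 },	/* drained: promote   */
		{ .order = 0, .usage = 2 },	/* light: steal       */
		{ .order = 0, .usage = 500 },	/* full: untouchable  */
		{ .order = 4, .usage = 12 },	/* already high order */
	};
	struct cluster *c;

	c = find_promotable(clusters, NR_CLUSTERS, 4);
	printf("promote cluster %td\n", c ? c - clusters : -1);
	c = find_stealable(clusters, NR_CLUSTERS, 4);
	printf("steal   cluster %td\n", c ? c - clusters : -1);
	return 0;
}

Note that whatever survives in a stolen cluster still has to be vacated
somehow, which is where the life cycle problem above bites.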

> - freeing more swap entries when swap devices become fragmented.

That requires a page table scan to free the swap entries, which is
basically 4).

It is all about investment and return. 1) is relatively easy to
implement and gives a good improvement for the investment.

Chris

> >> >> >> that's really important for you, I think that it's better to design
> >> >> >> it like hugetlbfs vs core mm, that is, separated from the
> >> >> >> normal swap subsystem as much as possible.
> >> >> >
> >> >> > I brought up hugetlbfs just to make the point of using reservation,
> >> >> > or isolation of the resource, to prevent mixing with the
> >> >> > fragmentation existing in core mm.
> >> >> > I am not suggesting copying the hugetlbfs implementation into the
> >> >> > swap system. Unlike hugetlbfs, swap allocation is typically done
> >> >> > from the kernel; it is transparent to the application. I don't
> >> >> > think separating it from the swap subsystem is a good way to go.
> >> >> >
> >> >> > This comes down to why you don't like the reservation. E.g., if we
> >> >> > use two swapfiles, with one purely allocated for high order, would
> >> >> > that be better?
> >> >>
> >> >> Sorry, my words weren't accurate.  Personally, I just think that it's
> >> >> better to make the reservation-related code not too intrusive.
> >> >
> >> > Yes. I will try to make it not too intrusive.
> >> >
> >> >> And, before reservation, we need to consider something else first.
> >> >> Is it generally good to swap in with the swap-out order?  Should we
> >> >
> >> > When we have the reservation patch (or other means to sustain
> >> > mixed-size swap allocation/free), we can test it out to get more data
> >> > to reason about it.
> >> > I consider the swap-in size policy an orthogonal issue.
> >>
> >> No.  I don't think so.  If you swap out at a higher order but swap in
> >> at a lower order, you make the swap clusters fragmented.
> >
> > Sounds like that is the reason to swap in at the same order as the
> > swap-out.
> > In any case, my original point still stands. We need to have the
> > ability to allocate high order swap entries with a reasonable success
> > rate *before* we have the option to choose which size to swap in. If
> > allocating a high order swap entry always fails, we will be forced to
> > use the low order one; there is no option to choose from. We can't
> > evaluate "is it generally good to swap in with the swap-out order?"
> > through actual runs.
>
> I think we don't need to fight over that.  Just prove the value of your
> patchset with reasonable use cases and normal workloads.  Data will
> persuade people.




