Re: [PATCH 6/8] mm/highmem: make kmap cache coloring aware

On Tue, Jul 22, 2014 at 11:35 PM, Leonid Yegoshin
<Leonid.Yegoshin@xxxxxxxxxx> wrote:
> On 07/22/2014 12:01 PM, Max Filippov wrote:
>>
>> From: Leonid Yegoshin <Leonid.Yegoshin@xxxxxxxxxx>
>>
>> Provide hooks that allow architectures with aliasing cache to align
>> mapping address of high pages according to their color. Such architectures
>> may enforce similar coloring of low- and high-memory page mappings and
>> reuse existing cache management functions to support highmem.
>>
>> Cc: linux-mm@xxxxxxxxx
>> Cc: linux-arch@xxxxxxxxxxxxxxx
>> Cc: linux-mips@xxxxxxxxxxxxxx
>> Cc: David Rientjes <rientjes@xxxxxxxxxx>
>> Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@xxxxxxxxxx>
>> [ Max: extract architecture-independent part of the original patch, clean
>>    up checkpatch and build warnings. ]
>> Signed-off-by: Max Filippov <jcmvbkbc@xxxxxxxxx>
>> ---
>> Changes since the initial version:
>> - define set_pkmap_color(pg, cl) as do { } while (0) instead of /* */;
>> - rename is_no_more_pkmaps to no_more_pkmaps;
>> - change 'if (count > 0)' to 'if (count)' to better match the original
>>    code behavior;
>>
>>   mm/highmem.c | 19 ++++++++++++++++---
>>   1 file changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/highmem.c b/mm/highmem.c
>> index b32b70c..88fb62e 100644
>> --- a/mm/highmem.c
>> +++ b/mm/highmem.c
>> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
>>    */
>>   #ifdef CONFIG_HIGHMEM
>>
>> +#ifndef ARCH_PKMAP_COLORING
>> +#define set_pkmap_color(pg, cl)                do { } while (0)
>> +#define get_last_pkmap_nr(p, cl)       (p)
>> +#define get_next_pkmap_nr(p, cl)       (((p) + 1) & LAST_PKMAP_MASK)
>> +#define no_more_pkmaps(p, cl)          (!(p))
>> +#define get_next_pkmap_counter(c, cl)  ((c) - 1)
>> +#endif
>> +
>>   unsigned long totalhigh_pages __read_mostly;
>>   EXPORT_SYMBOL(totalhigh_pages);
>>
>> @@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
>>   {
>>         unsigned long vaddr;
>>         int count;
>> +       int color __maybe_unused;
>> +
>> +       set_pkmap_color(page, color);
>> +       last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
>>     start:
>>         count = LAST_PKMAP;
>>         /* Find an empty entry */
>>         for (;;) {
>> -               last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
>> -               if (!last_pkmap_nr) {
>> +               last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
>> +               if (no_more_pkmaps(last_pkmap_nr, color)) {
>>                         flush_all_zero_pkmaps();
>>                         count = LAST_PKMAP;
>>                 }
>>                 if (!pkmap_count[last_pkmap_nr])
>>                         break;  /* Found a usable entry */
>> -               if (--count)
>> +               count = get_next_pkmap_counter(count, color);
>> +               if (count)
>>                         continue;
>>                 /*
>
> I would like to go back to "if (count > 0)".
>
> The reason is that it gives an easy way to jump through same-coloured
> pages: the next element is calculated by decrementing by the number of
> colours rather than by 1, so the counter can easily become negative on
> the last available page:
>
> #define get_next_pkmap_counter(c, cl)	((c) - FIX_N_COLOURS)
>
> where FIX_N_COLOURS is the maximum number of page colours.

The initial value of c (i.e. LAST_PKMAP) should be a multiple of
FIX_N_COLOURS, so that should not be a problem.

> Besides that, it is good practice for stopping the cycle.

But I agree with that.

-- 
Thanks.
-- Max

