Re: [PATCH] MIPS: tlbex: Fix bugs in tlbchange handler

Hi, David and Maciej,

I *did* answer your question, which you can see here:
https://patchwork.linux-mips.org/patch/12240/

When an unaligned access is trapped, do_ade() will access the user address
with EXL=1, and that may trigger a TLB refill.
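
To make that concrete, here is a minimal sketch of the scenario (illustrative
only: the real do_ade() in arch/mips/kernel/unaligned.c reads the faulting
instruction first and emulates it, so it does considerably more than this):

    #include <linux/linkage.h>      /* asmlinkage */
    #include <linux/uaccess.h>      /* __get_user() */
    #include <asm/ptrace.h>         /* struct pt_regs */

    /*
     * Address-error (unaligned access) exception handler.  It runs with
     * CP0_Status.EXL still set, yet it has to read the operand from the
     * faulting user address in order to emulate the access in software.
     */
    asmlinkage void do_ade(struct pt_regs *regs)
    {
            unsigned int value;

            /*
             * This load goes through a user virtual address.  If that page
             * has no matching TLB entry, the TLB miss is taken with EXL=1,
             * so it is dispatched to the general exception vector (the TLB
             * invalid path) rather than to the TLB refill handler.
             */
            if (__get_user(value, (unsigned int __user *)regs->cp0_badvaddr))
                    return;         /* -EFAULT: signal delivery etc. omitted */

            /* ... emulate the unaligned load/store using "value" ... */
    }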

Huacai

On Thu, Feb 25, 2016 at 9:06 AM, David Daney <ddaney.cavm@xxxxxxxxx> wrote:
> On 02/24/2016 04:40 PM, Maciej W. Rozycki wrote:
>>
>> On Wed, 24 Feb 2016, Huacai Chen wrote:
>>
>>> If a TLB miss is triggered when EXL=1, the TLB refill exception is treated
>>> as a TLB invalid exception, so tlbp may fail.  In this situation, the
>>> CP0_Index register doesn't contain a valid value.  This may not be a
>>> problem for the VTLB since it is fully associative.  However, the FTLB is
>>> set-associative, so not every TLB entry is valid for a specific address.
>>> Thus, we should use tlbwr instead of tlbwi when tlbp fails.
>>
>>
>>   Can you please explain exactly why this change is needed?  You're
>> changing pretty generic code which has worked since forever.  So why is a
>> change suddenly needed?  Our kernel entry/exit code has been written such
>> that no mapped memory is accessed with EXL=1, so no TLB exception is
>> expected to ever happen in these circumstances.  So what's the scenario
>> you mean to address?  Your patch description does not explain it.
>>
>
> I asked this exact same question back on Jan. 26, when the patch was
> previously posted.
>
> No answer was given; all we got was the same thing again, with no
> explanation.
>
> David Daney
>
>>> @@ -1913,7 +1935,14 @@ build_r4000_tlbchange_handler_tail(u32 **p, struct uasm_label **l,
>>>         uasm_i_ori(p, ptr, ptr, sizeof(pte_t));
>>>         uasm_i_xori(p, ptr, ptr, sizeof(pte_t));
>>>         build_update_entries(p, tmp, ptr);
>>> +       uasm_i_mfc0(p, ptr, C0_INDEX);
>>> +       uasm_il_bltz(p, r, ptr, label_tail_miss);
>>> +       uasm_i_nop(p);
>>>         build_tlb_write_entry(p, l, r, tlb_indexed);
>>> +       uasm_il_b(p, r, label_leave);
>>> +       uasm_i_nop(p);
>>> +       uasm_l_tail_miss(l, *p);
>>> +       build_tlb_write_entry(p, l, r, tlb_random);
>>>         uasm_l_leave(l, *p);
>>>         build_restore_work_registers(p);
>>>         uasm_i_eret(p); /* return from trap */
>>
>>
>>   Specifically, you're causing a performance hit here, in a fast path, for
>> everyone.  If you have a scenario that needs this change, then please make
>> it conditional on the circumstances and keep the handler unchanged in all
>> the other cases.
>>
>>    Maciej

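For reference, the hunk quoted above makes build_r4000_tlbchange_handler_tail()
check the tlbp result before choosing the TLB write instruction.  Rendered as
plain C with the helpers from arch/mips/include/asm/mipsregs.h, the decision
looks roughly like the sketch below (illustrative only; the real handler is
generated with uasm, and the wrapper function name here is made up for the
example):

    #include <asm/mipsregs.h>       /* read_c0_index(), tlb_write_*() */

    /*
     * After tlbp, bit 31 of CP0_Index (the P bit) is set when no matching
     * entry was found.  In that case CP0_Index does not name a valid slot,
     * so an indexed write could land in the wrong way of the set-associative
     * FTLB; fall back to a random write instead.
     */
    static void write_updated_tlb_entry(void)
    {
            int idx = read_c0_index();

            if (idx < 0)                    /* P bit set: tlbp missed */
                    tlb_write_random();     /* let the hardware pick a way */
            else
                    tlb_write_indexed();    /* overwrite the entry tlbp found */
    }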