Re: [PATCH Part2 v5 06/45] x86/sev: Invalid pages from direct map when adding it to RMP table

Hi Boris,


On 9/29/21 9:34 AM, Borislav Petkov wrote:

...


>> Improve the rmp_make_private() to
>> invalid state so that pages cannot be used in the direct-map after its
>> added in the RMP table, and restore to its default valid permission after
>> the pages are removed from the RMP table.
> That sentence needs rewriting into proper english.
>
> The more important thing is, though, this doesn't talk about *why*
> you're doing this: you want to remove pages from the direct map when
> they're in the RMP table because something might modify the page and
> then the RMP check will fail?

I'll work to improve the commit description.


> Also, set_direct_map_invalid_noflush() simply clears the Present and RW
> bit of a pte.
>
> So what's up?

set_direct_map_invalid_noflush() does two things (rough sketch below):

1) Split the host page table mapping, if the page is currently covered by
   a large mapping.

2) Clear the Present and RW bits from the PTE.
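
On x86 both steps go through the CPA machinery; from my reading of
arch/x86/mm/pat/set_memory.c it boils down to roughly the following
(paraphrased from memory, exact details may differ):

	int set_direct_map_invalid_noflush(struct page *page)
	{
		/*
		 * Clears _PAGE_PRESENT and _PAGE_RW on the direct-map PTE,
		 * splitting a large mapping first if needed. No TLB flush
		 * is done here.
		 */
		return __set_pages_np(page, 1);
	}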

In previous versions of the series I was using set_memory_4k() to split
the direct map before adding the pages to the RMP table. Based on Sean's
review comment [1], I switched to set_direct_map_invalid_noflush() so
that the pages are not just split but also removed from the direct map.
The thought process is that if set_direct_map_default_noflush() is later
improved to restore the large mapping, it will all keep working
transparently.

[1] https://lore.kernel.org/lkml/YO9kP1v0TAFXISHD@xxxxxxxxxx/#t
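
For completeness, the invalidate side in this patch is essentially the
mirror image of the restore_direct_map() hunk quoted further below; a
sketch (the exact helper name in the patch may differ):

	static int invalidate_direct_map(u64 pfn, int npages)
	{
		int i, ret;

		for (i = 0; i < npages; i++) {
			ret = set_direct_map_invalid_noflush(pfn_to_page(pfn + i));
			if (ret)
				goto cleanup;
		}

		return 0;

	cleanup:
		/* Restore the pages that were already invalidated. */
		restore_direct_map(pfn, i);
		return ret;
	}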


>> Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
>> ---
>>  arch/x86/kernel/sev.c | 61 ++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 60 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
>> index 8627c49666c9..bad41deb8335 100644
>> --- a/arch/x86/kernel/sev.c
>> +++ b/arch/x86/kernel/sev.c
>> @@ -2441,10 +2441,42 @@ int psmash(u64 pfn)
>>  }
>>  EXPORT_SYMBOL_GPL(psmash);
>>  
>> +static int restore_direct_map(u64 pfn, int npages)
> restore_pages_in_direct_map()
>
>> +{
>> +	int i, ret = 0;
>> +
>> +	for (i = 0; i < npages; i++) {
>> +		ret = set_direct_map_default_noflush(pfn_to_page(pfn + i));
>> +		if (ret)
>> +			goto cleanup;
>> +	}
> So this is looping over a set of virtually contiguous pages, I presume,
> and if so, you should add a function called
>
> 	set_memory_p_rw()
>
> to arch/x86/mm/pat/set_memory.c which does
>
> 	return change_page_attr_set(&addr, numpages, __pgprot(_PAGE_PRESENT | _PAGE_RW), 0);
>
> so that you can do all pages in one go.

I will look into it.
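
For what it's worth, my reading of your suggestion is a small helper in
arch/x86/mm/pat/set_memory.c along these lines (name and body taken from
your snippet above, untested sketch):

	int set_memory_p_rw(unsigned long addr, int numpages)
	{
		/* Set _PAGE_PRESENT and _PAGE_RW on all numpages PTEs in one go. */
		return change_page_attr_set(&addr, numpages,
					    __pgprot(_PAGE_PRESENT | _PAGE_RW), 0);
	}

With that, the per-page loop in restore_direct_map() could collapse to a
single call on page_address(pfn_to_page(pfn)).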

thanks


