Re: [RFC PATCH 03/12] s390/mm: force swiotlb for protected virtualization

On Tue, 9 Apr 2019 12:16:47 +0200
Cornelia Huck <cohuck@xxxxxxxxxx> wrote:

> On Fri,  5 Apr 2019 01:16:13 +0200
> Halil Pasic <pasic@xxxxxxxxxxxxx> wrote:
> 
> > On s390 protected virtualization guests also have to use bounce I/O
> > buffers.  That requires some plumbing.
> > 
> > Let us make sure any device using DMA API accordingly is spared from the
                    ^,                                   ^,
Maybe this helps...

> > problems that hypervisor attempting I/O to a non-shared secure page would
> > bring.
> 
> I have problems parsing this sentence :(
> 
> Do you mean that we want to exclude pages for I/O from encryption?

The intended meaning is:
* Devices that do use the DMA API (properly) to get/map the memory
  that is used to talk to the hypervisor should be OK with PV (protected
  virtualization). I.e. for such devices, PV or not PV is basically
  transparent.
* But if a device does not use DMA API for the memory that is used to
  talk to the hypervisor this patch won't help.

And yes, the gist of it is: memory accessed by the hypervisor needs to
be on pages excluded from protection (which in the case of PV is
technically not encryption).

Does that help?

> 
> > 
> > Signed-off-by: Halil Pasic <pasic@xxxxxxxxxxxxx>
> > ---
> >  arch/s390/Kconfig                   |  4 ++++
> >  arch/s390/include/asm/Kbuild        |  1 -
> >  arch/s390/include/asm/dma-mapping.h | 13 +++++++++++
> >  arch/s390/include/asm/mem_encrypt.h | 18 +++++++++++++++
> >  arch/s390/mm/init.c                 | 44 +++++++++++++++++++++++++++++++++++++
> >  5 files changed, 79 insertions(+), 1 deletion(-)
> >  create mode 100644 arch/s390/include/asm/dma-mapping.h
> >  create mode 100644 arch/s390/include/asm/mem_encrypt.h
> 
> (...)
> 
> > @@ -126,6 +129,45 @@ void mark_rodata_ro(void)
> >  	pr_info("Write protected read-only-after-init data: %luk\n", size >> 10);
> >  }
> >  
> > +int set_memory_encrypted(unsigned long addr, int numpages)
> > +{
> > +	/* also called for the swiotlb bounce buffers, make all pages shared */
> > +	/* TODO: do ultravisor calls */
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(set_memory_encrypted);
> > +
> > +int set_memory_decrypted(unsigned long addr, int numpages)
> > +{
> > +	/* also called for the swiotlb bounce buffers, make all pages shared */
> > +	/* TODO: do ultravisor calls */
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(set_memory_decrypted);
> > +
> > +/* are we a protected virtualization guest? */
> > +bool sev_active(void)
> > +{
> > +	/*
> > +	 * TODO: Do proper detection using ultravisor, for now let us fake we
> > +	 *  have it so the code gets exercised.
> 
> That's the swiotlb stuff, right?
> 

You mean 'That' == code to get exercised == 'swiotlb stuff'?

If yes, then the answer is: kind of. The swiotlb (i.e. bounce buffers)
is used when we map memory for streaming DMA (like the buffers pointed
to by the descriptors in case of the virtio ring). The other part of it
is the memory allocated as DMA coherent (i.e. the virtio ring (desc,
avail, used) itself).

> (The patches will obviously need some reordering before it is actually
> getting merged.)
> 

What do you mean by reordering?

One reason why this is an early RFC is the missing dependency (i.e. the
stuff described by most of the TODO comments), as pointed out in the
cover letter. Another reason is that I wanted to avoid putting lots of
effort into fine-polishing before getting some feedback on the basics
from the community. ;)


> > +	 */
> > +	return true;
> > +}
> > +EXPORT_SYMBOL_GPL(sev_active);
> > +
> > +/* protected virtualization */
> > +static void pv_init(void)
> > +{
> > +	if (!sev_active())
> > +		return;
> > +
> > +	/* make sure bounce buffers are shared */
> > +	swiotlb_init(1);
> > +	swiotlb_update_mem_attributes();
> > +	swiotlb_force = SWIOTLB_FORCE;
> > +}
> > +
> >  void __init mem_init(void)
> >  {
> >  	cpumask_set_cpu(0, &init_mm.context.cpu_attach_mask);
> > @@ -134,6 +176,8 @@ void __init mem_init(void)
> >  	set_max_mapnr(max_low_pfn);
> >          high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
> >  
> > +	pv_init();
> > +
> >  	/* Setup guest page hinting */
> >  	cmma_init();
> >  
> 



