On 2/18/19 3:31 PM, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 09:04:57PM +0100, David Hildenbrand wrote:
>>>>>>> So I'm fine with a simple implementation but the interface needs to
>>>>>>> allow the hypervisor to process hints in parallel while the guest is
>>>>>>> running. We can then fix any issues on the hypervisor without
>>>>>>> breaking guests.
>>>>>> Yes, I am fine with defining an interface that theoretically lets us
>>>>>> change the implementation in the guest later. I consider this even a
>>>>>> prerequisite. IMHO the interface shouldn't be different, it will be
>>>>>> exactly the same.
>>>>>>
>>>>>> It is just "who" calls the batch freeing and waits for it. And as I
>>>>>> outlined here, doing it without additional threads at least avoids us,
>>>>>> for now, having to think about dynamic data structures and about
>>>>>> sometimes not being able to report "because the thread is still busy
>>>>>> reporting or wasn't scheduled yet".
>>>>> Sorry, I wasn't clear. I think we need the ability to change the
>>>>> implementation in the *host* later. IOW don't rely on the
>>>>> host being synchronous.
>>>>>
>>>>>
>>>> I actually misread it :). Either way, there has to be a mechanism to
>>>> synchronize.
>>>>
>>>> If we are going via a bare hypercall (like s390x, like what Alexander
>>>> proposes), it is going to be a synchronous interface either way. With
>>>> just a bare hypercall, there will not really be any blocking on the
>>>> guest side.
>>> It bothers me that we are now tied to the interface being synchronous.
>>> We won't be able to fix it if there's an issue, as that would break
>>> guests.
>> I assume with "fix it" you mean "fix kfree taking longer on every X call"?
>>
>> Yes, as I initially wrote, this mimics s390x. That might be good (we
>> know it has been working for years) and bad (we are inheriting the same
>> problem class, if it exists). And being synchronous is part of the
>> approach for now.
> BTW, on s390 are these hypercalls handled by Linux?
>
>> I tend to focus on the first part (we don't know anything besides that
>> it is working) while you focus on the second part (there could be a
>> potential problem). Having a real problem at hand would be great, then
>> we would know what exactly we actually have to fix. But read below.
> If we end up doing a hypercall per THP, maybe we could at least
> not block with interrupts disabled? Poll in the guest until the
> hypervisor reports it's done? That would already be an
> improvement IMHO. E.g. perf within the guest will point you
> in the right direction and towards disabling hinting.
>
>
>>>> Via virtio, I guess it is waiting for a response to a request, right?
>>> For the buffer to be used, yes. And it could mean putting some pages
>>> aside until the hypervisor is done with them. Then you don't need
>>> timers or tricks like this, you can get an interrupt and start using
>>> the memory.
>> I am very open to such an approach as long as we can make it work and it
>> is not too complicated. (-> simple)
>>
>> This would mean, for example:
>>
>> 1. Collect entries to be reported per VCPU in a buffer. Say a magic
>> number of 256/512 entries.
>>
>> 2. Once the buffer is full, do the crazy "take pages out of the balloon"
>> action and report them to the hypervisor via virtio. Let the VCPU
>> continue. This will require some memory to store the request. Small
>> hiccup for the VCPU to kick off the reporting to the hypervisor.
>>
>> 3. On interrupt/response, go over the response and put the pages back
>> into the buddy.
>>
>> (assuming that reporting a bulk of frees is better than reporting every
>> single free, obviously)
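To make the flow above concrete, here is a minimal sketch of the capture
and report side (steps 1 and 2). It assumes a hypothetical
report_buffer_to_host() that queues an asynchronous virtio request; the
buffer size and all names are illustrative, not the posted patch:

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/mm.h>

/* Step 1: a fixed, per-VCPU (per-CPU) buffer of page-frame numbers. */
struct hint_buffer {
	unsigned long pfns[256];	/* "magic number" size from above */
	unsigned int idx;
};
static DEFINE_PER_CPU(struct hint_buffer, hint_buf);

/* Hypothetical: queue an async virtio request, do not wait for it. */
void report_buffer_to_host(struct hint_buffer *b);

/* Called from the freeing path; never blocks the VCPU. */
void hint_capture(struct page *page)
{
	struct hint_buffer *b = this_cpu_ptr(&hint_buf);

	/* Batch already handed to the host and not yet returned: skip. */
	if (b->idx >= ARRAY_SIZE(b->pfns))
		return;

	b->pfns[b->idx++] = page_to_pfn(page);
	if (b->idx == ARRAY_SIZE(b->pfns)) {
		/*
		 * Step 2: buffer full -- report the whole batch to the
		 * hypervisor and let the VCPU continue. b->idx is only
		 * reset by the response callback (step 3, sketched below),
		 * so frees in the meantime are simply not captured.
		 */
		report_buffer_to_host(b);
	}
}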
>>
>> This could allow nice things like "when OOM gets triggered, see if
>> pages are currently being reported and wait until they have been put
>> back into the buddy, then return 'new pages available'", so in a real
>> "low on memory" scenario, no OOM killer would get involved. This could
>> address the issue Wei had with reporting when low on memory.
>>
>> Is that something you have in mind?
> Yes, that seems more future proof, I think.
>
>> I assume we would have to allocate
>> memory when crafting the new requests. This is the only reason I tend to
>> prefer a synchronous interface for now. But if allocation is not a
>> problem, great.
> There are two main ways to avoid allocation:
> 1. do not add extra data on top of each chunk passed

If I am not wrong, this is close to what we have right now. One issue I
see right now is that I am polling while the host is freeing the memory.
In the next version I could tie the logic which returns pages to the
buddy and resets the per-CPU array index to 0 to the callback (i.e., it
happens once we receive a response from the host). Another change I am
testing right now is to only capture 'MAX_ORDER - 1' pages; see the
sketch after this message.

> 2. add extra data but pre-allocate buffers for it
>
>> --
>>
>> Thanks,
>>
>> David / dhildenb

--
Regards
Nitesh
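A matching sketch of the completion side described above: the virtqueue
callback returns the batch to the buddy and resets the per-CPU index to
0, and only 'MAX_ORDER - 1' pages are captured. release_page_to_buddy()
and hint_should_capture() are hypothetical names, struct hint_buffer is
reused from the earlier sketch, and none of this is the posted code:

#include <linux/percpu.h>
#include <linux/mm.h>

/* Hypothetical helper that hands a single page back to the buddy. */
void release_page_to_buddy(unsigned long pfn);

/* Only capture the largest chunks, as suggested above. */
static bool hint_should_capture(unsigned int order)
{
	return order == MAX_ORDER - 1;
}

/*
 * Virtqueue callback: the host has finished processing the reported
 * batch. Return the pages to the buddy and reset the per-CPU index to
 * 0 so the VCPU can start filling the buffer again.
 */
static void hint_report_done(struct hint_buffer *b)
{
	unsigned int i;

	for (i = 0; i < b->idx; i++)
		release_page_to_buddy(b->pfns[i]);
	b->idx = 0;
}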