On Thu, Dec 12, 2019 at 09:12:04AM +0100, Paolo Bonzini wrote:
> On 12/12/19 08:36, Michael S. Tsirkin wrote:
> > On Thu, Dec 12, 2019 at 01:08:14AM +0100, Paolo Bonzini wrote:
> >>>> I'd say it won't be a big issue on locking 1/2M of host mem for a
> >>>> vm...
> >>>> Also note that if dirty ring is enabled, I plan to evaporate the
> >>>> dirty_bitmap in the next post. The old kvm->dirty_bitmap takes
> >>>> $GUEST_MEM/32K*2 mem. E.g., for 64G guest it's 64G/32K*2=4M. If with
> >>>> dirty ring of 8 vcpus, that could be 64K*8=0.5M, which could be even
> >>>> less memory used.
> >>>
> >>> Right - I think Avi described the bitmap in kernel memory as one of
> >>> design mistakes. Why repeat that with the new design?
> >>
> >> Do you have a source for that?
> >
> > Nope, it was a private talk.
> >
> >> At least the dirty bitmap has to be
> >> accessed from atomic context so it seems unlikely that it can be moved
> >> to user memory.
> >
> > Why is that? We could surely do it from VCPU context?
>
> Spinlock is taken.

Right, but that's an implementation detail, isn't it?

> >> The dirty ring could use user memory indeed, but it would be much harder
> >> to set up (multiple ioctls for each ring? what to do if userspace
> >> forgets one? etc.).
> >
> > Why multiple ioctls? If you do like virtio packed ring you just need the
> > base and the size.
>
> You have multiple rings, so multiple invocations of one ioctl.
>
> Paolo

Oh. So when you said "multiple ioctls for each ring" - I guess you meant:
"multiple ioctls - one for each ring"?

And it's true, but then it allows supporting things like resize
in a clean way without any effort in the kernel.
You get a new ring address - you switch to that one.
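For illustration, the "base and size per ring" setup described above could
look something like this from userspace. This is only a sketch of the idea
under discussion: the ioctl name KVM_SET_DIRTY_RING, its number, the struct
layout, and the helper below are all invented here, not the real KVM ABI.

    /*
     * Hypothetical sketch: register one dirty ring per vcpu by passing
     * a userspace base address and a size, virtio-packed-ring style.
     * All names and numbers here are made up for illustration.
     */
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    struct kvm_dirty_ring_params {
            uint64_t base;  /* userspace address of the ring buffer */
            uint32_t size;  /* number of ring entries */
            uint32_t pad;
    };

    /* Invented ioctl number, for illustration only. */
    #define KVM_SET_DIRTY_RING _IOW('k', 0xff, struct kvm_dirty_ring_params)

    /* One invocation of the same ioctl per vcpu ring, as Paolo notes. */
    static int setup_dirty_rings(const int *vcpu_fds, int nr_vcpus,
                                 const struct kvm_dirty_ring_params *rings)
    {
            for (int i = 0; i < nr_vcpus; i++)
                    if (ioctl(vcpu_fds[i], KVM_SET_DIRTY_RING, &rings[i]) < 0)
                            return -1;
            return 0;
    }

Resize would then be just another invocation of the same ioctl with a new
base and size: the kernel switches to the new ring address, as described
above, with no extra resize machinery needed on the kernel side.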
--
MST