On Tue, 2013-07-02 at 13:51 +0200, Paolo Bonzini wrote:
> Il 02/07/2013 13:40, Jiri Denemark ha scritto:
> > On Tue, Jul 02, 2013 at 12:34:44 +0200, Paolo Bonzini wrote:
> >> Il 02/07/2013 08:34, Jiri Denemark ha scritto:
> >>> I'm not sure if that's the right thing to do or not, but it's
> >>> certainly better than before, when the memory locking limit
> >>> completely ignored the need for VRAM and per-disk cache.
> >>> Unfortunately, the original formula was suggested by Avi, who
> >>> moved on to new challenges. Perhaps others could jump in and
> >>> share their opinions (Paolo? :-P).
> >>
> >> I think the disk cache should not be counted in the memory locking
> >> limit.
> >
> > Hmm, I guess you're right. However, we're computing a limit and I
> > feel like allowing QEMU to lock a bit more shouldn't have any bad
> > effects, or am I wrong? On the other hand, it would be pretty easy
> > to let the function know what kind of limit it's going to compute
> > each time it's called.
>
> Yes, both ways are fine. But you should at least have a comment.
>
> >> Apart from that, the code you posted below makes sense.
> >
> > Even with the 1GB addition for VFIO? I have to admit I'm a bit
> > ignorant of VFIO, but shouldn't that limit be derived from the
> > number of attached devices?
>
> If that would be the amount of memory reserved for BARs (PCI memory
> mapped regions), 1 GB should be enough. Let's just ask Alex Williamson.

Yes, the extra is for the MMIO space of devices that gets mapped into
the IOMMU and pinned in the host. 1G is sufficient unless we start
using 64-bit MMIO space and have lots and lots of devices. It's
possible we could get rid of this, at the cost of losing peer-to-peer,
but I'm not sure how useful that is anyway. In the past I haven't been
able to tell the difference between guest RAM and device MMIO, but I
expect that's getting better. Thanks,

Alex

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
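[Editorial note: the limit discussed in the thread above can be sketched as follows. This is an illustrative reconstruction, not libvirt's actual code; the function name and units are assumptions, chosen to match the discussion (guest RAM plus VRAM, a flat 1 GiB of VFIO headroom, and disk cache deliberately excluded per Paolo's point).]

```python
# Hypothetical sketch of the memory-locking-limit formula discussed in
# the thread; not the real libvirt implementation.

KIB_PER_GIB = 1024 * 1024  # 1 GiB expressed in KiB


def mem_lock_limit_kib(guest_ram_kib, vram_kib, has_vfio_devices):
    """Return a memory locking limit in KiB.

    Disk cache is excluded on purpose: pages sitting in the host page
    cache are not locked by QEMU, so they need no locking headroom.
    """
    limit = guest_ram_kib + vram_kib
    if has_vfio_devices:
        # Flat headroom for device MMIO space that gets mapped into the
        # IOMMU and pinned in the host; per Alex, 1 GiB is sufficient
        # unless many devices use 64-bit MMIO space.
        limit += KIB_PER_GIB
    return limit


# Example: a 4 GiB guest with 16 MiB of VRAM and one VFIO device.
print(mem_lock_limit_kib(4 * 1024 * 1024, 16 * 1024, True))
```

Whether the VFIO headroom should instead scale with the number of attached devices is exactly the open question Jiri raises above; the flat 1 GiB reflects Alex's answer.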