On Friday 21 March 2008 01:04:17 Anthony Liguori wrote:
> Rusty Russell wrote:
> > From: Paul TBBle Hampson <Paul.Hampson@xxxxxxxxx>
> >
> > This creates a file in $HOME/.lguest/ to directly back the RAM and DMA
> > memory mappings created by map_zeroed_pages.
>
> I created a test program recently that measured the latency of
> reads/writes to an mmap()'d file in /dev/shm and in a normal filesystem.
> Even after unlinking the underlying file, the write latency was much
> better with an mmap()'d file in /dev/shm.

How odd!  Do you have any idea why?

> /dev/shm is not really for general use.  I think we'll want to have our
> own tmpfs mount that we use to create VM images.

If we're going to mod the kernel, how about an "mmap this part of their
address space" call, with the kernel keeping the mappings in sync?

But I think that if we want speed, we should probably be doing the copy
between address spaces in-kernel so we can do lightweight exits.

> I also prefer to use a unix socket for communication, unlink the file
> immediately after open, and then pass the fd via SCM_RIGHTS to the
> other process.

Yeah, I shied away from that because cred passing kills whole litters of
puppies.  It makes for better encapsulation though, so I'd do it that
way in a serious implementation.

Cheers,
Rusty.
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/virtualization