On 29/01/2013 17:47, Anthony Liguori wrote:
> Paolo Bonzini <pbonzini@xxxxxxxxxx> writes:
>
>> On 29/01/2013 16:41, Juan Quintela wrote:
>>> * Replacing select(2) so that we will not hit the 1024 fd_set limit
>>>   in the future. (stefan)
>>>
>>>   Add checks for fds bigger than 1024?  Multifunction devices use a
>>>   lot of fds per device.
>>>
>>>   Portability?
>>>   Use glib, and let it use poll underneath?
>>>   slirp is a problem.
>>>   In the end: moving to a glib event loop; how we arrive there is
>>>   the discussion.
>>
>> We can use g_poll while keeping the main-loop.c wrappers around the
>> glib event loop.  Both slirp and iohandler.c access the fd_sets
>> randomly, so we need to remember some state between the fill and poll
>> functions.  We can use two main-loop.c functions:
>>
>>   int qemu_add_poll_fd(int fd, int events);
>>
>>     select: writes the events into three fd_sets, returns the file
>>     descriptor itself
>>
>>     poll: writes a GPollFD into a dynamically-sized array (of
>>     GPollFDs) and returns the index in the array
>>
>>   int qemu_get_poll_fd_revents(int index);
>>
>>     select: takes the file descriptor (returned by qemu_add_poll_fd),
>>     makes up revents based on the three fd_sets
>>
>>     poll: takes the index into the array and returns the
>>     corresponding revents
>>
>> iohandler.c can simply store the index into struct IOHandlerRecord,
>> and use it later.  slirp can do the same for struct socket.
>>
>> The select code can be kept for Windows after POSIX switches to poll.
>
> Doesn't g_poll already do this under the covers for Windows?

No: on Windows, g_poll waits on synchronization objects (the analogue
of Linux eventfd or timerfd).  Sockets still require select.  You can
tie a socket to a synchronization object; that way socket activity can
make g_poll return, and in fact that is exactly what QEMU does.  But
you still need to retrieve the currently-active events with select, so
iohandler.c and slirp (which use sockets) have to keep working in terms
of select.

Paolo
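
[For illustration: a minimal sketch of the two main-loop.c functions
proposed above, in their poll-based variant.  Only the two function
names and their contract come from the proposal; the GArray
bookkeeping, the fill/dispatch split, and main_loop_wait_once are
assumptions made up for this sketch, not QEMU's actual code.]

#include <glib.h>

static GArray *poll_fds;   /* dynamically-sized array of GPollFD */

/* Called during the "fill" phase; returns an index that the caller
 * (iohandler.c, slirp) stores and uses again after g_poll(). */
int qemu_add_poll_fd(int fd, int events)
{
    GPollFD pfd = { .fd = fd, .events = events, .revents = 0 };

    if (!poll_fds) {
        poll_fds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
    }
    g_array_append_val(poll_fds, pfd);
    return poll_fds->len - 1;
}

/* Called during the dispatch phase, after g_poll() has filled in
 * the revents fields. */
int qemu_get_poll_fd_revents(int index)
{
    return g_array_index(poll_fds, GPollFD, index).revents;
}

/* The main loop would then do something like: */
void main_loop_wait_once(int timeout)
{
    g_poll((GPollFD *)poll_fds->data, poll_fds->len, timeout);
    /* ... dispatch: each IOHandlerRecord / struct socket looks up
     * its stored index with qemu_get_poll_fd_revents() ... */
    g_array_set_size(poll_fds, 0);   /* refilled on the next iteration */
}

[The select-based variant would instead set bits in three fd_sets,
return the fd itself, and synthesize revents from the fd_sets, exactly
as the quoted proposal describes.]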
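
[Likewise, a hedged sketch of the Windows pattern described in the
reply: WSAEventSelect() ties a socket to an event object so that socket
activity can make g_poll() return, and a zero-timeout select() then
retrieves the currently-active events.  WSAEventSelect, CreateEvent,
g_poll and select are real Win32/GLib calls; the helper names
(register_socket, wait_and_dispatch) and the single shared event object
are illustrative assumptions, with error handling and the surrounding
main-loop plumbing omitted.]

#include <winsock2.h>
#include <glib.h>

static HANDLE socket_event;

/* Tie a socket to an event object: the event is signalled whenever the
 * socket becomes readable, writable, etc., which is what lets socket
 * activity terminate g_poll() on Windows. */
void register_socket(SOCKET s)
{
    if (!socket_event) {
        socket_event = CreateEvent(NULL, FALSE, FALSE, NULL);
    }
    WSAEventSelect(s, socket_event,
                   FD_READ | FD_WRITE | FD_OOB |
                   FD_ACCEPT | FD_CONNECT | FD_CLOSE);
}

/* Wait on the event with g_poll(), then use a zero-timeout select() to
 * find out *which* sockets are active: g_poll() on Windows can only say
 * that the event fired, not why. */
void wait_and_dispatch(int nfds, fd_set *rfds, fd_set *wfds, fd_set *xfds)
{
    GPollFD pfd = { .fd = (gintptr)socket_event, .events = G_IO_IN };
    struct timeval tv0 = { 0, 0 };

    g_poll(&pfd, 1, -1 /* block until something is signalled */);
    select(nfds, rfds, wfds, xfds, &tv0);
}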