Hi,

I'll start with a one-off question here, so please cc me on the reply.

We are running a largish cluster and are currently buying GPGPU systems (Tesla and soon Fermi based). We will have at least two, possibly four, of these cards per box, and we have the problem that some codes need different CUDA kernel drivers to run. As these boxes have 4 CPU cores, 12 GB of memory, and CPU VT support, we thought this might be solvable by creating (para-)virtualized guests on the boxes and passing one GPGPU device into a guest at a time. Inside such a guest we could then run whatever kernel/driver combination is necessary.

However, since my virtualization experience so far only stretches to OpenVZ and VirtualBox (plus some tinkering with Xen a couple of years back), I don't know if KVM is the right approach here. We need something we can set up automatically via the CLI, i.e. starting and stopping the guests must be fully automated. We don't need a graphical environment within the guests; plain text is good enough.

What do you think -- is KVM the right choice for this? Can we pass a device directly into a guest?

Cheers

Carsten
--
Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
Phone/Fax: +49 511 762-17185 / -17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/3
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
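[For what it's worth, a sketch of what CLI-driven PCI device assignment to a KVM guest could look like with the legacy pci-stub mechanism and qemu-kvm of that era. The PCI address 0000:07:00.0, the vendor/device IDs, and the image path are made-up examples; check `lspci -nn` for your actual cards, and note the exact flags depend on your qemu-kvm version.]

```shell
# Detach the GPU from the host driver and hand it to pci-stub so the
# host kernel's nvidia driver no longer claims it.
# "10de 06d1" (vendor/device ID) and 0000:07:00.0 are hypothetical --
# read the real values from `lspci -nn`.
echo "10de 06d1"    > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:07:00.0" > /sys/bus/pci/devices/0000:07:00.0/driver/unbind
echo "0000:07:00.0" > /sys/bus/pci/drivers/pci-stub/bind

# Start a headless guest with the device assigned.  -nographic gives a
# plain-text serial console, so start/stop is easy to script (e.g. from
# an init script or a cluster job prolog/epilog).
qemu-kvm -m 2048 -smp 2 \
    -drive file=/var/lib/guests/cuda-guest.img \
    -device pci-assign,host=07:00.0 \
    -nographic
```

[With two or four cards per box you would repeat the unbind/bind step per device and hand each guest a different `host=` address.]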