Re: Infiniband / RDMA


 



Lance Couture
I.T. Services, WestGrid
Simon Fraser University
v. 778.782.8188

On 2011-08-05, at 3:20 PM, Javier Guerra Giraldez wrote:

> On Fri, Aug 5, 2011 at 2:14 PM, Lance Couture <lance@xxxxxx> wrote:
>> We are looking at implementing KVM-based virtual machines in our HPC cluster.
>> 
>> Our storage runs over Infiniband using RDMA, but we have been unable to find any real documentation regarding Infiniband, let alone using RDMA.
> 
> Usually it's best to manage storage in the host system, not in the
> VMs.  In that case it's just a normal Linux setup: once you get the
> storage running on the host, KVM will use it, no matter what
> technology sits underneath.
> 
> If, on the other hand, you use Infiniband for some applications
> _inside_ the VMs, then you would need to pass the PCI devices through
> to the VMs; that is a totally different issue, and one I don't know about.
> 
> -- 
> Javier
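To illustrate Javier's first point: if the host manages the RDMA-backed storage, the guest just sees an ordinary virtio disk and never knows what is underneath. A minimal sketch, where the image path, VM memory, and CPU count are all placeholder assumptions:

```shell
# Hypothetical sketch: the host has already mounted the shared storage;
# the guest gets a plain virtio disk backed by an image file on it.
IMG=/mnt/shared/vm-images/node01.img     # placeholder path on the host-mounted FS

# Build the qemu-kvm invocation (printed here for review; run it on the host):
QEMU_CMD="qemu-kvm -m 2048 -smp 2 -drive file=${IMG},if=virtio,cache=none"
echo "${QEMU_CMD}"
```

cache=none is the usual choice for images on shared storage so the host page cache doesn't sit between the guest and the filesystem.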


Thanks for the reply, Javier.

What we need to do is have an IB adapter show up in each VM so we can mount our cluster's global file system, which uses Lustre. Lustre runs over the native IB protocol, which implies RDMA.
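For reference, once a VM does see a working IB adapter, the Lustre client side inside the guest is the usual o2ib (IB/RDMA) setup. A sketch with assumed values; the MGS NID, filesystem name, interface, and mount point are all placeholders:

```shell
# Hypothetical sketch: Lustre client mount over native IB inside the VM.
#
# LNet must be told to use the o2ib transport on the IB interface, e.g. in
# /etc/modprobe.d/lustre.conf:
#
#     options lnet networks=o2ib0(ib0)

modprobe lustre                                   # loads lnet and the o2ib LND
mount -t lustre 10.1.0.1@o2ib0:/scratch /mnt/lustre   # MGS NID and fs name assumed
```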

I have recently found out that our nodes all support Intel VT-d and our IB cards support SR-IOV, so PCI passthrough is probably the best way to do this.
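For the passthrough route, the rough host-side sequence is: detach the device (or one of its SR-IOV virtual functions) from its host driver, bind it to pci-stub, and hand it to qemu. A hypothetical sketch only; the PCI address and vendor/device IDs are placeholders (take them from `lspci -D` / `lspci -n` on your own host), and newer kernels do this through VFIO instead of pci-stub/pci-assign:

```shell
# Hypothetical sketch: assign an IB HCA (or an SR-IOV VF of it) to a guest.
# Run as root on the host; all addresses and IDs below are placeholders.
DEV=0000:06:00.0                      # PCI address of the HCA or VF (from lspci -D)

# Bind the device to pci-stub so the host driver releases it:
echo "15b3 673c" > /sys/bus/pci/drivers/pci-stub/new_id   # vendor/device from lspci -n
echo "${DEV}" > /sys/bus/pci/devices/${DEV}/driver/unbind
echo "${DEV}" > /sys/bus/pci/drivers/pci-stub/bind

# Launch the guest with the device assigned:
qemu-kvm -m 4096 -smp 4 \
  -drive file=/var/lib/libvirt/images/node01.img,if=virtio \
  -device pci-assign,host=06:00.0
```

With SR-IOV you would assign one VF per VM rather than the physical function, so several guests can share one HCA.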


If someone else with passthrough experience has some thoughts on this, it would be appreciated.

- Lance

