Re: Infiniband crash

It isn’t that obscure: an HPE C7000 rack with 8 x BL860c Itanium servers and an InfiniBand module. Half of the servers are still running an RHEL4-era single-system-image cluster (totally outdated, but at least with 20 Gbit Mellanox HBAs for a fast cluster file system). It is ideal for most kinds of system services (mail …) because of the intrinsic load balancing of the cluster. There the InfiniBand still works with OFED 1.4.

On the other hand, there is my attempt to replace it with an up-to-date system (Gentoo). I have corosync and pacemaker (and crmsh) to cluster it, and the InfiniBand problem is the only thing missing.
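For context, the corosync side of such a setup is driven by corosync.conf; a minimal two-node sketch could look roughly like the following (the cluster name, node names and addresses are illustrative assumptions, not the actual layout of this cluster):

    totem {
        version: 2
        cluster_name: ib-cluster        # hypothetical name
        transport: knet
    }

    nodelist {
        node {
            ring0_addr: 10.0.0.1        # hypothetical address
            name: bl860c-1
            nodeid: 1
        }
        node {
            ring0_addr: 10.0.0.2
            name: bl860c-2
            nodeid: 2
        }
    }

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }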


Regards, Rudi

Sent from my iPhone

> Am 17.10.2022 um 12:13 schrieb Christoph Lameter <cl@xxxxxxxxx>:
> 
> On Fri, 14 Oct 2022, Jason Gunthorpe wrote:
> 
>>> On Fri, Oct 14, 2022 at 06:16:51PM +0000, rug@xxxxxxxxxx wrote:
>>> Hi to whom it may concern,
>>> 
>>> We are getting on a 6.0.0 (and also on 5.10 up) the following Mellanox
>>> infiniband problem (see below).
>>> Can you please help (this is on a running ia64 cluster).
>> 
>> The fastest/simplest way to get help on something so obscure would be
>> to bisection search to the problematic commit
>> 
>> You might be the only user left in the world of this combination :)
> 
> And CC the linux-ia64 mailing list? Gentoo on ia64.. Wow.
> 
> 
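As a concrete starting point for the bisection suggested above, a kernel bisect could look roughly like this (a sketch; it takes v5.10 as the first bad release per the report, and assumes v5.9 as the last good one, which the report does not confirm):

    git bisect start
    git bisect bad v5.10         # first release reported as broken
    git bisect good v5.9         # assumed last good release
    # for each revision git checks out: build, boot it on the ia64 node,
    # exercise the Mellanox HBA, then report the result
    make olddefconfig && make -j$(nproc)
    git bisect good              # InfiniBand works on this revision
    git bisect bad               # or: it crashes on this revision
    # repeat until git names the first bad commit, then clean up
    git bisect reset

Since git bisect halves the candidate range on each step, even a full release cycle of commits resolves in roughly a dozen test boots.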

Attachment: smime.p7s
Description: S/MIME cryptographic signature

