Re: Tuning CephFS on NVME for HPC / IO500

On Sat, 3 Dec 2022 at 22:52, Sebastian <sebcio.t@xxxxxxxxx> wrote:
>
> One more thing for this discussion.
> I had a lot of problems with my clusters and spent some time debugging.
> What I found, and confirmed on AMD nodes, is that everything starts working like a charm once I add the kernel parameter iommu=pt.
> Plus some other tunings I can't share all the details of right now, but this iommu=pt should help.
> At the beginning it looked like something in the kernel stack was slowing down packets.

We're seeing the same on our AMD OSD servers; we disable the IOMMU in
the BIOS instead, but the kernel option probably works just as well.
For us, the NVMe drives "get stuck".
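For anyone who wants to try the kernel-parameter route instead of the BIOS
setting, here is a minimal sketch (assuming a GRUB-based distro such as
CentOS/RHEL; the grub.cfg path differs on EFI systems, and the existing
command-line options below are placeholders for whatever you already have):

    # /etc/default/grub -- append iommu=pt to your existing kernel command line
    GRUB_CMDLINE_LINUX="<your existing options> iommu=pt"

    # regenerate the grub config and reboot
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot

    # after reboot, confirm the parameter took effect
    cat /proc/cmdline
    dmesg | grep -i iommu

With iommu=pt the kernel keeps the IOMMU enabled but puts host devices in
passthrough mode, which avoids per-I/O DMA remapping overhead; disabling it
in the BIOS, as we do, removes the IOMMU from the picture entirely.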

We think this CentOS bug is relevant/related:
https://bugs.centos.org/view.php?id=17104


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



