Set async+rdma in Ceph cluster

Hi,

I am trying to enable RDMA (ms_type async+rdma) in a cluster of 6 nodes (3 MONs and 3 OSD nodes, with 10 OSDs on each OSD node).
OS: CentOS Stream release 8.

I followed the steps below, but got an error:

[root@mon1 ~]# cephadm shell
Inferring fsid 9414e1bc-9061-11ed-90fc-00163e4f92ad
Using recent ceph image quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa
[ceph: root@mon1 /]# ceph config set global ms_type async+rdma
2023-01-21T11:11:49.182+0000 7fab5922e700 -1 Infiniband verify_prereq!!! WARNING !!! For RDMA to work properly user memlock (ulimit -l) must be big enough to allow large amount of registered memory. We recommend setting this parameter to infinity
/usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = Worker*; _Alloc = std::allocator<Worker*>; std::vector<_Tp, _Alloc>::reference = Worker*&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
Aborted (core dumped)
[ceph: root@mon1 /]#
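
I assume the limit mentioned in the warning is the memlock limit inside the container; checking it from the cephadm shell would presumably look like this (showing whatever the container default is):

[ceph: root@mon1 /]# ulimit -l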

The error message suggests raising the memlock ulimit (ulimit -l), but with a containerized deployment, how can I configure async+rdma properly?
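
Is one of the following the right direction? Both are only guesses on my part: I'm assuming the cephadm-generated systemd template units honour a LimitMEMLOCK drop-in, and that the release in use is new enough to support extra_container_args in service specs (the spec and file names below are made up by me):

# Guess 1: add a systemd drop-in for the cephadm units on each host,
# using the fsid from above, then restart the daemons on that host
mkdir -p /etc/systemd/system/ceph-9414e1bc-9061-11ed-90fc-00163e4f92ad@.service.d
cat > /etc/systemd/system/ceph-9414e1bc-9061-11ed-90fc-00163e4f92ad@.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload

# Guess 2: pass a podman --ulimit flag through the service spec
cat > rdma-osd.yaml <<'EOF'
service_type: osd
service_id: default_osd_spec   # hypothetical spec name
placement:
  host_pattern: '*'
data_devices:
  all: true
extra_container_args:
  - "--ulimit=memlock=-1:-1"
EOF
ceph orch apply -i rdma-osd.yaml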

Best regards,



