OSD memory usage after cephadm adoption

Hi everyone,

We recently migrated a cluster from ceph-ansible to cephadm. Everything went as expected, but now we are getting alerts about high memory usage. The cluster is running Ceph 16.2.13.

Of course, after the adoption the OSDs ended up with an <unmanaged> placement:

NAME  PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd          88       7m ago     -    <unmanaged>
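
(As an aside, I assume the way to bring them back under management would be to export the spec cephadm adopted and re-apply it with unmanaged turned off, roughly like this; the file name is just an example:)

# Dump the OSD service spec cephadm is currently tracking
ceph orch ls osd --export > osd-spec.yaml
# Edit osd-spec.yaml, set "unmanaged: false" (or drop the line), then re-apply
ceph orch apply -i osd-spec.yaml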

But the weirdest thing I observed is that the OSDs seem to use more memory than the memory limit:

NAME     HOST    PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
osd.0    <node>         running (5d)  2m ago     5d   19.7G    6400M    16.2.13  327f301eff51  ca07fe74a0fa
osd.1    <node>         running (5d)  2m ago     5d   7068M    6400M    16.2.13  327f301eff51  6223ed8e34e9
osd.10   <node>         running (5d)  10m ago    5d   7235M    6400M    16.2.13  327f301eff51  073ddc0d7391
osd.100  <node>         running (5d)  2m ago     5d   7118M    6400M    16.2.13  327f301eff51  b7f9238c0c24
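
My understanding (I may be wrong) is that the MEM LIM column comes from osd_memory_target, which is a cache-autotuning target rather than a hard limit, so some overshoot can happen. Something like the following should at least confirm what the daemons are configured with (osd.0 as an example):

# Value stored in the monitor config database
ceph config get osd.0 osd_memory_target
ceph config get osd.0 osd_memory_target_autotune
# Value the running daemon actually reports
ceph config show osd.0 osd_memory_target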

Does anybody know why the OSDs would use more memory than the limit?

Thanks

Luis Domingues
Proton AG
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


