That's the weird thing. Processes and user-space memory are the same on the good and the bad machines. ceph-osd memory usage looks fine on all machines, and the cache is more or less the same. When I run ps, htop or any other process review, everything looks good and consistent across all machines, containerized or not.

The only difference I can see, using smem, is the non-cache kernel memory on the containerized machines. Maybe it's a podman issue, maybe a kernel one; it does not seem related to ceph directly. I just asked here to see if anyone has hit the same issue.

Anyway, thanks for your time.

Luis Domingues
Proton AG

------- Original Message -------
On Wednesday, July 26th, 2023 at 09:01, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:

> Without determining which process (kernel or userspace) "eats" the memory, the ceph-users list can't tell what exactly is using it, because we don't see your display with your eyes 🙂
>
> You should run these commands on the good & bad hosts to see the real difference. This may be related to the kernel version, to Ceph options in the container config, or ...
>
> k
> Sent from my iPhone
>
> > On 26 Jul 2023, at 07:26, Luis Domingues <luis.domingues@xxxxxxxxx> wrote:
> >
> > First, thank you for taking the time to reply to me.
> >
> > However, my question was not about user-space memory nor about cache usage; as far as I can see, everything on my machines sums up quite nicely.
> >
> > My question is: with packages, the non-cache kernel memory is around 2G to 3G, while with Podman it is more around 10G, and it can go up to 40G-50G. Does anyone know if this is expected and why this is the case?
> >
> > Maybe this is a podman-related question and ceph-dev is not the best place to ask it, but maybe someone using cephadm has seen similar behavior.
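For reference, here is a minimal sketch (not from the thread) of the kind of comparison suggested above: it sums a few non-cache kernel memory fields from /proc/meminfo so the same number can be printed on a good (package-based) host and a bad (containerized) one. The field selection is an assumption and will not match smem's accounting exactly.

#!/usr/bin/env python3
# Rough sketch: approximate non-cache kernel memory from /proc/meminfo.
# Run it on a "good" and a "bad" host and compare the output.
# The chosen fields are an assumption, not exactly what smem reports.

def read_meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])
    return info

def kernel_noncache_kib(info):
    """Sum fields that are kernel memory but not page cache:
    unreclaimable slab, kernel stacks, page tables, vmalloc'd memory."""
    fields = ("SUnreclaim", "KernelStack", "PageTables", "VmallocUsed")
    return sum(info.get(f, 0) for f in fields)

if __name__ == "__main__":
    info = read_meminfo()
    gib = kernel_noncache_kib(info) / (1024.0 * 1024.0)
    print("approx. non-cache kernel memory: %.1f GiB" % gib)

If most of the difference on the containerized hosts turns out to be in SUnreclaim, slabtop on both hosts should help narrow down which kernel caches are involved.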