Different memory usage on OSD nodes after update to Nautilus

Dear all,

In mid-January I updated my Ceph cluster from Luminous to Nautilus.

Attached you can see the memory metrics collected on one OSD node (I see
the very same behavior on all OSD hosts), graphed via Ganglia.
This is a CentOS 7 node with 64 GB of RAM, hosting 10 OSDs.

Before the update there were about 20 GB of FreeMem.
Now FreeMem is basically 0, but I see 20 GB of Buffers.
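For anyone comparing against their own nodes, the same counters the graphs are built from can be read directly from /proc/meminfo; a minimal sketch (plain Linux, nothing Ceph-specific assumed):

```shell
# Buffers and Cached pages are reclaimable page cache, so MemAvailable
# is usually a better indicator of real memory pressure than MemFree alone.
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached):' /proc/meminfo
```

If MemAvailable is still large, the "missing" free memory is mostly reclaimable cache rather than memory the OSDs have actually consumed.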

I guess this triggered some swapping, probably because I forgot to
set vm.swappiness to 0 (it was still at the default value of 60).

I was wondering if this is the expected behavior.

PS: Besides updating Ceph, I also updated all the other packages
(yum update), so I am not sure that this different memory usage is due to
the Ceph update itself.
For the record, in this update the kernel went from 3.10.0-1062.1.2
to 3.10.0-1062.9.1.

Thanks, Massimo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
