Hammer OSD memory increase when adding new machines

Hi all,

We have a Ceph cluster used only for RBD. The cluster consists of
several groups of machines, each group containing several machines,
and each machine has 12 SSDs, with each SSD serving as one OSD
(journal and data co-located). For example:
group1: machine1~machine12
group2: machine13~machine24
......
Each group is separated from the others, which means each group has
its own pools.
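
(The separation is done with per-group CRUSH roots and rules, roughly
as in the sketch below; the names group1_root, group1_rule and
group1_pool are illustrative, not our real ones:)

    # One CRUSH root per group, one rule per root, and the group's
    # pools pinned to that rule (Hammer-era CLI).
    ceph osd crush add-bucket group1_root root
    ceph osd crush move machine1 root=group1_root
    ceph osd crush rule create-simple group1_rule group1_root host
    ceph osd pool create group1_pool 4096 4096
    # rule id taken from 'ceph osd crush rule dump'
    ceph osd pool set group1_pool crush_ruleset 1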

We run Hammer (0.94.6) compiled with jemalloc (4.2).
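
(For completeness, the build was configured roughly like this; a
sketch, assuming Hammer's autotools build:)

    # Disable tcmalloc so that jemalloc is linked instead.
    ./autogen.sh
    ./configure --without-tcmalloc --with-jemalloc
    make -j8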

We have found that when we add a new group of machines, OSD memory
usage on the machines in the other groups increases by roughly 5%.
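
(We measure this as the per-process RSS of the ceph-osd daemons; a
sketch, the exact scripting is illustrative:)

    # Print PID, RSS (KB) and command line of every ceph-osd process
    # on a machine; we compare these before and after adding the group.
    for pid in $(pgrep -x ceph-osd); do
        ps -o pid=,rss=,cmd= -p "$pid"
    done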

Each group's data is separated from the others', so backfill happens
only within a group, never across groups.
Why does adding a group of machines cause the other groups' memory to
increase? Is this expected?
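
(We checked the separation by looking at where objects map; the pool
and object names below are illustrative:)

    # Confirm a pool's objects map only to OSDs inside its own group.
    # Prints the PG and the acting set of OSDs for that object.
    ceph osd map group1_pool some_object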

Attachment: memory_usage.png (PNG image)

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
