Fwd: Hammer OSD memory increase when adding a new machine

Any suggestions?

Thanks.


---------- Forwarded message ----------
From: Dong Wu <archer.wudong@xxxxxxxxx>
Date: 2016-10-27 18:50 GMT+08:00
Subject: Re: Hammer OSD memory increase when adding a new machine
To: huang jun <hjwsm1989@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>


2016-10-27 17:50 GMT+08:00 huang jun <hjwsm1989@xxxxxxxxx>:
> How do you add the new machines?
> Are they first added to the default ruleset, and then you add the new
> rule for this group?
> Do you have data pools that use the default rule, and do those pools
> contain data?

We don't use the default ruleset. When we add a new group of machines,
crush_location automatically generates a root and chassis for them, and
then we add a new rule for this group.
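
For reference, the steps look roughly like this (names, pg counts, and
the exact config here are illustrative placeholders, not our production
values):

    # ceph.conf on the new machines, so that crush_location places them
    # under their own root and chassis instead of the default root:
    [osd]
    osd crush location = root=group2 chassis=machine13

    # a rule that only selects OSDs under that root, and a pool using it
    # (Hammer binds a pool to a rule via its crush_ruleset property):
    ceph osd crush rule create-simple group2_rule group2 host
    ceph osd pool create group2_pool 4096 4096 replicated group2_rule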


> 2016-10-27 17:34 GMT+08:00 Dong Wu <archer.wudong@xxxxxxxxx>:
>> Hi all,
>>
>> We have a Ceph cluster used only for RBD. The cluster contains several
>> groups of machines, each group contains several machines, and each
>> machine has 12 SSDs, with each SSD acting as one OSD (journal and data
>> together). For example:
>> group1: machine1~machine12
>> group2: machine13~machine24
>> ......
>> Each group is separated from the others, which means each group has
>> its own pools.
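>>
>> To picture it, each group gets its own CRUSH root, shaped roughly like
>> this (names and the chassis level are placeholders):
>>
>>     root group1
>>         chassis ...
>>             host machine1   (osd.0 .. osd.11)
>>             ...
>>             host machine12
>>     root group2
>>         chassis ...
>>             host machine13
>>             ...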
>>
>> We use Hammer (0.94.6) built with jemalloc (4.2).
>>
>> We have found that when we add a new group of machines, the OSD memory
>> usage on the machines in the other groups increases by roughly 5%.
>>
>> Each group's data is separated from the others, so backfill happens
>> only within a group, never across groups.
>> Why does adding a group of machines cause the others' memory usage to
>> increase? Is this reasonable?
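>>
>> (For anyone who wants to check the same thing: the growth should be
>> visible in the OSD processes' resident memory, e.g. with
>> "ps -o rss,args -C ceph-osd" on each machine, and
>> "ceph daemon osd.N perf dump" run on an OSD's host dumps internal
>> counters over the admin socket.)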
>>
> --
> Thank you!
> HuangJun
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



