Re: Fwd: [ceph-users] Hammer OSD memory increase when add new machine

> ---------- Forwarded message ----------
> From: Dong Wu <archer.wudong@xxxxxxxxx>
> Date: 2016-10-27 18:50 GMT+08:00
> Subject: Re: [ceph-users] Hammer OSD memory increase when add new machine
> To: huang jun <hjwsm1989@xxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> 
> 
> 2016-10-27 17:50 GMT+08:00 huang jun <hjwsm1989@xxxxxxxxx>:
> > How did you add the new machine?
> > Was it first added to the default ruleset before you added the new
> > rule for this group?
> > Do any of your data pools use the default rule, and do those pools
> > contain data?
> 
> We don't use the default ruleset. When we add a new group of machines,
> crush_location automatically generates the root and chassis buckets,
> and then we add a new rule for this group.
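> Roughly it looks like this (a sketch from memory; the group2 names,
> the pool name, and the ruleset id below are placeholders):
> 
>     # in ceph.conf on the new group's OSD hosts, so their OSDs
>     # register under the group's own root/chassis at startup:
>     #   [osd]
>     #   osd crush location = root=group2 chassis=chassis13
> 
>     # create a rule that only selects OSDs under the new root
>     ceph osd crush rule create-simple group2_rule group2 host
> 
>     # point the new group's pool at that rule (Hammer calls the
>     # pool setting crush_ruleset)
>     ceph osd pool set group2_pool crush_ruleset <ruleset-id>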
> 
> 
> > 2016-10-27 17:34 GMT+08:00 Dong Wu <archer.wudong@xxxxxxxxx>:
> >> Hi all,
> >>
> >> We have a Ceph cluster used only for RBD. The cluster contains
> >> several groups of machines; each group contains several machines,
> >> and each machine has 12 SSDs, each SSD acting as one OSD (journal
> >> and data together).
> >> eg:
> >> group1: machine1~machine12
> >> group2: machine13~machine24
> >> ......
> >> Each group is isolated from the others, which means each group has
> >> its own pools.
> >>
> >> we use Hammer(0.94.6) compiled with jemalloc(4.2).
> >>
> >> We have found that when we add a new group of machines, the memory
> >> usage of the OSDs in the other groups increases by roughly 5%.
> >>
> >> Each group's data is separate from the others', so backfill happens
> >> only within a group, never across groups.
> >> Why does adding a group of machines cause memory to increase in the
> >> other groups? Is this expected?

It could be cached OSDmaps (they get slightly larger when you add OSDs) 
but it's hard to say.  It seems more likely that the pools and crush rules 
aren't configured right and you're adding OSDs to the wrong group.
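
A quick way to see how wide a span of maps an OSD is holding (osd.0
here is just an example; run it on the OSD's host, and note that the
'status' admin socket command may not exist in every build):

    # oldest_map/newest_map show the range of OSDMap epochs this
    # OSD still keeps around
    ceph daemon osd.0 status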

If you look at the 'ceph daemon osd.NNN perf dump' output you can see, 
among other things, how many PGs are on the OSD.  Can you capture the 
output before and after the change (and 5% memory footprint increase)?
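
For example, something like this on each OSD host (a sketch; it
assumes the default admin socket path and that jq is installed):

    # record the PG count of every local OSD via its admin socket
    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo "$sock numpg=$(ceph daemon "$sock" perf dump | jq '.osd.numpg')"
    done

    # and the resident memory of each ceph-osd process (RSS, in KB)
    ps -C ceph-osd -o pid,rss,args

Run it before the expansion and again afterwards and compare.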

sage
