Re: luminous OSD memory usage

On Fri, 1 Sep 2017, xiaoyan li wrote:
> On Wed, Aug 30, 2017 at 11:17 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> > Hi Aleksei,
> >
> > On Wed, 30 Aug 2017, Aleksei Gutikov wrote:
> >> Hi.
> >>
> >> I'm trying to synchronize OSD daemon memory limits and BlueStore cache
> >> settings.
> >> For 12.1.4, HDD OSDs use about 4G with default settings.
> >> For SSDs we have a 6G limit and they are being OOM-killed periodically.
> >
> > So,
> >
> >> While
> >> osd_op_num_threads_per_shard_hdd=1
> >> osd_op_num_threads_per_shard_ssd=2
> >> and
> >> osd_op_num_shards_hdd=5
> >> osd_op_num_shards_ssd=8
> >
> > aren't relevant to memory usage.  The _per_shard is about how many bytes
> > are stored in each rocksdb key, and the num_shards is about how many
> > threads we use.
> 
> I don't understand the point about _per_shard. I noticed that
> osd_op_num_threads_per_shard is used to set the cache shards in BlueStore.

  store->set_cache_shards(get_num_op_shards());

and the osd op queue thread count is then shards * threads_per_shard.
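
To make that concrete, here is a small self-contained sketch (illustrative
only, not the actual Ceph source; the Config struct, helper names, and the
main() driver are made up) of how the per-device-class options quoted above
combine into cache shard and op queue thread counts:

  #include <initializer_list>
  #include <iostream>

  struct Config {
    // Stand-ins for the osd_op_num_* options quoted above (illustrative values).
    int osd_op_num_shards_hdd = 5;
    int osd_op_num_shards_ssd = 8;
    int osd_op_num_threads_per_shard_hdd = 1;
    int osd_op_num_threads_per_shard_ssd = 2;
  };

  // One cache shard per op shard, mirroring
  // store->set_cache_shards(get_num_op_shards()).
  int num_op_shards(const Config& c, bool rotational) {
    return rotational ? c.osd_op_num_shards_hdd : c.osd_op_num_shards_ssd;
  }

  // Op queue worker thread count = shards * threads_per_shard.
  int num_op_threads(const Config& c, bool rotational) {
    int per_shard = rotational ? c.osd_op_num_threads_per_shard_hdd
                               : c.osd_op_num_threads_per_shard_ssd;
    return num_op_shards(c, rotational) * per_shard;
  }

  int main() {
    Config c;
    for (bool rotational : {true, false}) {
      std::cout << (rotational ? "hdd" : "ssd")
                << ": cache shards = " << num_op_shards(c, rotational)
                << ", op queue threads = " << num_op_threads(c, rotational)
                << "\n";  // hdd: 5 shards, 5 threads; ssd: 8 shards, 16 threads
    }
    return 0;
  }

With those example defaults, an HDD OSD ends up with 5 cache shards and 5 op
queue threads, and an SSD OSD with 8 shards and 16 threads.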

sage