Re: How To Scale Ceph for Large Numbers of Clients?

Quoting Zack Brenton (zack@xxxxxxxxxxxx):
> On Tue, Mar 12, 2019 at 6:10 AM Stefan Kooman <stefan@xxxxxx> wrote:
> 
> > Hmm, 6 GiB of RAM is not a whole lot. Especially if you are going to
> > increase the amount of OSDs (partitions) like Patrick suggested. By
> > default it will take 4 GiB per OSD ... Make sure you set the
> > "osd_memory_target" parameter accordingly [1].
> >
> 
> @Stefan: Not sure I follow you here - each OSD pod has 6GiB RAM allocated
> to it, which accounts for the default 4GiB + 20% mentioned in the docs for
> `osd_memory_target` plus a little extra. The pods are running on AWS
> i3.2xlarge instances, which have 61GiB total RAM available, leaving plenty
> of room for an additional OSD pod to manage the additional partition
> created on each node. Why would I need to increase the RAM allocated to
> each OSD pod and adjust `osd_memory_target`? Does using the default values
> leave me at risk of running into some other kind of priority inversion
> issue / deadlock / etc.?

Somehow I understood that the server hosting all OSDs had 6 GiB of RAM
available in total, not 6 GiB per OSD. I'm not up to speed with Ceph hosted
in Kubernetes / pods, so that might explain the confusion. Sorry for the
noise.
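
For completeness, a minimal sketch of how osd_memory_target could be set,
assuming a release with the centralized config database (Mimic or later);
the 6 GiB value below is only an example matching the per-pod allocation
you describe, and osd.0 is a placeholder OSD id:

  # cluster-wide default for all OSDs, value in bytes (6 GiB)
  ceph config set osd osd_memory_target 6442450944

  # or override a single OSD, e.g. osd.0
  ceph config set osd.0 osd_memory_target 6442450944

  # verify the effective value
  ceph config get osd.0 osd_memory_target

Note that osd_memory_target is a target, not a hard limit, so the pod
memory limit should stay somewhat above it, as you already do.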

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


