Re: Memory leak in Ceph OSD?

Quoting Caspar Smit (casparsmit@xxxxxxxxxxx):
> Stefan,
> 
> How many OSD's and how much RAM are in each server?

Currently 7 OSDs and 128 GB RAM. The max will be 10 OSDs in these servers,
with 12 cores (at least one core per OSD).

> bluestore_cache_size=6G will not mean each OSD is using max 6GB RAM right?

Apparently. Sure, they will use more RAM than just the cache to function
correctly. I figured 3 GB per OSD on top of the cache would be enough ...
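
Just to make that arithmetic explicit, this is the back-of-envelope check I
did for the current box; the 3 GB of non-cache overhead per OSD is my own
assumption, not a documented figure:

  # Rough RAM check for the current server (overhead figure is an assumption)
  osds = 7
  cache_gb = 6        # bluestore_cache_size = 6G per OSD
  overhead_gb = 3     # guessed non-cache usage per OSD
  print(osds * (cache_gb + overhead_gb))   # 63 GB, which is why 128 GB looked comfortable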

> Our bluestore hdd OSD's with bluestore_cache_size at 1G use ~4GB of total
> RAM. The cache is a part of the memory usage by bluestore OSD's.

A factor of 4 is quite high, isn't it? What is all this RAM used for
besides the cache? RocksDB?

So how should I size the amount of RAM in an OSD server for 10 bluestore
SSD OSDs in a replicated setup?
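
Purely to make the question concrete, the kind of sizing sketch I have in
mind looks like this; the overhead and headroom numbers are guesses on my
side (if your 1G-cache / ~4 GB observation generalizes, the non-cache
overhead would be roughly 3 GB per OSD):

  # Back-of-envelope RAM sizing for a 10-OSD bluestore SSD node (all figures are assumptions)
  osds = 10
  cache_gb = 6         # bluestore_cache_size per OSD
  overhead_gb = 3      # guessed non-cache usage (RocksDB memtables, pglog, buffers, ...)
  headroom_gb = 16     # guessed slack for the OS, peering and recovery spikes
  print(osds * (cache_gb + overhead_gb) + headroom_gb)   # 106 GB on these assumptions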

Thanks,

Stefan

-- 
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


