Re: New Ceph cluster design

On Fri, Mar 09, 2018 at 03:06:15PM +0100, Ján Senko wrote:
:We are looking at 100+ nodes.
:
:I know that the Ceph official recommendation is 1GB of RAM per 1TB of disk.
:Was this ever changed since 2015?
:CERN is definitely using less (source:
:https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf)

Looking at my recently (re)installed Luminous BlueStore nodes:

Looking at 24hr peak (5min average) RAM utilization, I'm seeing 40G
committed and ~30G active RAM on nodes with 10x4T drives, and ~82G
committed / 57G active on nodes with 24x2T drives (average 45.77% full).

  data:
    pools:   19 pools, 10240 pgs
    objects: 16820k objects, 77257 GB
    usage:   228 TB used, 271 TB / 499 TB avail
    pgs:     10240 active+clean

(12 storage nodes, 173 OSDs)

This is almost entirely RBD for OpenStack VMs; only a negligible
amount is radosgw-type object storage, and none of it is erasure coded.

I spec'ed a bit over the recommended RAM (for example, 64G of RAM for
40T of storage), so I've not had memory issues with either the older
FileStore or the newer BlueStore implementations, but I would still
round up rather than down for my use case anyway.
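
Just to put rough numbers on it for the node shapes above, here is a
quick back-of-envelope sketch (plain arithmetic only, using the 1G/TB
rule from this thread and the 16G + 2G/HDD figure mentioned below; the
node shapes are mine, everything else is just rule-of-thumb math):

  # Rough RAM sizing sketch for the two node shapes in this cluster.
  node_shapes = {
      "10x4T": {"drives": 10, "tb_per_drive": 4},
      "24x2T": {"drives": 24, "tb_per_drive": 2},
  }

  for name, n in node_shapes.items():
      raw_tb = n["drives"] * n["tb_per_drive"]
      rule_1g_per_tb = raw_tb * 1            # "1GB RAM per 1TB of disk"
      rule_redhat = 16 + 2 * n["drives"]     # "16GB + 2GB per HDD"
      print(f"{name}: {raw_tb}T raw -> {rule_1g_per_tb}G (1G/TB) "
            f"vs {rule_redhat}G (16G + 2G/HDD)")

That gives roughly 40G vs 36G for the 10x4T nodes and 48G vs 64G for
the 24x2T nodes; note the 24x2T nodes' observed ~82G committed is above
both figures, which is part of why I'd round up.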

:RedHat suggests using 16GB + 2GB/HDD as the latest requirements.
:
:BTW: Anyone has comments on SSD sizes for Bluestore or the other questions?

These systems are using a 10G:1T SSD:7.2K_SAS_DISK ratio (i.e. 40GB of
SSD per 4T HDD). This seems sufficient (running with the WAL and DB on
the spinners really tanks IOPS capacity), but I don't know that it is
optimal. It is close enough to the RedHat recommendation that I would
believe them.
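
For anyone sizing SSDs the same way, the per-node arithmetic at that
ratio works out as below (again just a sketch, not a Ceph tool; the 10G
per 1T ratio is the one we happen to use):

  # Per-node SSD (WAL+DB) capacity at a fixed GB-of-SSD-per-TB-of-HDD ratio.
  def ssd_gb_needed(drives, tb_per_drive, gb_ssd_per_tb=10):
      """SSD capacity (GB) per node at a fixed GB-per-TB ratio."""
      return drives * tb_per_drive * gb_ssd_per_tb

  print(ssd_gb_needed(10, 4))   # 10x4T node -> 400 GB of SSD
  print(ssd_gb_needed(24, 2))   # 24x2T node -> 480 GB of SSD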

Note that we've moved to more, smaller disks (the 2T drives are the
newer ones) as we were running out of IOPS. Maybe more SSD in front
would help, or maybe our usage pattern, which is heavy on active volume
use rather than cold object storage, is unusual. Obviously 10k or 15k
drives would help, and my next expansion probably will use them, as
we're still at a higher percentage of our IOPS capacity than we are of
our storage capacity...
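
Roughly why faster spindles look attractive (the per-drive IOPS figures
below are generic rule-of-thumb assumptions, not measurements from our
cluster):

  # Very rough aggregate spindle IOPS per 24-drive node by spindle speed.
  IOPS_PER_DRIVE = {"7.2k": 75, "10k": 125, "15k": 175}  # assumed values

  for speed, iops in IOPS_PER_DRIVE.items():
      print(f"24 x {speed}: ~{24 * iops} IOPS per node")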

-Jon
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
