On 09/25/2017 05:02 PM, Nigel Williams wrote:
On 26 September 2017 at 01:10, David Turner <drakonstein@xxxxxxxxx> wrote:
If they are on separate
devices, then you need to make the DB partition as large as necessary to
ensure it won't spill over (or, if it does, that you're OK with the degraded
performance while the DB partition is full). I haven't come across an
equation for judging what size either partition should be.
Is it the case that only the WAL will spill if there is a backlog
clearing entries into the DB partition? So the WAL's fill mark
oscillates, while the DB steadily grows (depending on the
previously mentioned factors of "...extents, checksums, RGW bucket
indices, and potentially other random stuff")?
The WAL should never grow larger than the size of the buffers you've
specified. It's the DB that can grow, and it is difficult to estimate
both because different workloads produce different numbers of extents
and objects, and because RocksDB itself causes a certain amount of
space amplification due to a variety of factors.
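Since the WAL is bounded by the configured buffers, a rough upper bound on its footprint can be sketched as buffer size times buffer count. The values below are illustrative assumptions, not your cluster's actual settings; check the `bluestore_rocksdb_options` in your ceph.conf for the real numbers.

```python
# Rough upper bound on WAL space. The specific values are assumed
# defaults for illustration only -- verify against your own
# bluestore_rocksdb_options.

write_buffer_size = 256 * 1024 * 1024   # bytes per RocksDB memtable (assumed)
max_write_buffer_number = 4             # memtables kept before flush (assumed)

# The WAL holds at most the data backing the in-memory write buffers,
# so its footprint is bounded by buffer size times buffer count.
wal_upper_bound = write_buffer_size * max_write_buffer_number
print(f"WAL upper bound: {wal_upper_bound / 2**30:.1f} GiB")
```

With these assumed values that works out to 1 GiB, which is why the WAL partition can be kept small while the DB partition is the one that needs headroom.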
Is there an indicator that can be monitored to show that a spill is occurring?
I think there's a message in the logs, but beyond that I don't remember
if we added any kind of indication in the user tools. At one point I
think I remember Sage mentioning he wanted to add something to ceph df.
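For anyone wanting to script a check in the meantime, here is a minimal sketch. It assumes the BlueFS perf counters (as dumped by `ceph daemon osd.<id> perf dump`) include a `slow_used_bytes` counter reporting how much DB data sits on the slow device; that counter name is an assumption, so verify it against your release before relying on it.

```python
import json

def db_spilled(perf_dump_json: str) -> bool:
    """Return True if the BlueFS counters report DB data on the slow device.

    Assumes a "bluefs" section with a "slow_used_bytes" counter, which may
    differ by Ceph release -- check your own `perf dump` output.
    """
    counters = json.loads(perf_dump_json).get("bluefs", {})
    return counters.get("slow_used_bytes", 0) > 0

# Example with a fabricated perf-dump snippet (not real cluster output):
sample = '{"bluefs": {"db_used_bytes": 123456789, "slow_used_bytes": 0}}'
print(db_spilled(sample))  # False: nothing has spilled onto the slow device
```

Feeding it the JSON from a live OSD's perf dump would flag any OSD whose DB has overflowed its dedicated partition.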
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com