OK,
I now understand the need for transactions so that the trim takes place
after the settings change.
What is the risk of setting too low a value for the parameter
osd_min_pg_log_entries (as opposed to osd_max_pg_log_entries, which
applies in a degraded environment)?
David.
On 07/20/2015 03:13 PM, Sage Weil wrote:
On Sun, 19 Jul 2015, David Casier AEVOO wrote:
Hi,
I have a question about PGLog and RAM consumption.
In the documentation, we read "OSDs do not require as much RAM for regular
operations (e.g., 500MB of RAM per daemon instance); however, during recovery
they need significantly more RAM (e.g., ~1GB per 1TB of storage per daemon)".
But in fact, all PG logs are read at startup of the ceph-osd daemon and kept
in RAM ( pg->read_state(store, bl); ).
Is this normal behavior, or do I have a defect in my environment?
There are two tunables that control how many pg log entries we keep
around. When the PG is healthy, we keep ~1000, and when the PG is
degraded, we keep more, to expand the time window over which a recovering
OSD will be able to do regular log-based recovery instead of a more
expensive backfill. This is one source of additional memory.
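For reference, the two tunables in question can be set in ceph.conf; the
values below are only illustrative (the healthy-PG value matches the ~1000
mentioned above, the degraded-PG value is an assumption, not a stated
default):

```ini
[osd]
# entries kept per PG log while the PG is healthy (~1000 per the text above)
osd_min_pg_log_entries = 1000
# larger cap while the PG is degraded, widening the window for log-based
# recovery before falling back to a full backfill (value is illustrative)
osd_max_pg_log_entries = 10000
```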
Others are the missing sets (lists of missing/degraded objects) and
messages/data/state associated with objcts that are being
recovered/copied.
Note that the numbers in the documentation are pretty rough rules of
thumb. At some point it would be great to build a model for how much RAM
the osd consumes as a function of the various configurables (pg log size,
pg count, avg object size, etc.).
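A back-of-envelope version of such a model might look like the sketch
below. All coefficients (bytes per log entry, bytes per missing-set entry,
base footprint) are placeholder assumptions for illustration, not measured
values; real per-entry sizes depend on object name lengths, release, and
allocator overhead.

```python
def estimate_osd_ram_bytes(pg_count, pg_log_entries, missing_objects,
                           bytes_per_log_entry=200,
                           bytes_per_missing_entry=150,
                           base_bytes=500 * 1024**2):
    """Rough OSD RAM estimate: base footprint + pg logs + missing sets.

    All coefficients are hypothetical placeholders, not measured values.
    """
    # PG logs: every PG keeps up to pg_log_entries entries in memory.
    log_ram = pg_count * pg_log_entries * bytes_per_log_entry
    # Missing sets: one entry per missing/degraded object during recovery.
    missing_ram = missing_objects * bytes_per_missing_entry
    return base_bytes + log_ram + missing_ram

# Example: 200 healthy PGs with ~1000 log entries each, no recovery.
healthy = estimate_osd_ram_bytes(200, 1000, 0)
print(healthy / 1024**2, "MB")
```

Fitting the coefficients against observed heap usage for a few (pg count,
log size) combinations would turn this into an actual model.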
sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html