After searching the code, osd_snap_trim_cost and osd_snap_trim_priority exist in master but not in Jewel or Kraken. If osd_snap_trim_sleep was made useless in Jewel by moving snap trimming to the main op thread, and no new feature was added to Jewel to let clusters throttle snap trimming... what recourse do people who use a lot of snapshots have on Jewel? Luckily this thread came around right before we were ready to push to production. We tested snap trimming heavily in QA and found that on Jewel we can't keep up with even half of the snap trimming we would need to do. None of these settings are injectable into the osd daemon either, so changing them would take a full restart of all of the OSDs...
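For context, a setting that can't be injected has to go into ceph.conf and only takes effect after the daemon restarts. A minimal sketch of what that looks like (the option names are the ones discussed above; the values are purely illustrative, and per the above osd_snap_trim_sleep has no effect in Jewel anyway):

```ini
# ceph.conf -- per the discussion above these are not injectable,
# so changing them means restarting every OSD (values illustrative)
[osd]
osd_snap_trim_sleep = 0.1
```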
Does anyone have any success stories for snap trimming on Jewel?
From: Samuel Just [sjust@xxxxxxxxxx]
Sent: Thursday, January 26, 2017 1:14 PM
To: Nick Fisk
Cc: David Turner; ceph-users
Subject: Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

Just an update. I think the real goal with the sleep configs in general was to reduce the number of concurrent snap trims happening. To that end, I've put together a branch which adds an AsyncReserver (as with backfill) for snap trims to each
OSD. Before actually starting to do trim work, the primary will wait in line to get one of the slots and will hold that slot until the repops are complete. https://github.com/athanatos/ceph/tree/wip-snap-trim-sleep
is the branch (based on master), but I've got a bit more work to do (and testing to do) before it's ready to be tested.
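The idea Sam describes — a fixed pool of slots that primaries wait in line for before starting trim work, holding the slot until done — can be sketched like this. This is not Ceph's actual AsyncReserver (which is C++ inside the OSD); it is a simplified illustration of the reservation pattern using a semaphore, with hypothetical names:

```python
import threading

class TrimReserver:
    """Simplified sketch of the slot-reservation idea: only max_slots
    snap trims may run concurrently; the rest wait in line for a slot."""

    def __init__(self, max_slots):
        self._sem = threading.Semaphore(max_slots)

    def trim(self, pg, work):
        # Wait in line for a slot before doing any trim work,
        # and hold the slot until the work is complete.
        with self._sem:
            return work(pg)

# Example: at most 2 concurrent "trims" across 5 PGs.
reserver = TrimReserver(max_slots=2)
results = []
threads = [
    threading.Thread(target=lambda p=pg: results.append(reserver.trim(p, lambda x: x)))
    for pg in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # -> [0, 1, 2, 3, 4]: every PG eventually gets a slot
```

The point of the reserver (as opposed to a per-op sleep) is that it bounds total concurrent trim work cluster-wide on each OSD without holding PG locks while idle.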
-Sam
On Fri, Jan 20, 2017 at 2:05 PM, Nick Fisk
<nick@xxxxxxxxxx> wrote:
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com