We still had blocked requests with osd_snap_trim_cost set to 1GB and osd_snap_trim_priority set to 1 in our test cluster. The test has 20 threads writing to RBDs and 1 thread deleting snapshots on RBDs with an osd_map.
The snap_trim_q on the PGs stays empty unless we use osd_snap_trim_sleep, no matter how aggressively we set the osd_snap_trim_cost and osd_snap_trim_priority settings.
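For reference, a quick way to spot-check how much trim work is queued is something like the sketch below. This is illustrative only: it assumes 'ceph pg <pgid> query' reports a "snap_trimq" field, and the exact key name and format can differ between releases; the PG ids are made up.

# Spot-check how much snap trim work is queued on a few PGs.
# Assumes 'ceph pg <pgid> query' exposes a "snap_trimq" field; the key
# name/format can differ between releases, so treat this as illustrative.
import json
import subprocess

PGS_TO_CHECK = ["1.0", "1.1", "1.2"]  # hypothetical PG ids from the RBD pool

for pgid in PGS_TO_CHECK:
    out = subprocess.check_output(
        ["ceph", "pg", pgid, "query", "--format", "json"])
    info = json.loads(out)
    print("%s snap_trimq=%s" % (pgid, info.get("snap_trimq", "<not reported>")))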
From: David Turner
Sent: Friday, February 03, 2017 11:54 AM
To: Samuel Just
Cc: Nick Fisk; ceph-users
Subject: RE: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

We found where it is in 10.2.5. It is implemented in OSD.h in Jewel, but it is implemented in OSD.cc in master. We assumed it would be in the same place.
We delete over 100TB of snapshots, spread across thousands of snapshots, every day. We haven't yet found any combination of settings that lets us delete snapshots in Jewel without blocking requests, even in a test cluster running a fraction of that workload. We went as far as setting osd_snap_trim_cost to 512MB with the default osd_snap_trim_priority (before we noticed the priority setting), and then setting osd_snap_trim_cost to 4MB (the size of our objects) with osd_snap_trim_priority set to 1. We stopped testing there because we thought we had found that these settings weren't implemented in Jewel. We will continue our testing and provide an update when we have it.

Our current solution in Hammer involves a daemon that monitors the cluster load and sets osd_snap_trim_sleep accordingly, between 0 and 0.35. That does a good job of preventing blocked IO and helps us clear out the snap_trim_q each day. If these settings are not injectable in Jewel, that would rule out varying them throughout the day like this.
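To illustrate the general shape of that daemon (a sketch only: the load signal, mean commit latency from 'ceph osd perf' assuming its Hammer/Jewel-era JSON layout, and the thresholds are placeholders rather than what we actually use):

# Sketch: throttle snap trimming by adjusting osd_snap_trim_sleep between
# 0 and 0.35 based on a cluster load signal.  Load signal and thresholds
# here are placeholders.
import json
import subprocess
import time

MIN_SLEEP, MAX_SLEEP = 0.0, 0.35   # the range we stay within
BUSY_LATENCY_MS = 50.0             # arbitrary "cluster is busy" point

def mean_commit_latency_ms():
    # Assumes the Hammer/Jewel-era JSON layout of 'ceph osd perf'.
    out = subprocess.check_output(["ceph", "osd", "perf", "--format", "json"])
    osds = json.loads(out)["osd_perf_infos"]
    lats = [o["perf_stats"]["commit_latency_ms"] for o in osds]
    return sum(lats) / float(len(lats)) if lats else 0.0

def set_snap_trim_sleep(value):
    # Injectable on Hammer; per this thread it may not take effect on Jewel.
    subprocess.check_call(["ceph", "tell", "osd.*", "injectargs",
                           "--osd_snap_trim_sleep %.2f" % value])

while True:
    busy = min(mean_commit_latency_ms() / BUSY_LATENCY_MS, 1.0)
    # Busier cluster -> longer sleep, so trimming backs off when clients need IO.
    set_snap_trim_sleep(MIN_SLEEP + busy * (MAX_SLEEP - MIN_SLEEP))
    time.sleep(60)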
From: Samuel Just [sjust@xxxxxxxxxx]
Sent: Friday, February 03, 2017 11:24 AM
To: David Turner
Cc: Nick Fisk; ceph-users
Subject: Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

They do seem to exist in Jewel.
-Sam
On Fri, Feb 3, 2017 at 10:12 AM, David Turner
<david.turner@xxxxxxxxxxxxxxxx> wrote: