When I add the next HDD I'll try the method again and see if I just needed to wait longer.
On Tue, Nov 7, 2017 at 11:19 PM Wido den Hollander <wido@xxxxxxxx> wrote:
> On 7 November 2017 at 22:54, Scottix <scottix@xxxxxxxxx> wrote:
>
>
> Hey,
> I recently updated to Luminous and started deploying BlueStore OSD nodes. I
> normally set osd_max_backfills = 1 and then ramp it up as time progresses.
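> (For context, the baseline value comes from ceph.conf, roughly:
>
> [osd]
> osd max backfills = 1
>
> A daemon only picks that up on restart, which is why I change the value in the
> running OSDs with injectargs below.)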
>
> With BlueStore, though, it seems I wasn't able to do this on the fly like I
> used to with XFS.
>
> ceph tell osd.* injectargs '--osd-max-backfills 5'
>
> osd.34: osd_max_backfills = '5'
> osd.35: osd_max_backfills = '5' rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
> osd.36: osd_max_backfills = '5'
> osd.37: osd_max_backfills = '5'
>
> As I bring in more BlueStore OSDs, not being able to control this on the fly is
> going to drastically affect recovery speed, and with the default of 1 I would
> be wary of restarting a bunch of OSDs in the middle of a big rebalance.
>
Are you sure the backfills are really not increasing? If you re-run the command, what does it output?
I've seen this as well, but the backfills seemed to increase anyway.
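For what it's worth, one way to settle it is to ask the daemon directly what it is running with, via the admin socket on the OSD host, e.g.:

  ceph daemon osd.34 config get osd_max_backfills

If that reports 5 after the injectargs call, the change did land and the extra backfills should start once new backfill reservations are granted.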
Wido
> Any advice on how to control this better?
>
> Thanks,
> Scott
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com