How about also increasing osd_recovery_threads?
On Wed, Sep 4, 2019 at 10:47 AM Guilherme Geronimo <guilherme.geronimo@xxxxxxxxx> wrote:
Hey hey,
First of all: a 10 Gbps connection.
Then, some magic commands:
# ceph tell 'osd.*' injectargs '--osd-max-backfills 32'
# ceph tell 'osd.*' injectargs '--osd-recovery-max-active 12'
# ceph tell 'osd.*' injectargs '--osd-recovery-op-priority 63'
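To confirm the injected values actually took effect, and to put things back once recovery finishes, something like the following should work on Mimic (a sketch, not a prescription — `osd.0` is just an example daemon, and the restore values below are the stock Mimic defaults; double-check yours with `config diff` before relying on them):

```shell
# Verify the running value on one OSD (run on the host where osd.0 lives)
ceph daemon osd.0 config get osd_max_backfills

# Show every option that differs from its default on that OSD
ceph daemon osd.0 config diff

# After recovery completes, restore the defaults so client I/O
# is not starved during the next rebalance
ceph tell 'osd.*' injectargs '--osd-max-backfills 1'
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 3'
ceph tell 'osd.*' injectargs '--osd-recovery-op-priority 3'
```

Note that values set with injectargs do not survive an OSD restart; anything you want to keep permanently belongs in ceph.conf.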
=D
[]'s
Arthur (aka Guilherme Geronimo)

On 04/09/2019 06:44, Amudhan P wrote:
Hi,
I am using Ceph version 13.2.6 (Mimic) on a test setup with CephFS, and my cluster health status is showing a warning.
My current setup: 3 OSD nodes, each with a single disk. Recently I added one more disk to one of the nodes, and the cluster status now shows a warning. I can see recovery progress, but after more than 12 hours it is still moving objects.
How can I increase the speed of moving objects?
output from "ceph -s"
  cluster:
    id:     7c138e13-7b98-4309-b591-d4091a1742b4
    health: HEALTH_WARN
            834820/7943361 objects misplaced (10.510%)

  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    mds: cephfs-tst-1/1/1 up {0=mon01=up:active}
    osd: 4 osds: 4 up, 4 in; 12 remapped pgs

  data:
    pools:   2 pools, 64 pgs
    objects: 2.65 M objects, 178 GiB
    usage:   548 GiB used, 6.7 TiB / 7.3 TiB avail
    pgs:     834820/7943361 objects misplaced (10.510%)
             52 active+clean
             11 active+remapped+backfill_wait
             1  active+remapped+backfilling

  io:
    recovery: 0 B/s, 6 objects/s
output from "ceph osd df "
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 1.81940 1.00000 1.8 TiB 88 GiB 1.7 TiB 4.71 0.64 40
3 hdd 1.81940 1.00000 1.8 TiB 96 GiB 1.7 TiB 5.15 0.70 24
1 hdd 1.81940 1.00000 1.8 TiB 182 GiB 1.6 TiB 9.79 1.33 64
2 hdd 1.81940 1.00000 1.8 TiB 182 GiB 1.6 TiB 9.79 1.33 64
TOTAL 7.3 TiB 548 GiB 6.7 TiB 7.36
MIN/MAX VAR: 0.64/1.33 STDDEV: 2.43
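As a side note on reading that last line: VAR is each OSD's %USE divided by the cluster-average %USE, and STDDEV is the (population) standard deviation of the %USE column. A quick sanity-check sketch, with the four %USE values from the table above hard-coded:

```shell
# Recompute MIN/MAX VAR and STDDEV from the %USE column of "ceph osd df"
echo "4.71 5.15 9.79 9.79" | awk '{
  n = NF; sum = 0
  for (i = 1; i <= n; i++) sum += $i
  avg = sum / n                       # cluster-average %USE (7.36 here)
  min = $1 / avg; max = min; ss = 0
  for (i = 1; i <= n; i++) {
    v = $i / avg                      # per-OSD VAR = %USE / average
    if (v < min) min = v
    if (v > max) max = v
    ss += ($i - avg) ^ 2
  }
  printf "MIN/MAX VAR: %.2f/%.2f STDDEV: %.2f\n", min, max, sqrt(ss / n)
}'
```

Running it reproduces the summary line, which also makes clear why the warning exists: the two new-ish OSDs sit at VAR 0.64–0.70 while the old ones are at 1.33, and backfill is what evens that out.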
regards,
Amudhan P
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx