Hi,

On 28.03.19 at 20:03, ceph@xxxxxxxxxx wrote:
> Hi Uwe,
>
> On 28 February 2019 11:02:09 CET, Uwe Sauter <uwe.sauter.de@xxxxxxxxx> wrote:
>> On 28.02.19 at 10:42, Matthew H wrote:
>>> Have you made any changes to your ceph.conf? If so, would you mind
>>> copying them into this thread?
>>
>> No, I just deleted an OSD, replaced the HDD with an SSD and created a new OSD
>> (with bluestore). Once the cluster was healthy again, I repeated this with
>> the next OSD.
>>
>>
>> [global]
>> auth client required = cephx
>> auth cluster required = cephx
>> auth service required = cephx
>> cluster network = 169.254.42.0/24
>> fsid = 753c9bbd-74bd-4fea-8c1e-88da775c5ad4
>> keyring = /etc/pve/priv/$cluster.$name.keyring
>> public network = 169.254.42.0/24
>>
>> [mon]
>> mon allow pool delete = true
>> mon data avail crit = 5
>> mon data avail warn = 15
>>
>> [osd]
>> keyring = /var/lib/ceph/osd/ceph-$id/keyring
>> osd journal size = 5120
>> osd pool default min size = 2
>> osd pool default size = 3
>> osd max backfills = 6
>> osd recovery max active = 12
>
> I guess you should decrease these last two parameters to 1. This should help
> to avoid too much pressure on your drives...
>

Unlikely to help, as no recovery / backfilling is running when the situation appears.

> HTH
> - Mehmet
>
>>
>> [mon.px-golf-cluster]
>> host = px-golf-cluster
>> mon addr = 169.254.42.54:6789
>>
>> [mon.px-hotel-cluster]
>> host = px-hotel-cluster
>> mon addr = 169.254.42.55:6789
>>
>> [mon.px-india-cluster]
>> host = px-india-cluster
>> mon addr = 169.254.42.56:6789
>>
>>
>>> ------------------------------------------------------------------------
>>> *From:* ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of
>>> Vitaliy Filippov <vitalif@xxxxxxxxxx>
>>> *Sent:* Wednesday, February 27, 2019 4:21 PM
>>> *To:* Ceph Users
>>> *Subject:* Re: Blocked ops after change from filestore on HDD to
>>> bluestore on SDD
>>>
>>> I think this should not lead to blocked ops in any case, even if the
>>> performance is low...
>>>
>>> --
>>> With best regards,
>>> Vitaliy Filippov
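In case it helps with further debugging, here is a rough sketch of the commands
that can be used to look at blocked ops and to throttle recovery at runtime.
This is not taken from the thread above; the OSD id "osd.0" is just a
placeholder, and the admin-socket command has to be run on the host where that
OSD lives:

    # list slow/blocked requests reported by the cluster
    ceph health detail

    # inspect the ops currently in flight on one OSD (run on that OSD's host)
    ceph daemon osd.0 dump_ops_in_flight

    # reduce backfill/recovery pressure on the fly, without restarting OSDs
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

If blocked ops show up while health detail reports no recovery or backfill
activity at all, then lowering those two settings is indeed unlikely to change
anything, as noted above.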