Hello,

I set up a new cluster with 48 nodes, each with 24 OSDs. I have a replicated pool with 4 replicas, and the CRUSH rule distributes the replicas across different racks. With this cluster I tested an upgrade from Nautilus (14.2.20) to Octopus (15.2.13). The upgrade itself went well until I started restarting the OSDs in the 4th rack. Since then I get slow ops while stopping OSDs. I suspect something changes once all replica partners are running on the new version, because the issue persists after the upgrade is complete.

With Nautilus I had similar slow-op issues when stopping OSDs, which I could resolve with the option "osd_fast_shutdown = false". I left this option set to false during the upgrade. For testing/debugging I set it back to true (the default) and got better results when stopping OSDs, but the problem has not completely disappeared.

Has anyone else had this problem and managed to fix it? What can I do to get rid of slow ops when restarting OSDs?

All servers are connected with 2x10G network links.

Manuel
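
P.S.: For reference, a rough sketch of how the option can be checked and toggled cluster-wide via the centralized config database (assuming it is not also pinned in ceph.conf on the individual nodes):

  # show the current value of the option for OSDs
  ceph config get osd osd_fast_shutdown

  # switch back to the default fast-shutdown behaviour
  ceph config set osd osd_fast_shutdown true

  # look for SLOW_OPS warnings while an OSD is being stopped
  ceph health detail | grep -i slow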