Hi,

In my test Ceph Octopus cluster I was trying to simulate a failure case: with CephFS mounted through the kernel client and reads and writes in progress, I shut down the entire cluster after setting the OSD flags noout, nodown, nobackfill and norecover.

The cluster has 4 nodes with 3 MONs, 2 MGRs, 2 MDSs and 48 OSDs. Public network: 10.0.103.0, cluster network: 10.0.104.0.

Writes and reads stalled as expected, and after some time the cluster was brought back up and became healthy. But when reading a file through the kernel mount, the read starts at above 100 MB/s, then suddenly drops to just a few bytes per second and stays there for a long time.

The only messages I could see on the client machine are:

[ 167.591095] ceph: loaded (mds proto 32)
[ 167.600010] libceph: mon0 10.0.103.1:6789 session established
[ 167.601167] libceph: client144519 fsid f8bc7682-0d11-11eb-a332-0cc47a5ec98a
[ 272.132787] libceph: osd1 10.0.104.1:6891 socket closed (con state CONNECTING)

What went wrong, and why does this happen?
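
For reference, this is roughly how the flags were set and how the filesystem was mounted; the mount point, client name and secret file below are placeholders, not the exact ones used:

    # freeze the cluster before the planned shutdown
    ceph osd set noout
    ceph osd set nodown
    ceph osd set norecover
    ceph osd set nobackfill

    # kernel client mount (placeholder mount point and keyring)
    mount -t ceph 10.0.103.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # after the cluster is back up, the flags would normally be
    # cleared again with: ceph osd unset <flag>

regards
Amudhan P
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx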