Re: MDS fails repeatedly while handling many concurrent metadata operations

Update: I had to wipe my CephFS, because after I increased the beacon grace period on the last attempt, I could not get the MDSs to rejoin at all anymore without the machine running out of memory. I tried wiping all sessions and the journal, but that didn't help either; all I achieved was that the daemons crashed right after starting with an assertion error. So now I have a fresh CephFS and will try copying the data from scratch.
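For the record, the recovery attempt looked roughly like this (from memory, so the grace value and the rank are placeholders rather than exactly what I ran):

    # raise the beacon grace so the MONs don't drop the MDS while it rejoins
    ceph config set global mds_beacon_grace 600

    # when rejoin still failed: wipe the client sessions and reset the journal
    cephfs-table-tool all reset session
    cephfs-journal-tool --rank=<fs_name>:0 journal reset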


On 24.07.19 15:36, Feng Zhang wrote:
Does a ceph-fuse mount also have the same issue?

That's hard to say. I started with the kernel module and saw the same behaviour again: I got to 930k inodes after only two minutes and stopped the copy there. Since then, the number has not gone back down, not even after I disconnected all clients. I retried the same thing with ceph-fuse and the number did not increase any further (although it did not decrease either). When I unmounted the share and remounted it with the kernel module again, the number rose to 948k almost immediately.
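In case it matters, the mounts were done roughly like this (monitor address, mount point and credentials are placeholders):

    # kernel client
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # ceph-fuse for comparison
    ceph-fuse -m mon1:6789 /mnt/cephfs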

So it looks like the problem only occurs with the kernel module, but maybe ceph-fuse is just too slow to tell. In fact, it is an order of magnitude slower: I only get about 1.3k req/s compared to 20k req/s with the kernel module, which is not practical at all.
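For the numbers above I was simply watching the MDS counters while the copy was running, along these lines (the daemon name is a placeholder):

    # per-rank request rate and inode/dentry counts
    ceph fs status

    # live view of the MDS perf counters, run on the MDS host
    ceph daemonperf mds.<id>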



