Re: avoid 3-mds fs laggy on 1 rejoin?


 



Yan, Zheng writes:

It seems you have 16 mounts. Are you using the kernel client or the fuse
client, and which versions are they?


1) On each of the 4 ceph nodes:
a 4.1.8 kernel mount on /mnt/ceph0 (used only for the samba/ctdb lockfile);
a fuse mount on /mnt/ceph1 (partially used);
a samba cluster (ctdb) with vfs_ceph.
2) On 2 additional out-of-cluster (service) nodes:
a 4.1.8 (now 4.2.3) kernel mount on one;
4.1.0, with both mounts, on the other.
3) On 2 VMs:
kernel mounts (the most active: web & mail), kernel 4.2.3.

The fuse mounts are the same version as ceph; a quick way to enumerate these mounts is sketched below.
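
For reference, something like this on each node lists both mount types and the client versions (a sketch; "fuse.ceph-fuse" as the fstype under which ceph-fuse mounts show up is my assumption):

    # kernel cephfs mounts, plus the kernel (= kernel client) version
    mount -t ceph
    uname -r
    # ceph-fuse mounts, plus the fuse client version
    mount -t fuse.ceph-fuse
    ceph-fuse --version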


please run "ceph daemon mds.x session ls" to find which client has the
largest number of caps. mds.x is the ID of the active mds.

1) This command is no longer valid ;) - it is "cephfs-table-tool x show session" now.
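
A minimal sketch of both forms, where "0" is the mds rank, "a" is a placeholder mds ID, and the "num_caps" field name in the session dump is my assumption:

    # newer form: dump the session table for rank 0
    cephfs-table-tool 0 show session
    # older form, via the admin socket on the mds host; show per-client cap counts
    ceph daemon mds.a session ls | grep -E '"(id|num_caps)"'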

2) I have 3 active MDSs now. I tried it, it works, so I keep it. Restart is still problematic.
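
To see where a restarting MDS gets stuck (rejoin/laggy), watching the mds map from any admin node helps; a sketch, with an arbitrary 2-second interval:

    # watch mds state transitions (rejoin, active, laggy) during the restart
    watch -n 2 ceph mds stat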

3) Yes, the largest number of caps is on the master VM (4.2.3 kernel mount; the two VMs run a web + mail + heartbeat cluster), on the apache root. This is where the CLONE_FS -> CLONE_VFORK deadlocks were seen before (not any more). But 4.2.3 was installed just before the tests; it was 4.1.8 with similar effects (the log, though, is from 4.2.3 on the VM clients).
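
On the kernel-client side the caps can also be counted locally; a sketch, assuming debugfs is mounted at /sys/kernel/debug and that the per-client subdirectory there is named after the fsid and client id:

    # rough count of caps held by this kernel client (run on the master VM)
    cat /sys/kernel/debug/ceph/*/caps | wc -l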

I will wait until tonight for the MDS restart.

--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/



