Re: luminous ceph-fuse with quotas breaks 'mount' and 'df'

Looks like Greg may be onto something!

If the quota is 10000000 (bytes), the mount point appears in 'df':
ceph-fuse   8.0M     0  8.0M   0% /srv/smb/winbak
and 'mount':
ceph-fuse on /srv/smb/winbak type fuse.ceph-fuse (rw,relatime,user_id=0,group_id=0,allow_other)

If the quota is 1000000 (bytes), the mount point no longer appears in 'df', but it does appear in 'mount'.

I wasn't able to get it to disappear from 'mount' even with a quota of 1 byte.
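
For anyone who wants to reproduce, the sequence I'm using is roughly the following (the paths and ceph-fuse arguments are illustrative of my setup, not an exact transcript; the CephFS path /backups/winbak is the one visible in the debug log below):

setfattr -n ceph.quota.max_bytes -v 1000000 /srv/smb/winbak    # set the quota on the CephFS directory
fusermount -u /srv/smb/winbak                                  # unmount
ceph-fuse -r /backups/winbak /srv/smb/winbak                   # remount that subtree
df -h | grep winbak                                            # missing at this quota value
mount | grep winbak                                            # still listed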

Below is a debug session as suggested by John. I used a quota of 1000000 bytes on the mount point. (The segfault occurred when I pressed Ctrl-C to kill the process after the initial mount.)

Thanks!
Chad.



2018-08-17 14:34:54.952636 7f0e300b5140 0 ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process ceph-fuse, pid 30502
ceph-fuse[30502]: starting ceph client
2018-08-17 14:34:54.958910 7f0e300b5140 -1 init, newargv = 0x556a3f060120 newargc=9
2018-08-17 14:34:54.961492 7f0e298a6700 10 client.0 ms_handle_connect on 128.104.164.197:6789/0
2018-08-17 14:34:54.965175 7f0e300b5140 10 client.18814183 Subscribing to map 'mdsmap'
2018-08-17 14:34:54.965198 7f0e300b5140 20 client.18814183 trim_cache size 0 max 16384
2018-08-17 14:34:54.966721 7f0e298a6700 1 client.18814183 handle_mds_map epoch 2336272
2018-08-17 14:34:54.966788 7f0e300b5140 20 client.18814183 populate_metadata read hostname 'tardis'
2018-08-17 14:34:54.966824 7f0e300b5140 10 client.18814183 did not get mds through better means, so chose random mds 0
2018-08-17 14:34:54.966826 7f0e300b5140 20 client.18814183 mds is 0
2018-08-17 14:34:54.966828 7f0e300b5140 10 client.18814183 _open_mds_session mds.0
2018-08-17 14:34:54.966858 7f0e300b5140 10 client.18814183 waiting for session to mds.0 to open
2018-08-17 14:34:54.969991 7f0e298a6700 10 client.18814183 ms_handle_connect on 10.128.198.59:6800/2643422990
2018-08-17 14:34:55.033974 7f0e298a6700 10 client.18814183 handle_client_session client_session(open) v1 from mds.0
2018-08-17 14:34:55.034030 7f0e298a6700 10 client.18814183 renew_caps mds.0
2018-08-17 14:34:55.034196 7f0e298a6700 10 client.18814183 connect_mds_targets for mds.0
2018-08-17 14:34:55.034269 7f0e300b5140 10 client.18814183 did not get mds through better means, so chose random mds 0
2018-08-17 14:34:55.034276 7f0e300b5140 20 client.18814183 mds is 0
2018-08-17 14:34:55.034280 7f0e300b5140 10 client.18814183 send_request rebuilding request 1 for mds.0
2018-08-17 14:34:55.034285 7f0e300b5140 20 client.18814183 encode_cap_releases enter (req: 0x556a3ee79200, mds: 0)
2018-08-17 14:34:55.034287 7f0e300b5140 20 client.18814183 send_request set sent_stamp to 2018-08-17 14:34:55.034287
2018-08-17 14:34:55.034291 7f0e300b5140 10 client.18814183 send_request client_request(unknown.0:1 getattr pAsLsXsFs #0x1/backups/winbak 2018-08-17 14:34:54.966812 caller_uid=0, caller_gid=0{}) v4 to mds.0
2018-08-17 14:34:55.034331 7f0e300b5140 20 client.18814183 awaiting reply|forward|kick on 0x7ffd3c6a1970
2018-08-17 14:34:55.035114 7f0e298a6700 10 client.18814183 handle_client_session client_session(renewcaps seq 1) v1 from mds.0
2018-08-17 14:34:55.035612 7f0e298a6700 20 client.18814183 handle_client_reply got a reply. Safe:1 tid 1
2018-08-17 14:34:55.035664 7f0e298a6700 10 client.18814183 insert_trace from 2018-08-17 14:34:55.034287 mds.0 is_target=1 is_dentry=0
2018-08-17 14:34:55.035707 7f0e298a6700 10 client.18814183 features 0x3ffddff8eea4fffb
2018-08-17 14:34:55.035744 7f0e298a6700 10 client.18814183 update_snap_trace len 48
2018-08-17 14:34:55.035783 7f0e298a6700 20 client.18814183 get_snap_realm 0x1 0x556a3ee5ea90 0 -> 1
2018-08-17 14:34:55.035857 7f0e298a6700 10 client.18814183 update_snap_trace snaprealm(0x1 nref=1 c=0 seq=0 parent=0x0 my_snaps=[] cached_snapc=0=[]) seq 1 > 0
2018-08-17 14:34:55.035901 7f0e298a6700 10 client.18814183 invalidate_snaprealm_and_children snaprealm(0x1 nref=2 c=0 seq=1 parent=0x0 my_snaps=[] cached_snapc=0=[])
2018-08-17 14:34:55.035962 7f0e298a6700 15 client.18814183 update_snap_trace snaprealm(0x1 nref=2 c=0 seq=1 parent=0x0 my_snaps=[] cached_snapc=0=[]) self|parent updated
2018-08-17 14:34:55.036004 7f0e298a6700 15 client.18814183   snapc 1=[]
2018-08-17 14:34:55.036041 7f0e298a6700 10 client.18814183 no new snap on snaprealm(0x1 nref=2 c=0 seq=1 parent=0x0 my_snaps=[] cached_snapc=1=[])
2018-08-17 14:34:55.036079 7f0e298a6700 20 client.18814183 put_snap_realm 0x1 0x556a3ee5ea90 2 -> 1
2018-08-17 14:34:55.036117 7f0e298a6700 10 client.18814183 hrm is_target=1 is_dentry=0
2018-08-17 14:34:55.036182 7f0e298a6700 15 inode.get on 0x556a3f128000 0x100002291c4.head now 1
2018-08-17 14:34:55.036224 7f0e298a6700 12 client.18814183 add_update_inode adding 0x100002291c4.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40000 size=0/0 nlink=0 mtime=0.000000 caps=- 0x556a3f128000) caps pAsLsXsFs
2018-08-17 14:34:55.036285 7f0e298a6700 20 client.18814183  dir hash is 2
2018-08-17 14:34:55.036324 7f0e298a6700 10 client.18814183 update_inode_file_bits 0x100002291c4.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=0.000000 caps=- quota(max_bytes = 1000000 max_files = 0) 0x556a3f128000) - mtime 2018-08-15 14:09:02.547890
2018-08-17 14:34:55.036384 7f0e298a6700 20 client.18814183 get_snap_realm 0x1 0x556a3ee5ea90 1 -> 2
2018-08-17 14:34:55.036423 7f0e298a6700 15 client.18814183 add_update_cap first one, opened snaprealm 0x556a3ee5ea90
2018-08-17 14:34:55.036462 7f0e298a6700 10 client.18814183 add_update_cap issued - -> pAsLsXsFs from mds.0 on 0x100002291c4.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=2018-08-15 14:09:02.547890 caps=pAsLsXsFs(0=pAsLsXsFs) quota(max_bytes = 1000000 max_files = 0) 0x556a3f128000)
2018-08-17 14:34:55.036512 7f0e298a6700 15 inode.get on 0x556a3f128000 0x100002291c4.head now 2
2018-08-17 14:34:55.036550 7f0e298a6700 20 client.18814183 put_snap_realm 0x1 0x556a3ee5ea90 2 -> 1
2018-08-17 14:34:55.036587 7f0e298a6700 15 inode.get on 0x556a3f128000 0x100002291c4.head now 3
2018-08-17 14:34:55.036625 7f0e298a6700 20 client.18814183 handle_client_reply signalling caller 0x7ffd3c6a1970
2018-08-17 14:34:55.036668 7f0e298a6700 20 client.18814183 handle_client_reply awaiting kickback on tid 1 0x7f0e298a4c10
2018-08-17 14:34:55.036710 7f0e300b5140 20 client.18814183 sendrecv kickback on tid 1 0x7f0e298a4c10
2018-08-17 14:34:55.036717 7f0e300b5140 20 client.18814183 lat 0.002429
2018-08-17 14:34:55.036728 7f0e300b5140 10 client.18814183 did not get mds through better means, so chose random mds 0
2018-08-17 14:34:55.036731 7f0e300b5140 20 client.18814183 mds is 0
2018-08-17 14:34:55.036733 7f0e300b5140 10 client.18814183 send_request rebuilding request 2 for mds.0
2018-08-17 14:34:55.036735 7f0e300b5140 20 client.18814183 encode_cap_releases enter (req: 0x556a3ee79500, mds: 0)
2018-08-17 14:34:55.036737 7f0e300b5140 20 client.18814183 send_request set sent_stamp to 2018-08-17 14:34:55.036737
2018-08-17 14:34:55.036741 7f0e300b5140 10 client.18814183 send_request client_request(unknown.0:2 getattr pAsLsXsFs #0x1/backups 2018-08-17 14:34:55.036723 caller_uid=0, caller_gid=0{}) v4 to mds.0
2018-08-17 14:34:55.036766 7f0e300b5140 20 client.18814183 awaiting reply|forward|kick on 0x7ffd3c6a1970
2018-08-17 14:34:55.037241 7f0e298a6700 10 client.18814183 put_inode on 0x100002291c4.head(faked_ino=0 ref=3 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=2018-08-15 14:09:02.547890 caps=pAsLsXsFs(0=pAsLsXsFs) quota(max_bytes = 1000000 max_files = 0) 0x556a3f128000)
2018-08-17 14:34:55.037302 7f0e298a6700 15 inode.put on 0x556a3f128000 0x100002291c4.head now 2
2018-08-17 14:34:55.037342 7f0e298a6700 10 client.18814183 put_inode on 0x100002291c4.head(faked_ino=0 ref=2 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=2018-08-15 14:09:02.547890 caps=pAsLsXsFs(0=pAsLsXsFs) quota(max_bytes = 1000000 max_files = 0) 0x556a3f128000)
2018-08-17 14:34:55.037387 7f0e298a6700 15 inode.put on 0x556a3f128000 0x100002291c4.head now 1
2018-08-17 14:34:55.037945 7f0e298a6700 20 client.18814183 handle_client_reply got a reply. Safe:1 tid 2
2018-08-17 14:34:55.037988 7f0e298a6700 10 client.18814183 insert_trace from 2018-08-17 14:34:55.036737 mds.0 is_target=1 is_dentry=0
2018-08-17 14:34:55.038032 7f0e298a6700 10 client.18814183 features 0x3ffddff8eea4fffb
2018-08-17 14:34:55.038068 7f0e298a6700 10 client.18814183 update_snap_trace len 48
2018-08-17 14:34:55.038106 7f0e298a6700 20 client.18814183 get_snap_realm 0x1 0x556a3ee5ea90 1 -> 2
2018-08-17 14:34:55.038143 7f0e298a6700 10 client.18814183 update_snap_trace snaprealm(0x1 nref=2 c=0 seq=1 parent=0x0 my_snaps=[] cached_snapc=1=[]) seq 1 <= 1 and same parent, SKIPPING
2018-08-17 14:34:55.038183 7f0e298a6700 10 client.18814183 hrm is_target=1 is_dentry=0
2018-08-17 14:34:55.038226 7f0e298a6700 15 inode.get on 0x556a3f128600 0x1000021a21a.head now 1
2018-08-17 14:34:55.038277 7f0e298a6700 12 client.18814183 add_update_inode adding 0x1000021a21a.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40000 size=0/0 nlink=0 mtime=0.000000 caps=- 0x556a3f128600) caps pAsLsXsFs
2018-08-17 14:34:55.038323 7f0e298a6700 20 client.18814183  dir hash is 2
2018-08-17 14:34:55.038360 7f0e298a6700 10 client.18814183 update_inode_file_bits 0x1000021a21a.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=0.000000 caps=- 0x556a3f128600) - mtime 2018-08-10 15:38:21.499518
2018-08-17 14:34:55.038407 7f0e298a6700 20 client.18814183 get_snap_realm 0x1 0x556a3ee5ea90 2 -> 3
2018-08-17 14:34:55.038445 7f0e298a6700 15 client.18814183 add_update_cap first one, opened snaprealm 0x556a3ee5ea90
2018-08-17 14:34:55.038483 7f0e298a6700 10 client.18814183 add_update_cap issued - -> pAsLsXsFs from mds.0 on 0x1000021a21a.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=2018-08-10 15:38:21.499518 caps=pAsLsXsFs(0=pAsLsXsFs) 0x556a3f128600)
2018-08-17 14:34:55.038530 7f0e298a6700 15 inode.get on 0x556a3f128600 0x1000021a21a.head now 2
2018-08-17 14:34:55.038567 7f0e298a6700 20 client.18814183 put_snap_realm 0x1 0x556a3ee5ea90 3 -> 2
2018-08-17 14:34:55.038604 7f0e298a6700 15 inode.get on 0x556a3f128600 0x1000021a21a.head now 3
2018-08-17 14:34:55.038642 7f0e298a6700 20 client.18814183 handle_client_reply signalling caller 0x7ffd3c6a1970
2018-08-17 14:34:55.038686 7f0e298a6700 20 client.18814183 handle_client_reply awaiting kickback on tid 2 0x7f0e298a4c10
2018-08-17 14:34:55.038728 7f0e300b5140 20 client.18814183 sendrecv kickback on tid 2 0x7f0e298a4c10
2018-08-17 14:34:55.038734 7f0e300b5140 20 client.18814183 lat 0.001997
2018-08-17 14:34:55.038743 7f0e300b5140 10 client.18814183 did not get mds through better means, so chose random mds 0
2018-08-17 14:34:55.038747 7f0e300b5140 20 client.18814183 mds is 0
2018-08-17 14:34:55.038749 7f0e300b5140 10 client.18814183 send_request rebuilding request 3 for mds.0
2018-08-17 14:34:55.038751 7f0e300b5140 20 client.18814183 encode_cap_releases enter (req: 0x556a3ee79800, mds: 0)
2018-08-17 14:34:55.038753 7f0e300b5140 20 client.18814183 send_request set sent_stamp to 2018-08-17 14:34:55.038752
2018-08-17 14:34:55.038756 7f0e300b5140 10 client.18814183 send_request client_request(unknown.0:3 getattr pAsLsXsFs #0x1 2018-08-17 14:34:55.038739 caller_uid=0, caller_gid=0{}) v4 to mds.0
2018-08-17 14:34:55.038782 7f0e300b5140 20 client.18814183 awaiting reply|forward|kick on 0x7ffd3c6a1970
2018-08-17 14:34:55.039064 7f0e298a6700 10 client.18814183 put_inode on 0x1000021a21a.head(faked_ino=0 ref=3 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=2018-08-10 15:38:21.499518 caps=pAsLsXsFs(0=pAsLsXsFs) 0x556a3f128600)
2018-08-17 14:34:55.039114 7f0e298a6700 15 inode.put on 0x556a3f128600 0x1000021a21a.head now 2
2018-08-17 14:34:55.039153 7f0e298a6700 10 client.18814183 put_inode on 0x1000021a21a.head(faked_ino=0 ref=2 ll_ref=0 cap_refs={} open={} mode=40555 size=0/0 nlink=1 mtime=2018-08-10 15:38:21.499518 caps=pAsLsXsFs(0=pAsLsXsFs) 0x556a3f128600)
2018-08-17 14:34:55.039198 7f0e298a6700 15 inode.put on 0x556a3f128600 0x1000021a21a.head now 1
2018-08-17 14:34:55.039890 7f0e298a6700 20 client.18814183 handle_client_reply got a reply. Safe:1 tid 3
2018-08-17 14:34:55.039954 7f0e298a6700 10 client.18814183 insert_trace from 2018-08-17 14:34:55.038752 mds.0 is_target=1 is_dentry=0
2018-08-17 14:34:55.040004 7f0e298a6700 10 client.18814183 features 0x3ffddff8eea4fffb
2018-08-17 14:34:55.040041 7f0e298a6700 10 client.18814183 update_snap_trace len 48
2018-08-17 14:34:55.040082 7f0e298a6700 20 client.18814183 get_snap_realm 0x1 0x556a3ee5ea90 2 -> 3
2018-08-17 14:34:55.040120 7f0e298a6700 10 client.18814183 update_snap_trace snaprealm(0x1 nref=3 c=0 seq=1 parent=0x0 my_snaps=[] cached_snapc=1=[]) seq 1 <= 1 and same parent, SKIPPING
2018-08-17 14:34:55.040159 7f0e298a6700 10 client.18814183 hrm is_target=1 is_dentry=0
2018-08-17 14:34:55.040211 7f0e298a6700 15 inode.get on 0x556a3f128c00 0x1.head now 1
2018-08-17 14:34:55.040251 7f0e298a6700 12 client.18814183 add_update_inode adding 0x1.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40000 size=0/0 nlink=0 mtime=0.000000 caps=- 0x556a3f128c00) caps pAsLsXsFs
2018-08-17 14:34:55.040294 7f0e298a6700 20 client.18814183  dir hash is 2
2018-08-17 14:34:55.040331 7f0e298a6700 10 client.18814183 update_inode_file_bits 0x1.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40755 size=0/0 nlink=1 mtime=0.000000 caps=- has_dir_layout 0x556a3f128c00) - mtime 2018-05-16 08:33:31.388505
2018-08-17 14:34:55.040376 7f0e298a6700 20 client.18814183 get_snap_realm 0x1 0x556a3ee5ea90 3 -> 4
2018-08-17 14:34:55.040414 7f0e298a6700 15 client.18814183 add_update_cap first one, opened snaprealm 0x556a3ee5ea90
2018-08-17 14:34:55.040452 7f0e298a6700 10 client.18814183 add_update_cap issued - -> pAsLsXsFs from mds.0 on 0x1.head(faked_ino=0 ref=1 ll_ref=0 cap_refs={} open={} mode=40755 size=0/0 nlink=1 mtime=2018-05-16 08:33:31.388505 caps=pAsLsXsFs(0=pAsLsXsFs) has_dir_layout 0x556a3f128c00)
2018-08-17 14:34:55.040499 7f0e298a6700 15 inode.get on 0x556a3f128c00 0x1.head now 2
2018-08-17 14:34:55.040536 7f0e298a6700 20 client.18814183 put_snap_realm 0x1 0x556a3ee5ea90 4 -> 3
2018-08-17 14:34:55.040573 7f0e298a6700 15 inode.get on 0x556a3f128c00 0x1.head now 3
2018-08-17 14:34:55.040610 7f0e298a6700 20 client.18814183 handle_client_reply signalling caller 0x7ffd3c6a1970
ceph-fuse[30502]: starting fuse
2018-08-17 14:34:55.040653 7f0e298a6700 20 client.18814183 handle_client_reply awaiting kickback on tid 3 0x7f0e298a4c10
2018-08-17 14:34:55.040664 7f0e300b5140 20 client.18814183 sendrecv kickback on tid 3 0x7f0e298a4c10
2018-08-17 14:34:55.040667 7f0e300b5140 20 client.18814183 lat 0.001914
2018-08-17 14:34:55.040670 7f0e300b5140 15 inode.get on 0x556a3f128000 0x100002291c4.head now 2
2018-08-17 14:34:55.040672 7f0e300b5140 20 client.18814183 _ll_get 0x556a3f128000 0x100002291c4 -> 1
2018-08-17 14:34:55.041035 7f0e300b5140 10 client.18814183 ll_register_callbacks cb 0x556a3ee82c80 invalidate_ino_cb 1 invalidate_dentry_cb 1 switch_interrupt_cb 1 remount_cb 1
2018-08-17 14:34:55.048403 7f0e298a6700 10 client.18814183 put_inode on 0x1.head(faked_ino=0 ref=3 ll_ref=0 cap_refs={} open={} mode=40755 size=0/0 nlink=1 mtime=2018-05-16 08:33:31.388505 caps=pAsLsXsFs(0=pAsLsXsFs) has_dir_layout 0x556a3f128c00)
2018-08-17 14:34:55.048421 7f0e298a6700 15 inode.put on 0x556a3f128c00 0x1.head now 2
2018-08-17 14:34:55.048424 7f0e298a6700 10 client.18814183 put_inode on 0x1.head(faked_ino=0 ref=2 ll_ref=0 cap_refs={} open={} mode=40755 size=0/0 nlink=1 mtime=2018-05-16 08:33:31.388505 caps=pAsLsXsFs(0=pAsLsXsFs) has_dir_layout 0x556a3f128c00)
2018-08-17 14:34:55.048429 7f0e298a6700 15 inode.put on 0x556a3f128c00 0x1.head now 1
2018-08-17 14:34:55.051029 7f0e2589e700  1 client.18814183 using remount_cb
2018-08-17 14:34:55.055053 7f0e2509d700 3 client.18814183 ll_getattr 0x100002291c4.head
2018-08-17 14:34:55.055070 7f0e2509d700 10 client.18814183 _getattr mask pAsLsXsFs issued=1
2018-08-17 14:34:55.055074 7f0e2509d700 10 client.18814183 fill_stat on 0x100002291c4 snap/devhead mode 040555 mtime 2018-08-15 14:09:02.547890 ctime 2018-08-17 14:28:09.654639
2018-08-17 14:34:55.055089 7f0e2509d700 3 client.18814183 ll_getattr 0x100002291c4.head = 0
2018-08-17 14:34:55.055100 7f0e2509d700 3 client.18814183 ll_forget 0x100002291c4 1
2018-08-17 14:34:55.055102 7f0e2509d700 20 client.18814183 _ll_put 0x556a3f128000 0x100002291c4 1 -> 1
2018-08-17 14:34:55.965416 7f0e2a8a8700 10 client.18814183 renew_caps()
2018-08-17 14:34:55.965432 7f0e2a8a8700 15 client.18814183 renew_caps requesting from mds.0
2018-08-17 14:34:55.965436 7f0e2a8a8700 10 client.18814183 renew_caps mds.0
2018-08-17 14:34:55.965504 7f0e2a8a8700 20 client.18814183 trim_cache size 0 max 16384
2018-08-17 14:34:55.967114 7f0e298a6700 10 client.18814183 handle_client_session client_session(renewcaps seq 2) v1 from mds.0
ceph-fuse[30502]: fuse finished with error 0 and tester_r 0
*** Caught signal (Segmentation fault) **
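
(Side note, not part of the run above: if the segfault keeps happening I can try to grab a backtrace. Roughly, assuming core files land in the working directory rather than going to systemd-coredump, and that the binary lives at /usr/bin/ceph-fuse:

ulimit -c unlimited
ceph-fuse -f /srv/smb/winbak       # -f keeps it in the foreground; Ctrl-C to trigger the crash
gdb /usr/bin/ceph-fuse core        # then 'bt' at the gdb prompt

The binary path and the plain 'core' filename are only examples.)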





On 07/09/2018 08:48 AM, John Spray wrote:
On Fri, Jul 6, 2018 at 6:30 PM Chad William Seys
<cwseys@xxxxxxxxxxxxxxxx> wrote:

Hi all,
    I'm having a problem: when I mount CephFS with a quota on the
root of the mount point, no ceph-fuse entry appears in 'mount', and df reports:

Filesystem     1K-blocks  Used Available Use% Mounted on
ceph-fuse              0     0         0    - /srv/smb

If I 'ls' I see the expected files:
# ls -alh
total 6.0K
drwxrwxr-x+ 1 root     smbadmin  18G Jul  5 17:06 .
drwxr-xr-x  5 root     smbadmin 4.0K Jun 16  2017 ..
drwxrwx---+ 1 smbadmin smbadmin 3.0G Jan 18 10:50 bigfix-relay-cache
drwxrwxr-x+ 1 smbadmin smbadmin  15G Jul  6 11:51 instr_files
drwxrwx---+ 1 smbadmin smbadmin    0 Jul  6 11:50 mcdermott-group

Quotas are being used:
getfattr --only-values -n ceph.quota.max_bytes /srv/smb
10000

Turning off the quota at the mountpoint allows df and mount to work
correctly.
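
(For completeness: the quota is managed by setting the ceph.quota.max_bytes xattr on the directory, and setting it back to 0 is what "turning off the quota" means here. Using the same path as the getfattr example above, that is roughly:

setfattr -n ceph.quota.max_bytes -v 10000 /srv/smb    # set the quota
setfattr -n ceph.quota.max_bytes -v 0 /srv/smb        # remove the quota
getfattr --only-values -n ceph.quota.max_bytes /srv/smb

The values just mirror the ones in this report.)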

I'm running 12.2.4 on the servers and 12.2.5 on the client.

That's pretty weird, not something I recall seeing before.  When
quotas are in use, Ceph is implementing the same statfs() hook to
report usage to the OS, but it's doing a getattr() call to the MDS
inside that function.  I wonder if something is going slowly, and
perhaps the OS is ignoring filesystems that don't return promptly, to
avoid hanging "df" on a misbehaving filesystem?
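
(A quick way to check that from the outside is to trace the statfs calls df makes against the mount; a rough sketch, not from the original report:

strace -e trace=statfs,fstatfs df -h /srv/smb

If the statfs on /srv/smb blocks, errors out, or returns zero total blocks, it should show up right in the trace. 'df -a' is also worth trying, since coreutils df normally hides filesystems that report 0 total blocks.)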

I'd debug this by setting "debug ms = 1", and finding the client's log
in /var/log/ceph.
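
(On the ceph-fuse command line the equivalent should be something like the following; I believe the dashed option spellings are accepted, and the ceph.conf form John describes ("debug ms = 1" in the [client] section) works as well:

ceph-fuse /srv/smb --debug-ms=1 --debug-client=20 --log-file=/var/log/ceph/ceph-fuse.log

The log file path is only an example; by default the client log ends up under /var/log/ceph.)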

John


Is there a bug report for this?
Thanks!
Chad.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


