Re: ls/file access hangs on a single ceph directory

On Thu, Oct 24, 2013 at 5:43 PM, Michael <michael@xxxxxxxxxxxxxxxxxx> wrote:
> On 24/10/2013 03:09, Yan, Zheng wrote:
>>
>> On Thu, Oct 24, 2013 at 6:44 AM, Michael <michael@xxxxxxxxxxxxxxxxxx>
>> wrote:
>>>
>>> Trying to gather some more info.
>>>
>>> CentOS - hanging ls
>>> [root@srv ~]# cat /proc/14614/stack
>>> [<ffffffffa02d3e81>] wait_answer_interruptible+0x81/0xc0 [fuse]
>>> [<ffffffffa02d415b>] fuse_request_send+0x1cb/0x290 [fuse]
>>> [<ffffffffa02d652c>] fuse_do_getattr+0x10c/0x2c0 [fuse]
>>> [<ffffffffa02d6755>] fuse_update_attributes+0x75/0x80 [fuse]
>>> [<ffffffffa02d67b3>] fuse_getattr+0x53/0x60 [fuse]
>>> [<ffffffff81186d51>] vfs_getattr+0x51/0x80
>>> [<ffffffff81186de0>] vfs_fstatat+0x60/0x80
>>> [<ffffffff81186f2b>] vfs_stat+0x1b/0x20
>>> [<ffffffff81186f54>] sys_newstat+0x24/0x50
>>> [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
>>> [<ffffffffffffffff>] 0xffffffffffffffff
>>>
>>> Ubuntu - hanging ls
>>> root@srv:~# cat /proc/30012/stack
>>> [<ffffffffa061d04b>] ceph_mdsc_do_request+0xcb/0x1a0 [ceph]
>>> [<ffffffffa0608f37>] ceph_do_getattr+0xe7/0x120 [ceph]
>>> [<ffffffffa0608f94>] ceph_getattr+0x24/0x100 [ceph]
>>> [<ffffffff8118d42e>] vfs_getattr+0x4e/0x80
>>> [<ffffffff8118d4ae>] vfs_fstatat+0x4e/0x70
>>> [<ffffffff8118d4ee>] vfs_lstat+0x1e/0x20
>>> [<ffffffff8118d68a>] sys_newlstat+0x1a/0x40
>>> [<ffffffff816a6ba9>] system_call_fastpath+0x16/0x1b
>>> [<ffffffffffffffff>] 0xffffffffffffffff
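
(Both traces show the client blocked waiting on a getattr reply from the
MDS. To confirm there is an outstanding MDS request: the kernel client
exposes a debugfs file for this, and ceph-fuse has an admin-socket command
if it is enabled in this build; the socket path below is the usual default,
not verified here.)

cat /sys/kernel/debug/ceph/*/mdsc   # kernel client: in-flight MDS requests (debugfs must be mounted)
ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok mds_requests   # ceph-fuse equivalent
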
>>>
>>> This started occurring shortly (within an hour or so) after adding a
>>> pool; not sure yet whether that's relevant.
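
(To help rule the new pool in or out: on this release a pool is created
roughly as below; the pool name and PG count are illustrative. A new pool
only touches CephFS if it was also registered as an MDS data pool.)

ceph osd pool create newpool 128   # illustrative name and pg_num
ceph mds add_data_pool newpool     # only if the pool is meant for CephFS data
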
>>>
>>> -Michael
>>>
>>> On 23/10/2013 21:10, Michael wrote:
>>>>
>>>> I have a filesystem shared by several systems, mounted on 2 Ceph nodes
>>>> with a 3rd node acting as a reference monitor.
>>>> It's been in use for a couple of months, but suddenly the root directory
>>>> of the mount has become inaccessible and requests for files in it just
>>>> hang. There are no Ceph errors reported before or after, and
>>>> subdirectories of that directory can still be used (and currently are,
>>>> by VMs still running from it). It's mounted in a mixed environment: the
>>>> kernel driver on Ubuntu and ceph-fuse on CentOS.
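
(For reference, the two mount methods in play look roughly like this; the
monitor host, mount point, and secret file are placeholders.)

mount -t ceph mon-host:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret   # kernel client
ceph-fuse -m mon-host:6789 /mnt/ceph   # FUSE client
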
>>
>> Which kernel, ceph-fuse, and ceph-mds versions? The hang was likely
>> caused by a known bug in kernel 3.10.
>>
>> Regards
>> Yan, Zheng
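
(A quick way to collect those versions: run the first two on each client
and the last on the MDS host.)

uname -r              # client kernel version
ceph-fuse --version   # fuse client version
ceph --version        # on the MDS host: installed ceph/ceph-mds version
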
>
>
> CentOS 6.4
> 2.6.32-358.23.2.el6.x86_64
> ceph.x86_64                          0.67.4-0.el6
> ceph-fuse.x86_64                     0.67.4-0.el6
>
> Ubuntu 12.04
> 3.5.0-41-generic
> Ceph Version: 0.67.2-1precise

The 3.5 kernel is too old for CephFS; please use ceph-fuse instead.

Yan, Zheng
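
(A minimal sketch of that switch on the Ubuntu boxes; the mount point and
monitor address are placeholders.)

umount /mnt/ceph                       # add -l if the hung mount won't release
ceph-fuse -m mon-host:6789 /mnt/ceph   # remount through FUSE
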

>
> ... So it looks like I've let my Ceph versions get out of sync. The MDS is
> on an Ubuntu box and all of the OSDs are on Ubuntu boxes too; the CentOS
> box just has another MON on it. I think I really should drag myself away
> from CentOS entirely for Ceph. I was previously using fuse on the Ubuntu
> boxes as well, though that changed a few days ago. (I'm currently working
> through Ceph's features; next up was to hook Ceph's RBD up to OpenNebula,
> hence the additional pool.)
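
(For that RBD step, the basic image commands on this release are along
these lines; pool and image names are placeholders, and --size is in MB.)

rbd create vm-disk-1 --pool newpool --size 10240   # 10 GB image
rbd ls newpool                                     # verify
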
>
> -Michael
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



