Re: ceph-fuse crash

On Thu, Oct 15, 2015 at 10:41 PM, 黑铁柱 <kangqi1988@xxxxxxxxx> wrote:
>
> cluster info:
>    cluster b23b48bf-373a-489c-821a-31b60b5b5af0
>      health HEALTH_OK
>      monmap e1: 3 mons at {node1=192.168.0.207:6789/0,node2=192.168.0.208:6789/0,node3=192.168.0.209:6789/0}, election epoch 24, quorum 0,1,2 node1,node2,node3
>      mdsmap e42: 2/2/1 up {0=0=up:active,1=1=up:active}, 1 up:standby
>      osdmap e474: 33 osds: 33 up, 33 in
>       pgmap v206523: 3200 pgs, 3 pools, 73443 MB data, 1505 kobjects
>             330 GB used, 8882 GB / 9212 GB avail
>                 3200 active+clean
>
>
>
> ceph-client log:
> 2015-10-16 03:01:33.396095 7f63b1ffb700 -1 ./include/xlist.h: In function 'xlist<T>::~xlist() [with T = ObjectCacher::Object*]' thread 7f63b1ffb700 time 2015-10-16 03:01:33.336379
> ./include/xlist.h: 69: FAILED assert(_size == 0)
>
>  ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
>  1: ceph-fuse() [0x58fe48]
>  2: (Client::put_inode(Inode*, int)+0x3a6) [0x537f36]
>  3: (Client::_ll_put(Inode*, int)+0xa6) [0x539f86]
>  4: (Client::ll_forget(Inode*, int)+0x3ae) [0x53a84e]
>  5: ceph-fuse() [0x5275e7]
>  6: (()+0x16beb) [0x7f64e2801beb]
>  7: (()+0x13481) [0x7f64e27fe481]
>  8: (()+0x7df3) [0x7f64e2052df3]
>  9: (clone()+0x6d) [0x7f64e0f413dd]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>
> --- begin dump of recent events ---
> -10000> 2015-10-16 03:01:33.100899 7f63c97fa700  3 client.6429 ll_lookup 0x7f63c71faf20 D -> 0 (100013a8e0f)
>  -9999> 2015-10-16 03:01:33.100905 7f63c97fa700  3 client.6429 ll_forget 100013a8df3 1
>  -9998> 2015-10-16 03:01:33.100910 7f63c97fa700  3 client.6429 ll_getattr 100013a8e0f.head
>  -9997> 2015-10-16 03:01:33.100913 7f63c97fa700  3 client.6429 ll_getattr 100013a8e0f.head = 0
>  -9996> 2015-10-16 03:01:33.100916 7f63c97fa700  3 client.6429 ll_forget 100013a8e0f 1
>  -9995> 2015-10-16 03:01:33.100921 7f63c97fa700  3 client.6429 ll_lookup 0x7f64cd8fcd70 doss_web_rep
>  -9994> 2015-10-16 03:01:33.100924 7f63c97fa700  3 client.6429 ll_lookup 0x7f64cd8fcd70 doss_web_rep -> 0 (100013a8e10)
>  -9993> 2015-10-16 03:01:33.100928 7f63c97fa700  3 client.6429 ll_forget 100013a8e0f 1
>  -9992> 2015-10-16 03:01:33.100944 7f63d19ed700  3 client.6429 ll_getattr 100013a8e10.head
>  -9991> 2015-10-16 03:01:33.100949 7f63d19ed700  3 client.6429 ll_getattr 100013a8e10.head = 0
>  -9990> 2015-10-16 03:01:33.100955 7f63d19ed700  3 client.6429 ll_forget 100013a8e10 1
>  -9989> 2015-10-16 03:01:33.100960 7f63d19ed700  3 client.6429 ll_lookup 0x7f64cddee1c0 1051_SPOA3_proj
>  -9988> 2015-10-16 03:01:33.100964 7f63d19ed700  3 client.6429 ll_lookup 0x7f64cddee1c0 1051_SPOA3_proj -> 0 (20000153d64)
>  -9987> 2015-10-16 03:01:33.100969 7f63d19ed700  3 client.6429 ll_forget 100013a8e10 1
>  -9986> 2015-10-16 03:01:33.100974 7f63d19ed700  3 client.6429 ll_getattr 20000153d64.head
>  -9985> 2015-10-16 03:01:33.100979 7f63d19ed700  3 client.6429 ll_getattr 20000153d64.head = 0
>  -9984> 2015-10-16 03:01:33.100983 7f63d19ed700  3 client.6429 ll_forget 20000153d64 1
>  -9983> 2015-10-16 03:01:33.100987 7f63d19ed700  3 client.6429 ll_lookup 0x7f63c62a2a80 tags
>  -9982> 2015-10-16 03:01:33.100989 7f63d19ed700  3 client.6429 ll_lookup 0x7f63c62a2a80 tags -> 0 (20000153d7d)
>  -9981> 2015-10-16 03:01:33.100994 7f63d19ed700  3 client.6429 ll_forget 20000153d64 1
>  -9980> 2015-10-16 03:01:33.100999 7f63d19ed700  3 client.6429 ll_getattr 20000153d7d.head
>  -9979> 2015-10-16 03:01:33.101002 7f63d19ed700  3 client.6429 ll_getattr 20000153d7d.head = 0
>  -9978> 2015-10-16 03:01:33.101006 7f63d19ed700  3 client.6429 ll_forget 20000153d7d 1
>  -9977> 2015-10-16 03:01:33.101011 7f63d19ed700  3 client.6429 ll_lookup 0x7f63c4a9b5a0 v20101206
>  -9976> 2015-10-16 03:01:33.101015 7f63d19ed700  3 client.6429 ll_lookup 0x7f63c4a9b5a0 v20101206 -> 0 (20000153d7f)
>
>
> I always have this problem. How do I solve it?
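
For context: the assert that fired lives in the destructor of Ceph's intrusive xlist container (xlist.h:69). Items are expected to unlink themselves before the list is torn down, so a non-zero _size at that point means the inode being released in Client::put_inode() still had ObjectCacher::Object entries linked into it. A simplified sketch of the invariant, not the actual Ceph source:

    // Minimal sketch of the invariant behind "FAILED assert(_size == 0)".
    // xlist<T> is an intrusive list: items unlink themselves when they
    // are destroyed, so the list must already be empty by the time its
    // own destructor runs.
    #include <cassert>
    #include <cstddef>

    template <typename T>
    struct intrusive_list_sketch {
      std::size_t _size = 0;
      void link(T*)   { ++_size; }  // the real xlist splices in an item node
      void unlink(T*) { --_size; }  // normally called as each item dies
      ~intrusive_list_sketch() {
        assert(_size == 0);         // aborts if anything is still linked
      }
    };

(To turn the raw backtrace addresses into symbols, run `objdump -rdS <executable>` on the ceph-fuse binary, as the log itself suggests.)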

I assume http://tracker.ceph.com/issues/13472 is yours, right? Can you
please upload your existing client log file to the tracker, and
reproduce this with "debug client = 20"?
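
For reference, that goes in the [client] section of ceph.conf on the machine running ceph-fuse; the log path below is just an example:

    [client]
        debug client = 20
        log file = /var/log/ceph/ceph-client.$name.log

Then remount, reproduce the crash, and attach the resulting log to the ticket.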

Is the entire cluster running 0.80.10, or just the client?
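
Running `ceph --version` on each node will tell you; for comparison, the backtrace above reported:

    ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
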
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



